I'm trying to get my head around an element of the switch vs. director argument. It's claimed that a director can guarantee 100 Mbps between any attached host and its SAN-based disk. If I had a fan-in ratio of 20 FC connections between hosts and a director and 12 between disks and the director, is the claim that all 20 hosts can demand 100 Mbps simultaneously?
If so, please explain how this is possible (with the 12:20 ratio in mind); if not, please enlighten me. Also, my understanding is that a switch employs looping algorithms to handle heavy traffic loads. Is this the case, and if so, how does this flow control differ from that in a standard hub?
Basically, any explanation on the actual flow of traffic through Directors, switches and hubs will be appreciated. Thanks in advance.
Great question(s) -- a juicy topic to be sure! Let's take the easier question first, about loops and switches. I'll have another response for the switch-specific questions.
Switches can implement technology that allows them to interact with loops. The standard for doing this is called "public loop." There can be only one active public loop switch connection for a loop, referred to as an FL-port. Additional switch ports can also be configured as FL-ports on the loop, but only as inactive standby connections. The FL-port on a switch works with any NL-port node on the loop to provide connectivity between fabric-resident nodes and loop-resident nodes. (NL-ports on loops can perform fabric primitives such as switch port login.) Loop nodes that are L-port only (commonly referred to as private loop) cannot communicate through the switch's FL-port because they cannot perform fabric primitives.
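The NL-port vs. L-port distinction above can be sketched as a simple filter -- this is an illustrative toy, not a real Fibre Channel stack, and all the names in it are invented for the example:

```python
# Illustrative sketch: which loop-attached nodes can reach the fabric
# through the switch's FL-port. Public-loop (NL) nodes can perform
# fabric primitives; private-loop (L-only) nodes cannot.

def fabric_reachable(nodes):
    """Return the names of nodes that can talk through the FL-port.

    `nodes` is a list of (name, port_type) pairs, where port_type is
    'NL' for public-loop-capable nodes or 'L' for private-loop nodes.
    """
    return [name for name, port_type in nodes if port_type == "NL"]

loop = [("host1", "NL"), ("disk1", "L"), ("disk2", "NL")]
print(fabric_reachable(loop))  # → ['host1', 'disk2']
```

In other words, the switch simply never sees the private-loop nodes as fabric-addressable endpoints.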
NL ports in loops are added to the switch's name service and address tables for the purposes of locating nodes and routing traffic.
FL-ports are defined as loop masters. For public-loop-enabled networks, the first thing checked for in the loop initialization procedure is the presence of an FL-port. This is one of the reasons why there can be only one public loop switch port on a loop.
The public loop function on a switch works much the same way that any other switch port does, but it has to accommodate loop arbitration. In the case of loop-to-fabric transfers, the sending node must arbitrate for control of the loop and then establish communications with fabric nodes using the various switch services and login steps. In the case of fabric-to-loop transfers, the switch must arbitrate for access to the loop on behalf of the sending node before the end-to-end node and process login can be established. Once that is done, data can begin to flow.
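The key property of arbitration is that the loop is a shared medium: only one winner holds it at a time, and everyone else waits until it is released. A minimal sketch of that idea (real FC-AL arbitration uses AL_PA priority; this toy just grants the loop first-come, first-served, and all names are invented):

```python
# Toy model of loop arbitration: one owner at a time. Not a real FC-AL
# implementation -- real arbitration is priority-based on AL_PA values.

class Loop:
    def __init__(self):
        self.owner = None  # None means the loop is free

    def arbitrate(self, node):
        """Grant the loop if it is free; otherwise the node must wait."""
        if self.owner is None:
            self.owner = node
            return True
        return False

    def release(self):
        """Owner relinquishes the loop so others can arbitrate."""
        self.owner = None

loop = Loop()
assert loop.arbitrate("switch_FL_port")    # switch wins the loop
assert not loop.arbitrate("disk_NL_port")  # everyone else waits
loop.release()
assert loop.arbitrate("disk_NL_port")      # now the disk can win
```

The login steps (fabric and process login) then happen inside the window where the winner holds the loop.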
The congestion caused by this in the switch is not a big issue as far as I know. Instead the problem is the potential bottleneck posed by having a single switch port used by multiple sessions. Let's say you have a SAN with 5 fabric-attached servers and 10 loop attached disk subsystems where all storage traffic is processed through the public loop port. There is obviously going to be a problem with this design. It would be much better to have 5 or more loops established to reduce contention for the FL port.
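The contention math behind that design advice is simple division. A back-of-the-envelope sketch, assuming a 1 Gb/s FC link (roughly 100 MB/s of usable bandwidth per port) -- the numbers are from the example above, not measured figures:

```python
# Worst-case bandwidth share per disk subsystem when all traffic
# funnels through public-loop FL-ports. Assumes ~100 MB/s usable
# per 1 Gb/s FC port (an assumption, not a guarantee).

FL_PORT_BW_MBPS = 100   # usable MB/s through one FL-port
disk_subsystems = 10

# Design 1: one shared public-loop port for all 10 subsystems.
per_subsystem_share = FL_PORT_BW_MBPS / disk_subsystems
print(per_subsystem_share)        # → 10.0  (MB/s each, worst case)

# Design 2: spread the same disks across 5 loops (2 subsystems each).
loops = 5
per_subsystem_spread = FL_PORT_BW_MBPS / (disk_subsystems / loops)
print(per_subsystem_spread)       # → 50.0  (MB/s each, worst case)
```

Five loops multiplies the aggregate loop-to-fabric bandwidth by five, which is why reducing contention for the FL-port matters so much.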
Fibre Channel's flow control works the same for both fabrics and loops. The most common class of service uses link-level flow control: when a sending node is out of credits, it cannot send more data into the network. So, if the credit allocation is done correctly for FL-ports on switches, there should not be problems with data flowing from fabrics into loops. However, the situation is quite a bit more complicated for traffic flowing from loops to fabrics. Head-of-line blocking for traffic moving from multiple loop nodes to fabrics could be nasty. It is possible that a loop node would win arbitration for the loop only to discover it cannot send data through the FL-port because the port's buffers are already full of another node's data that can't be delivered for some reason.
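The credit mechanism described above can be modeled in a few lines -- a sketch of buffer-to-buffer credit flow control, where each frame spends a credit and the receiver returns one (via R_RDY) when a buffer frees up. The class and method names here are invented for illustration:

```python
# Toy model of FC link-level (buffer-to-buffer) credit flow control.
# A sender may transmit only while it holds credits; this is what
# stops a loop node from pushing frames into a full FL-port.

class CreditedLink:
    def __init__(self, credits):
        self.credits = credits  # buffer-to-buffer credits granted

    def send_frame(self):
        """Send one frame if a credit is available; else sender waits."""
        if self.credits == 0:
            return False        # out of credits -- blocked
        self.credits -= 1
        return True

    def r_rdy(self):
        """Receiver signals a freed buffer, restoring one credit."""
        self.credits += 1

link = CreditedLink(credits=2)
assert link.send_frame()
assert link.send_frame()
assert not link.send_frame()  # blocked: buffers full, no credits left
link.r_rdy()                  # receiver drains a buffer
assert link.send_frame()      # sending resumes
```

Head-of-line blocking is exactly the case where the credits are exhausted by another node's stuck frames, so a fresh arbitration winner finds `send_frame()` returning False through no fault of its own.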
The other type of loop-to-fabric connection is called private loop, and the discussion above does not apply to it. Any number of methods can be used to accommodate loop traffic over fabrics -- as they like to say, it's a simple matter of programming. Private loop is vendor-specific technology, so you need to check on a vendor-by-vendor basis to find out how it works and identify where the bottlenecks would be.
Editor's note: For Part 2 of Marc's answer, go to http://www.searchStorage.com/ateQuestionNResponse/0,289625,sid5_cid400769_tax286191,00.html
Related Q&A from Marc Farley