1. All storage disk subsystems have a certain number of native FC ports. What is the right number of ports on the disk subsystem if I have eight servers attached to the storage via two 2Gbit/sec switches (four servers per switch)? Do I need to increase the number of native FC ports if I choose to attach more servers, or does it all depend on the amount of data traffic?
2. Is the Fibre Channel bandwidth shared amongst the ports?
3. In what terms are the FC switches benchmarked? Are the results available in the public domain?
1. Storage arrays come in many flavors. The modular solutions usually come with four Fibre Channel ports (two per controller), although there are some that have eight ports. The monolithic arrays, like the ones from HDS, EMC and IBM, can have up to 64 native Fibre Channel ports. The solution you choose will depend on your budget, availability requirements and connection needs. If you have a lot of hosts to connect, you can either use multiple modular arrays, or buy a big box and consolidate everything into it.
If you follow the general rules put forth by the switch vendors, your connection ratio (also called a fan-in ratio) should be around 7:1 for 1Gbit Fibre Channel and 14:1 for 2Gbit Fibre Channel. This, of course, is just a general guideline; it all depends on the performance requirements of the servers attached to the ports. If you are sharing one storage port among seven servers and want to run a backup job from all the servers at the same time, you will overextend the port and there will be contention for its resources. If your servers are nothing more than basic file/print servers and the client load is spread evenly among them, then you may be able to get away with more than seven hosts per port. The actual physical limit on Fibre Channel-based storage arrays is 128 nodes per port, although your applications' performance needs would have to be examined before even attempting that kind of connectivity.
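The fan-in arithmetic above is easy to sanity-check. Here is a minimal sketch (the function name and figures are illustrative, not from the original answer) that compares a configuration's ratio of server ports to storage ports against the vendor guideline:

```python
def fan_in_ratio(server_ports: int, storage_ports: int) -> float:
    """How many servers contend for each storage port."""
    return server_ports / storage_ports

# Eight servers (one HBA each) sharing a modular array with four storage ports:
ratio = fan_in_ratio(server_ports=8, storage_ports=4)
print(ratio)  # 2.0 -- comfortably under the ~14:1 guideline for 2Gbit FC
```

The point of the check is not the number itself but whether the busiest servers, all active at once, still fit within the guideline.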
The whole idea is to avoid "oversubscription" of your storage ports, and even your inter-switch links (ISLs) for that matter. Say you have two switches connected via an ISL. You have many high-performance servers on switch 1, but your storage ports are connected to switch 2. If you have only one ISL, it will be overutilized and therefore oversubscribed. One solution is to add storage ports to the same switch the hosts are connected to, or to use switch port "trunking." On Brocade switches, you can trunk up to four ports together, and the trunk will spread the I/O load across all four. This is different from simply connecting four separate ISLs between the switches, as the trunk provides better load balancing and transparent failover if one of the cables goes bad.
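As a rough sizing sketch for the ISL scenario, you can estimate how many 2Gbit links (or trunked ports) are needed so the hosts on switch 1 are not bottlenecked reaching storage on switch 2. The throughput figures here are assumptions for illustration, not vendor specifications:

```python
import math

FC_2G_MB_PER_SEC = 200  # assumed usable throughput of one 2Gbit FC link

def isls_needed(total_host_mb_per_sec: float,
                link_mb_per_sec: float = FC_2G_MB_PER_SEC) -> int:
    """Minimum number of ISLs to carry the aggregate host load."""
    return math.ceil(total_host_mb_per_sec / link_mb_per_sec)

# Four busy servers each pushing ~150 MB/s across to the storage switch:
print(isls_needed(4 * 150))  # 3 -- a single ISL would be oversubscribed 3:1
```

In practice you would size against peak load (backups, batch jobs), not the average, for exactly the contention reason described above.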
2. On a switch, each connection between ports is point to point, meaning each port has its own bandwidth. A switch's aggregate bandwidth is the per-port bandwidth multiplied by the number of ports: a 16-port, 1Gbit switch has an effective bandwidth of about 1.6GB per second. If you attach 15 servers and one storage port to the switch, though, all 15 servers will be contending for that single storage port, and the bandwidth to that port will be shared. Adding more storage ports and fewer servers to the switch fixes this problem.
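The aggregate-bandwidth figure works out as follows (the ~100 MB/s per 1Gbit port is an assumption reflecting usable throughput after encoding overhead, not a quoted spec):

```python
PORT_MB_PER_SEC = 100   # ~100 MB/s usable per 1Gbit FC port (assumption)
PORTS = 16              # port count of the switch in the example

aggregate_mb_per_sec = PORTS * PORT_MB_PER_SEC
print(aggregate_mb_per_sec / 1000)  # 1.6 -- i.e., ~1.6 GB/s across the switch
```

Note that this aggregate is only achievable when traffic is spread across many port pairs; funneling 15 hosts into one storage port collapses it back to a single port's worth of bandwidth.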
3. I don't know of a specific benchmark for the switches themselves; there are a lot of factors that come into play. You can usually bet, though, that a director-class switch with efficient firmware and a fast backplane will outperform a 16-port switch. Go to each vendor's Web site and get the specs on the individual switches, or test them out yourself if you can get to the SNIA lab out in Colorado.
Editor's note: Do you agree with this expert's response? If you have more to share, post it in one of our discussion forums.