Can you help us understand how the performance of iSCSI compares with that of Fibre Channel, all other things being equal? I understand there might still be some variation, but even a hypothetical example would help us assess one approach against the other.
Great question. Now that TCP/IP offload engines (TOEs) are becoming more available (and cheaper!), iSCSI is really starting to make headway. When iSCSI first came out, the technology was not adopted as quickly as the experts thought it would be. There was just too much investment in plain old Fibre Channel SANs. SANs work great, and no one saw a need to invest in a technology still in its infancy; no one wanted to be the "first on the block" with iSCSI. Now, though, iSCSI is generating interest for things like consolidated backup of remote offices, wide-area clustering, remote tape archiving and a cheap way to create an initial SAN using existing IP networks.

A few things to keep in mind. First, getting benefit from iSCSI today will usually require a gateway into an existing Fibre Channel SAN for connectivity to existing Fibre Channel-based storage arrays, although there are some iSCSI-based arrays out there now. Second, your network has to be capable of handling block-based SCSI traffic over IP. A flat 10BaseT network is not the way to go. I would recommend at least Gigabit Ethernet (GigE) connections from the servers that need access to storage resources, connecting through a gateway that is either storage-based (like a blade in the array) or external, through a multiprotocol switch. Most of the switch vendors now support multiple protocols.

The Fibre Channel protocol was built to be fast; iSCSI was built to be as fast as possible over IP. If you mix normal user IP traffic with iSCSI traffic, your users will not be happy campers, so iSCSI should be implemented on a dedicated switched IP network or standalone through iSCSI-capable Fibre Channel switches.

As for performance, 10 Gbit Ethernet is here, and used with iSCSI it makes for a very fast solution. You will need network adapters that offload all the TCP/IP processing to the adapter, of course, or CPU utilization will be very high on your servers. I have tested iSCSI using Nishan switches for stretching Microsoft clusters over distance.
It works like a charm, and performance is great over GigE. If you want details of some performance testing, Nishan has a white paper on iSCSI performance using Alacritech's TOE NIC; you can go to the link noted here for the paper. Here is an excerpt:

"Conclusions -- Wire-Speed Throughput for iSCSI: The test results show that bi-directional throughput with iSCSI can sustain line rates of over 219 megabytes per second using Alacritech's Gigabit Ethernet Server and Storage Accelerator and Nishan's IP Storage switches attached to Hitachi Freedom Storage(tm). This was efficiently done, with less than eight percent CPU utilization on the server equipped with Alacritech's Server and Storage Accelerator. Also demonstrated was the capability of Nishan's IP Storage switches to handle wire-speed conversion between iSCSI and Fibre Channel."

As you can see, iSCSI is getting better all the time, and it now makes for a viable solution either for building an all-iSCSI storage network or for integrating iSCSI technology with an existing SAN (thus saving the cost of new iSCSI storage).
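Those white paper numbers line up with simple wire-rate arithmetic. As a rough sanity check (my own illustrative estimate, assuming standard 1500-byte frames and typical Ethernet/IPv4/TCP header sizes; none of these overhead figures come from the paper), here is why 219 MB/s bidirectional is effectively wire speed for a Gigabit Ethernet link:

```python
# Back-of-the-envelope arithmetic (illustrative estimate, not a vendor
# benchmark): how much SCSI payload one GigE link can carry once the
# Ethernet/IP/TCP framing overhead is subtracted, and how the white
# paper's 219 MB/s bidirectional figure compares to that ceiling.

GIGE_MB_PER_S = 1000 / 8  # raw GigE line rate per direction: 125 MB/s

def payload_ceiling(mtu=1500):
    """Estimated payload throughput (MB/s) for one direction of one link."""
    # Typical per-frame overhead; exact numbers vary with your setup:
    eth_overhead = 14 + 4 + 8 + 12   # Ethernet header+FCS, preamble, gap
    ip_tcp_overhead = 20 + 20        # IPv4 + TCP headers, no options
    payload = mtu - ip_tcp_overhead  # iSCSI PDU bytes carried per frame
    wire_bytes = mtu + eth_overhead  # bytes the frame occupies on the wire
    return GIGE_MB_PER_S * payload / wire_bytes

one_way = payload_ceiling(1500)      # ~119 MB/s at a standard MTU
both_ways = 2 * one_way              # full-duplex ceiling, ~237 MB/s
reported = 219.0                     # MB/s bidirectional, from the paper

print(f"One-direction payload ceiling: {one_way:.0f} MB/s")
print(f"Bidirectional ceiling:         {both_ways:.0f} MB/s")
print(f"Reported 219 MB/s is {reported / both_ways:.0%} of that ceiling")
```

With jumbo frames (a 9000-byte MTU) the per-direction ceiling rises to roughly 124 MB/s, which is one reason jumbo frames are often recommended on a dedicated iSCSI network.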
Editor's note: Do you agree with this expert's response? If you have more to share, post it in one of our discussion forums.