After five years of working with Fibre Channel storage area networks (SANs), I must admit I was long confused about iSCSI: what exactly it does, how it works and, more importantly, how we can actually use it to solve real customer problems. So here, after a busy few months talking to a lot of people on the subject, are a few of my own views.
iSCSI is, simply put, SCSI commands carried in IP packets. To be more specific, iSCSI is a protocol that lets a storage initiator (usually a server) send SCSI commands to a storage target (usually a tape or disk device) over IP.
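To make "SCSI commands in IP packets" concrete, here is a minimal sketch of how an initiator might pack the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU, following the field layout in the iSCSI specification (RFC 3720). The READ(10) command bytes at the end are ordinary SCSI; iSCSI merely wraps them for transport over TCP/IP. This is an illustration of the framing, not a working initiator.

```python
import struct

def build_scsi_command_bhs(lun, task_tag, xfer_len, cmd_sn, exp_stat_sn, cdb):
    """Pack the 48-byte Basic Header Segment of an iSCSI SCSI Command PDU."""
    opcode = 0x01                 # opcode 0x01 = SCSI Command
    flags = 0x80 | 0x40           # F (final) + R (read) bits set
    cdb = cdb.ljust(16, b"\x00")  # CDBs up to 16 bytes fit in the BHS
    return struct.pack(
        ">BB2xB3s8sIIII16s",
        opcode,
        flags,
        0,                        # TotalAHSLength: no additional headers
        (0).to_bytes(3, "big"),   # DataSegmentLength: no immediate data
        lun,                      # 8-byte logical unit number
        task_tag,                 # Initiator Task Tag
        xfer_len,                 # Expected Data Transfer Length
        cmd_sn,                   # CmdSN
        exp_stat_sn,              # ExpStatSN
        cdb,                      # the SCSI command itself
    )

# An ordinary SCSI READ(10) CDB: read 1 block at LBA 0.
read10 = bytes([0x28, 0, 0, 0, 0, 0, 0, 0, 1, 0])
pdu = build_scsi_command_bhs(b"\x00" * 8, 0x1234, 512, 1, 1, read10)
```

The point to notice is that the SCSI command is untouched at the end of the header; everything before it is just addressing and sequencing for the IP transport.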
To place the other IP storage protocols: FCIP takes Fibre Channel frames and tunnels them over IP, basically extending a Fibre Channel connection -- it has nothing to do with SCSI directly. iFCP, on the other hand, maps FCP (serial SCSI over Fibre Channel) into and out of IP; in other words, it provides a routing protocol between Fibre Channel fabrics, allowing connectivity over IP.
To put it another way, iSCSI is the server to storage SCSI over IP protocol. The other protocols are all about Fibre Channel to Fibre Channel with various degrees of intelligence.
So how do iSCSI devices find each other?
In parallel SCSI, and in Fibre Channel private loops, the discovery process is fairly primitive. Fibre Channel fabrics need a service called the Simple Name Server -- or just Name Server -- once you reach hundreds or thousands of devices. But in IP, we could in theory have many millions of devices.
There are two mechanisms currently being used in the IP world for iSCSI device discovery. The first is SLP, the Service Location Protocol, a general-purpose service discovery protocol that has been around in the IP world for a while. More recently, however, vendors including Microsoft have put their backing behind a newer protocol, iSNS (Internet Storage Name Service). Simply put, iSNS takes the principles of the Fibre Channel name server and scales them up to cope with IP-sized networks while -- unlike SLP -- being designed specifically for storage.
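Whichever discovery service is used, what the initiator ultimately learns is a set of target names and the addresses where they can be reached. The iSCSI specification's own in-band SendTargets text response has exactly this shape: NUL-separated key=value pairs, with each TargetName followed by its TargetAddress entries. A minimal parser sketch (the target name and address below are invented for illustration):

```python
def parse_sendtargets(payload: bytes):
    """Parse a SendTargets discovery response into {target_name: [addresses]}.

    Each TargetName is followed by one or more TargetAddress values giving
    IP address, TCP port and target portal group tag.
    """
    targets = {}
    current = None
    for pair in payload.split(b"\x00"):
        if not pair:
            continue
        key, _, value = pair.decode("ascii").partition("=")
        if key == "TargetName":
            current = value
            targets[current] = []
        elif key == "TargetAddress" and current is not None:
            targets[current].append(value)
    return targets

# Hypothetical response from a discovery session (names/addresses invented):
reply = (b"TargetName=iqn.2003-01.com.example:storage.disk1\x00"
         b"TargetAddress=192.0.2.10:3260,1\x00")
discovered = parse_sendtargets(reply)
```

A name service such as iSNS does the same job at network scale: it answers the question "which targets may I talk to, and at what address?" before any SCSI traffic flows.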
How can I use iSCSI?
There are three main ways to use iSCSI:
1. You could have a natively-capable iSCSI server talking to natively-capable iSCSI storage.
2. You could have a natively-capable iSCSI server talking via an iSCSI-to-Fibre Channel router to Fibre Channel connected storage.
3. You could have a Fibre Channel server talking through a Fibre-Channel-to-iSCSI router to iSCSI storage.
Of course, just as Fibre Channel storage sometimes talks to other Fibre Channel storage (for disk replication or serverless backup, for example), you could have iSCSI storage devices talking to each other as well.
So, which is most likely and/or most sensible? To answer that, I think we have to step back and remember that storage networking is about flexibility, about using your equipment in different ways. Today, iSCSI on servers is fairly new, though easy to get now that Microsoft has released initiator support for both Windows 2000 and Windows Server 2003.
For this reason, one way I expect to see iSCSI used is iSCSI servers connecting to existing Fibre Channel storage through an iSCSI-to-Fibre Channel router, most likely via a Fibre Channel SAN. The same ports on the same storage arrays can then provide storage services to both Fibre Channel and iSCSI servers, allowing you to get more value out of the SAN and the Fibre Channel storage you already have. You can do this today -- the products are available.
I also expect a similar phenomenon to hit the NAS market; in fact, it already has. Since a NAS device already connects disk to an IP network, sharing its services over NFS and/or CIFS, it is in principle easy for the NAS to do block I/O over iSCSI through the same ports -- again allowing you to reuse an existing storage solution in a new way.
There are also some interesting and novel solutions bouncing around for fully native, distributed, iSCSI-only storage. These may work well in greenfield sites where no storage consolidation has yet taken place, but they are point products that solve a single problem.
Who will use iSCSI?
As someone who has worked in Fibre Channel for some years, I am afraid I have to point out to the Fibre Channel world that iSCSI can run at wire speed, and certainly as fast as any normal server running normal applications needs. To the IP community, I would point out that there is a lot of Fibre Channel out there -- particularly if you compare its port count with the number of 1Gb network ports rather than with network ports of every speed. To the Fibre Channel community, I must point out that while a lot of storage and even a lot of high-end servers are connected to Fibre Channel, quite a few Unix servers still are not, and the vast majority of the Intel server community is not Fibre Channel connected at all.
So, iSCSI can work for everyone, but the biggest potential markets are probably the Intel servers, and the rack dense and bladed servers (Intel or otherwise). In addition, it will sometimes be used for stranded high performance servers, for remote offices to utilize central data centre SANs, and for other cases where Fibre Channel has yet to reach -- after all, there are many servers and storage devices still to be networked.
NICs, TOEs and HBAs: When to use each?
Finally, how does my server connect? There are three approaches:
1. A standard NIC with an iSCSI driver
2. A TOE (TCP Offload Engine) NIC with an iSCSI driver
3. An HBA (Host Bus Adapter) designed for iSCSI from the traditional Fibre Channel HBA vendors
When do I use which? This is an interesting question. The initial assumption is that the greater the performance you require, the more likely you are to move from a standard NIC to a TOE card or an HBA -- which will of course cost more money. There is another school of thought, though: some high-performance servers have plenty of spare clock cycles, so why not save money and use a cheap network card?
A key point this demonstrates is that, unlike Fibre Channel HBAs, iSCSI pricing scales from low performance (free, with a software driver) to high performance (hardware accelerators), and so can be sized to application requirements. The fan-out (or oversubscription) can also leverage more economical Ethernet ports (both Fast Ethernet and Gigabit Ethernet) instead of dedicated Fibre Channel switch ports, further reducing cost. With iSCSI TOE cards looking like they may list at $300 or less, the per-host attachment cost is significantly lower than Fibre Channel, even for TOE-enabled performance.
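The cost-scaling argument is easy to sketch as arithmetic. Only the roughly $300 TOE list price comes from the discussion above; the Fibre Channel HBA, FC switch-port and Ethernet-port figures below are placeholder assumptions chosen purely to show the shape of the comparison -- substitute real quotes before drawing any conclusions.

```python
# Per-host attachment cost, illustrative only.
FC_HBA = 1000         # hypothetical Fibre Channel HBA price per host
FC_SWITCH_PORT = 500  # hypothetical dedicated FC switch port per host
TOE_NIC = 300         # iSCSI TOE card (approximate list price cited above)
GE_SWITCH_PORT = 100  # hypothetical Gigabit Ethernet switch port per host

fc_per_host = FC_HBA + FC_SWITCH_PORT          # hardware HBA path
iscsi_toe_per_host = TOE_NIC + GE_SWITCH_PORT  # accelerated iSCSI path
iscsi_sw_per_host = 0 + GE_SWITCH_PORT         # free software driver path
```

Under these (invented) numbers the iSCSI attachment options span a range from a fraction of the Fibre Channel cost down to the price of an Ethernet port alone, which is the sizing flexibility the paragraph above describes.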
Since Fibre Channel runs at 2Gbps, it remains the preferred attachment for the highest-performance servers (there being no 2Gb Ethernet), though frankly very few server-side devices use anything like this bandwidth, even on Fibre Channel. On the storage side, of course, utilisation is far more likely to approach 2Gbps -- at least until we start to see 10Gb Fibre Channel or even 10Gb Ethernet/iSCSI ports. Certainly iSCSI opens the door to hundreds or thousands of servers, particularly Wintel systems, many of which may be less demanding and the vast majority of which have yet to benefit from networked storage.
Only time will tell exactly what will happen, though I for one am expecting that this will be a very interesting year for storage networking and for iSCSI.
About the author: Simon Gordon is a senior solution architect for McDATA based in the UK.
Simon has been working as a European expert in storage networking technology for more than 5 years.
He specializes in distance solutions and business continuity. Simon has been working in the IT
industry for more than 20 years in a variety of technologies and business sectors including
software development, systems integration, Unix and open systems, Microsoft infrastructure design
as well as storage networking. He is also a contributor to and presenter for the SNIA IP-Storage
Forum in Europe.
This was first published in June 2003