Fibre Channel (FC) interconnect technology has long been the undisputed SAN shared storage champ, primarily because of its excellent application-server-to-target-storage-port fan-out ratio. The rule of thumb has been four Unix or Linux servers per target storage port of the same speed (4:1); for Windows servers it is 7:1. When the target storage ports are twice as fast as the server ports, at 4 Gbit, those ratios double to 8:1 and 14:1. That allows an active-active modular dual-controller storage array (four 4 Gbit FC ports per controller, eight in all) to support 64 Unix/Linux servers or 112 Windows servers. These are impressive shared storage numbers.
Application servers running iSCSI over 1 Gbit Ethernet have historically not been able to match FC's heady numbers because there is no speed difference between the server ports and the ports on the target storage array. The same active-active modular storage array with eight 1 Gbit iSCSI Ethernet ports could support only 32 Unix/Linux servers or 56 Windows servers, half the fan-out of the equivalent array with 4 Gbit FC. The lower TCO of the iSCSI interconnect is not enough to overcome that much-reduced shared storage ratio. This is one reason why the latest SAN conventional wisdom holds that iSCSI over Ethernet will supplement, not replace, FC SANs.
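The rule-of-thumb arithmetic behind those numbers can be sketched in a few lines. The ratios and port counts are the rules of thumb quoted above, not vendor specifications, and `fan_out` is a hypothetical helper for illustration:

```python
# Back-of-the-envelope SAN fan-out model using the rule-of-thumb
# ratios from the text (assumptions, not vendor sizing data).

def fan_out(ports, target_gbit, server_gbit, base_ratio):
    """Servers supported = ports * base ratio * (target speed / server speed)."""
    return int(ports * base_ratio * (target_gbit / server_gbit))

# Dual-controller array, four 4 Gbit FC ports per controller (8 total),
# servers on 2 Gbit FC HBAs; base ratios 4:1 (Unix/Linux) and 7:1 (Windows).
print(fan_out(8, 4, 2, 4))   # 64 Unix/Linux servers
print(fan_out(8, 4, 2, 7))   # 112 Windows servers

# Same array with eight 1 Gbit iSCSI ports, servers also on 1 Gbit Ethernet,
# so the speed multiplier is 1 and only the base ratios apply.
print(fan_out(8, 1, 1, 4))   # 32 Unix/Linux servers
print(fan_out(8, 1, 1, 7))   # 56 Windows servers
```

The model makes the "50% less" claim concrete: with matched port speeds, iSCSI gets no speed multiplier, so its fan-out is exactly half of FC's.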
Like most conventional wisdom, it is likely to prove wrong, because its supporting rationale is subject to change. The pace of technology change is faster than ever and accelerating. The change in this case is iSCSI over 10 Gbit copper Ethernet.
Ten Gbit Ethernet has been around for a couple of years now, and sales have been slowly accelerating. The factors hindering growth boil down to high cost: 10 Gbit optical transceivers are expensive; 10 Gbit optical cabling is expensive; 10 Gbit optical ports on Ethernet switches are expensive; and 10 Gbit optics are not backwards compatible with 10/100/1000 Ethernet because of differences in the line coding (64b/66b vs. 8b/10b). That makes 10 Gbit optical a "rip-out-and-replace" technology, which is incredibly expensive. The common theme: 10 Gbit optical is expensive.
Engineers are clever people. They focused on 10 Gbit copper Ethernet to solve these cost issues. Their initial success with CX-4 (15 meters) will be eclipsed next year with Cat 6 (70 meters) and Cat 7 (100 meters). Ten Gbit copper Ethernet is a fraction of the cost of optical 10 Gbit Ethernet. It even costs less than 4 Gbit FC. IDC predicts approximately 200,000 10 Gbit Ethernet interfaces will be installed by the end of 2006. The bulk will most likely be copper.
Ten Gbit copper Ethernet significantly alters the landscape for iSCSI: the shared storage ratio goes up an order of magnitude. An active-active modular storage array with two 10 Gbit copper Ethernet iSCSI ports per controller (four in all) could support as many as 160 Unix/Linux application servers running iSCSI over 1 Gbit Ethernet, or 280 Windows servers. That is potentially two and a half times as many servers per storage target as with 4 Gbit FC. Standard Ethernet switches with low-cost 10 Gbit and 1 Gbit copper ports provide the server fan-out.
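Under the same rule-of-thumb model, the 10 Gbit copper scenario works out as follows. This is a back-of-the-envelope sketch using the ratios from the text, not a vendor sizing guide:

```python
# Rule-of-thumb fan-out model applied to the 10 Gbit copper iSCSI scenario
# (ratios and port counts are the article's assumptions, not vendor data).

def fan_out(ports, target_gbit, server_gbit, base_ratio):
    """Servers supported = ports * base ratio * (target speed / server speed)."""
    return int(ports * base_ratio * (target_gbit / server_gbit))

# Two 10 Gbit copper iSCSI ports per controller (4 total), servers on 1 Gbit
# Ethernet: the 10x speed gap multiplies the base 4:1 and 7:1 ratios.
unix_10g = fan_out(4, 10, 1, 4)      # 160 Unix/Linux servers
windows_10g = fan_out(4, 10, 1, 7)   # 280 Windows servers

# Compare against the 4 Gbit FC baseline (8 ports, 2 Gbit server HBAs).
fc_unix = fan_out(8, 4, 2, 4)        # 64 Unix/Linux servers
print(unix_10g / fc_unix)            # 2.5
```

Note that even with half as many target ports as the FC configuration, the 10x speed mismatch yields the two-and-a-half-fold advantage the text describes.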
If this scenario plays out, the iSCSI SAN has both the lower TCO and the higher shared storage ratio. Could iSCSI over 10 Gbit copper Ethernet mark the beginning of the end of FC? The possibility is there, and the answer will come in the foreseeable future, but there are still bumps in the road. The IEEE must complete the 10 Gbit copper standard (10GBASE-T) for Cat 6 and Cat 7 cabling. Storage array vendors must design more processing power and 10 Gbit copper iSCSI ports into their arrays to handle the significant increase in servers supported. And 10 Gbit copper iSCSI components must be available from multiple suppliers.
Some storage vendors are already planning to ship modular storage arrays with iSCSI over 10 Gbit copper Ethernet ports by the end of 2006. Expect more to follow in 2007. By then, the answers should be clear.
Let me know what you think.