REGARDLESS OF WHERE you stand on the Fibre Channel vs. IP SANs vs. NAS debates, it's worth noting that the first storage systems with native 10Gb Ethernet (10GbE) connections are starting to ship (see "Speed wars: Fibre Channel vs. Ethernet"). Marc Staimer, president of Dragon Slayer Consulting, says he personally knows of at least five vendors that plan to ship arrays with native 10GbE this year.
BlueArc, San Jose, CA, is the most recent entrant, with its new Titan 2000 series announced earlier this month. Both the new Titan 2100 and 2200 models come with 10Gb/sec Ethernet ports. In the first release, the 10Gb/sec Ethernet pipe is used for intercommunication between clustered Titan gateways, and the company will release "blades throughout the year that support a number of connectivity options," says Jon Affeld, BlueArc's director of product marketing. Those blades will probably support 10Gb Ethernet--or possibly InfiniBand--to the host.
Whatever the case, with a new internal backplane capable of carrying 40Gb/sec of total traffic, the new Titan is "designed to support 10GbE, and any other types of connectivity at that level of bandwidth," says Affeld.
Ciprico, a Plymouth, MN-based storage provider focused on the video editing and content creation market, announced last month that it would integrate 10GbE into its DiMeda 10G NAS system. And iSCSI array vendor DSG Storage, London, Ontario, announced it will embed a 10GbE connection with TCP acceleration.
More vendors will likely unveil 10GbE connected products after the specifications for next-generation Category-6 copper cabling are ratified by the Institute of Electrical and Electronics Engineers this summer. Until then, says John Spiers, founder and CTO at iSCSI vendor LeftHand Networks, you can either use optical cabling--at a $500 per-port price premium--or CX-4 copper cabling, but "that's an interim solution," he notes.
Contrary to popular belief, the real potential for 10GbE isn't in high-performance computing (HPC) applications, says Staimer. "Ten gig is an aggregation point; it's not that much of an issue for HPC because most hosts can't drive [10Gb/sec speeds]."
Staimer contends that for NAS and IP SAN environments, increasing the storage array's interface from 1GbE to 10GbE will allow an order of magnitude more hosts to connect to the array. "Best practices dictate an initiator-to-target ratio of 4:1 on Unix and 7:1 on Windows," he notes. Therefore, an iSCSI SAN array with a single 1Gb/sec connection could support up to seven Windows hosts. But by increasing the target interface by an order of magnitude, the same array could accommodate 70 Windows hosts.
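Staimer's arithmetic can be sketched in a few lines. The 4:1 and 7:1 ratios are the best-practice figures he cites; the function and variable names are illustrative, not from any vendor's tooling:

```python
# Back-of-the-envelope host fan-out from the best-practice
# initiator-to-target ratios Staimer cites (hosts per 1Gb/sec
# of target bandwidth).
RATIO_PER_GBIT = {"unix": 4, "windows": 7}

def max_hosts(target_gbits: int, os_type: str) -> int:
    """Hosts a target interface can serve at the cited ratio."""
    return RATIO_PER_GBIT[os_type] * target_gbits

# A 1Gb/sec iSCSI target supports up to 7 Windows hosts...
print(max_hosts(1, "windows"))   # 7
# ...and a 10GbE target an order of magnitude more.
print(max_hosts(10, "windows"))  # 70
print(max_hosts(10, "unix"))     # 40
```

The scaling is linear by design: the ratio is per gigabit of target bandwidth, so a 10x wider pipe yields a 10x larger fan-out.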
Kianoosh Naghshineh, president and CEO at Chelsio Communications, Sunnyvale, CA, which makes the 10GbE network interface cards and adapters found in Ciprico's and DSG Storage's offerings, describes a scenario where, for example, between 24 and 48 hosts equipped with vanilla 1GbE connections would connect to a GbE switch with a 10GbE uplink into an IP SAN or NAS array. "It's a much stronger value proposition than Fibre Channel because on the motherboard, Gigabit Ethernet is free and you can use your existing GigE switches," explains Naghshineh.
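The aggregation Naghshineh describes can be expressed as an oversubscription ratio at the edge switch. The 24- and 48-host counts come from his example; the calculation itself is a hypothetical sketch:

```python
def oversubscription(hosts: int, host_gbits: float = 1.0,
                     uplink_gbits: float = 10.0) -> float:
    """Ratio of aggregate host bandwidth to the switch's 10GbE uplink."""
    return hosts * host_gbits / uplink_gbits

# 24 hosts with 1GbE links sharing one 10GbE uplink
print(f"{oversubscription(24):.1f}:1")  # 2.4:1
# 48 hosts on the same uplink
print(f"{oversubscription(48):.1f}:1")  # 4.8:1
```

Oversubscription at these ratios is workable precisely because, as Staimer notes above, most hosts can't drive anywhere near line rate.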
Zophar Sante, VP of marketing development at Sanrad, is even more bullish about the number of hosts a 10GbE IP SAN can support. "If you ask me, with a single 10Gb pipe, you can connect hundreds of servers because most people don't understand just how slow their servers are," says Sante. Furthermore, in most storage systems, especially low-end, SATA-based systems, it's the disk drives rather than the network connections that are the bottleneck, he adds. "Disk is still the slowest thing out there, trust me."
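Sante's "hundreds of servers" claim rests on the same kind of arithmetic. A rough sketch, where the per-server average throughput is an assumption for illustration, not a figure from the article:

```python
# Illustrative only: the per-server average below is an assumed
# figure, not a number cited by Sante.
PIPE_MB_PER_SEC = 10_000 / 8      # 10Gb/sec pipe expressed in MB/sec
AVG_SERVER_MB_PER_SEC = 5         # assumed average I/O per server

servers = int(PIPE_MB_PER_SEC // AVG_SERVER_MB_PER_SEC)
print(servers)  # 250 servers on a single 10Gb pipe at this average
```

If typical servers average only a few MB/sec of storage I/O, a 10Gb/sec pipe does indeed accommodate hundreds of them before the network, rather than the disks, becomes the constraint.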