
InfiniBand storage protocol heats up

While InfiniBand commercial storage users remain a minority, data growth and new high-performance media, like SSDs, are spawning new products and competition in the market.

Several InfiniBand developments have renewed the debate over whether the high-speed, low-latency protocol will gain steam for storage connectivity.

Much of this week's news revolved around Quadruple Data Rate (QDR) InfiniBand, which runs at 40 Gbps, compared with 8 Gbps for the fastest Fibre Channel. New InfiniBand products rolled out this week include a storage array, switches and host channel adapters (HCAs).
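The headline numbers above are signaling rates, not data rates. A quick back-of-the-envelope comparison, assuming the 8b/10b line encoding both protocols used at the time (8 data bits carried in every 10 signal bits) and taking the nominal 8 Gbps Fibre Channel figure, puts the gap in perspective:

```python
# Back-of-the-envelope comparison of effective data rates.
# Assumption: both links use 8b/10b line encoding, so 80% of the
# signaling rate is available for payload data.

ENCODING_EFFICIENCY = 8 / 10  # 8b/10b: 8 data bits per 10 signal bits

ib_qdr_signaling_gbps = 40.0  # QDR InfiniBand: 4 lanes x 10 Gbps per lane
fc_signaling_gbps = 8.0       # nominal 8 Gbps Fibre Channel

ib_qdr_data_gbps = ib_qdr_signaling_gbps * ENCODING_EFFICIENCY  # 32 Gbps
fc_data_gbps = fc_signaling_gbps * ENCODING_EFFICIENCY          # 6.4 Gbps

print(f"QDR InfiniBand effective data rate: {ib_qdr_data_gbps:.1f} Gbps")
print(f"8 Gbps Fibre Channel effective data rate: {fc_data_gbps:.1f} Gbps")
print(f"Ratio: {ib_qdr_data_gbps / fc_data_gbps:.1f}x")
```

On these assumptions, a single QDR port carries roughly five times the data of an 8 Gbps Fibre Channel port.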

LSI rolled out the Engenio 7900 disk array Tuesday, an update to its native InfiniBand array that supports Double Data Rate (DDR) 20 Gbps InfiniBand. The Engenio 7900 scales to 256 Fibre Channel or SATA drives at initial release; subsequent releases will scale to 480 drives and add support for 8 Gbps Fibre Channel and 40 Gbps InfiniBand.

Voltaire said it will ship a 40 Gbit switch with ASIC partner Mellanox later this year. Voltaire also said its 20 Gbps Grid Director switch will support the Engenio 7900, as well as Data Direct Networks' S2A9900, which also offers native InfiniBand connectivity.

On the other side of the fabric, InfiniBand chipmaker Mellanox has new competition in PCIe systems after the release of a new HCA from QLogic, which is based on the IP it acquired with PathScale in early 2006.

Frank Berry, QLogic's vice president of corporate marketing, said QLogic had previously used the PathScale IP in an HCA that relied on the AMD HyperTransport bus before the market switched to PCIe. QLogic had since used Mellanox chips in its PCIe HCAs, but it will use its own ASICs from now on. "In the HPC world, users building InfiniBand server clusters care about message rates and latency," Berry said. With the new ASIC, QLogic claims performance of 26 million messages per second.

QLogic's new HCAs are intended for server clusters rather than storage, though storage systems also need HCAs to connect to hosts via InfiniBand, Berry said. "People will connect storage to it, but it's not optimized for InfiniBand storage performance." InfiniBand storage excels at moving large chunks of data over the wire rather than breaking data into Ethernet-style packets, while an HCA designed for high-performance computing (HPC) server clusters is optimized instead for large numbers of relatively small messages passing between servers in the cluster.

InfiniBand, an evolving market

InfiniBand is deployed mostly in servers and HPC shops rather than mainstream commercial storage. But it has its fans in the storage world, mainly because applications once considered the sole domain of HPC labs are moving into commercial use. These include seismic processing in the oil and gas industry, real-time financial trading databases and video processing for HD files.

Pacific Title and Art Studio already uses Data Direct Networks' S2A arrays, with InfiniBand networks for communication between hosts. Chief technology officer Andy Tran said his company will move to InfiniBand storage the next time it has to add a new controller. "Right now we're using all eight 4 Gbit Fibre Channel ports on the [Data Direct] array," he said. The array is barely keeping up with the InfiniBand network on the other side, and a move to InfiniBand storage would also let Pacific Title consolidate switches. "If we move to InfiniBand, we could use just one port rather than eight," Tran said.
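Tran's one-port-for-eight math works out roughly as follows. This is a sketch, not vendor data: the port counts come from his quote above, and the 8b/10b encoding overhead (80% of the signaling rate usable for data) is an assumption about the links of the era:

```python
# Rough port-consolidation arithmetic behind Tran's comment.
# Assumption: 8b/10b encoding on both link types (80% payload efficiency).

ENCODING_EFFICIENCY = 8 / 10

fc_ports = 8                          # eight FC ports on the array today
fc_port_signaling_gbps = 4.0          # 4 Gbit Fibre Channel per port
fc_aggregate_data_gbps = fc_ports * fc_port_signaling_gbps * ENCODING_EFFICIENCY

ib_qdr_port_data_gbps = 40.0 * ENCODING_EFFICIENCY  # one QDR InfiniBand port

print(f"Eight 4G FC ports: {fc_aggregate_data_gbps:.1f} Gbps of data")
print(f"One QDR IB port:   {ib_qdr_port_data_gbps:.1f} Gbps of data")
```

On those assumptions, a single QDR InfiniBand port (32 Gbps of data) more than covers the aggregate of eight 4 Gbit Fibre Channel ports (25.6 Gbps), which is what makes the switch consolidation Tran describes plausible.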

Asaf Somekh, vice president of strategic alliances at Voltaire, said solid-state drives (SSD) are generating a lot of interest among storage users and that could boost adoption for InfiniBand. "As solid-state disk RAID finally becomes mainstream, it opens one of the historical bottlenecks in the InfiniBand network, which is spinning disk," he said. "Freeing up that storage element limitation may make a better case for a fatter pipe going to the storage system."

Enterprise Strategy Group analyst Bob Laliberte said the SSD market "is still very immature but growing rapidly. I guess the question is, will everyone who has InfiniBand move to SSDs, which is a limited market, or will everyone who adopts SSDs implement InfiniBand?"

It's still an open question how InfiniBand will play out, according to analyst Arun Taneja of the Taneja Group. "There's a class of problems that exist today that can only be solved by InfiniBand," he said. With 10 Gbit Ethernet yet to hit the mainstream and data center Ethernet still in development, Taneja said, "Unless miracles happen with 8 Gbit Fibre Channel or carrier-class Ethernet, there will always be a play for InfiniBand in the foreseeable future, and the longer it lives, the harder it will be for competitors to kill it."
