"Customers are now lining up to be the first to have the beta product," according to John Howarth, director of storage products at SGI.
The sales pitch for InfiniBand goes something like this: rather than build out a separate Fibre Channel (FC) fabric to carry storage traffic, why not piggyback on the existing InfiniBand fabric, which costs less and offers more bandwidth?
Furthermore, thanks to InfiniBand's design, there's no need to worry about storage traffic trampling crucial internode cluster communication. By default, "InfiniBand has a protocol that can split traffic on the same pipe," said Jose Reinoso, director of storage and I/O engineering at SGI, which lets administrators specify how much bandwidth goes to storage traffic.
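What Reinoso is describing is InfiniBand's virtual-lane mechanism: each packet carries a service level (SL), the subnet manager maps SLs onto virtual lanes (VLs), and weighted arbitration among the VLs decides how much of the link each class of traffic gets. As a rough, hypothetical illustration (the service level, lane assignments and weights below are invented for this example, not SGI's configuration), an OpenSM-managed fabric might pin storage traffic to its own lane like this:

    # Sketch of an opensm.conf QoS setup -- illustrative values only.
    # Storage traffic tagged with service level 1 is mapped to virtual
    # lane 1; everything else rides virtual lane 0.
    qos TRUE
    qos_max_vls 2
    # Map the 16 service levels to virtual lanes: SL 0 -> VL 0,
    # SL 1 -> VL 1, all remaining SLs fall back to VL 0.
    qos_sl2vl 0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0
    # Weighted round-robin among lanes: VL 0 gets weight 192, VL 1
    # (storage) gets weight 64 -- roughly a quarter of a saturated pipe.
    qos_vlarb_low 0:192,1:64
    qos_vlarb_high 0:0,1:0
    qos_high_limit 0

Because each switch enforces those arbitration weights per port in hardware, storage traffic can share the link without ever starving cluster messaging.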
Today, Linux cluster environments using InfiniBand tend to connect to storage using InfiniBand-to-FC switches like those from Mellanox Technologies Inc., Topspin Communications Inc. (recently acquired by Cisco Systems Inc.) and Voltaire. For example, Dell Inc. lists Topspin as one of its partners for its Scalable Enterprise portfolio, along with EMC for storage and Ibrix Inc. for its cluster file system.
But InfiniBand may not be a good fit for all high-performance environments, said Rick Gillett, chief technology officer at Acopia Networks Inc., which makes a NAS virtualization switch that can carry up to 2 GBps of data.
"The grid applications associated with the MPP [massively parallel processing] are aligned with the advantages of InfiniBand," he said. But as Ethernet moves to 10 Gbps, he believes that "the majority of traffic will be capable of funneling over IP."