This article can also be found in the Premium Editorial Download "Storage magazine: The best high-end storage arrays of 2005."
Why InfiniBand? These days, many high-performance computing environments consist of a cluster of hundreds of commodity 1U rackmount Linux servers that use InfiniBand for interserver communication. Running at 10Gb/sec today and rapidly moving to 20Gb/sec, InfiniBand offers roughly four to five times the bandwidth of Fibre Channel (FC), which runs at 2Gb/sec today and is moving to 4Gb/sec -- and interserver communication typically consumes only a fraction of that InfiniBand capacity. The sales pitch for InfiniBand goes something like this: Rather than build out a separate FC fabric to carry storage traffic, why not piggyback on the existing InfiniBand fabric, which reduces cost and provides more bandwidth?
Furthermore, thanks to InfiniBand's design, there's no need to worry about storage traffic trampling crucial internode cluster communication. By default, "InfiniBand has a protocol that can split traffic on the same pipe," says Jose Reinoso, director of storage and I/O engineering at SGI, letting administrators dedicate a set share of the link's bandwidth to storage traffic.
But InfiniBand may not be a good fit for all high-performance environments, says Rick Gillett, CTO at Acopia Networks, which makes a NAS virtualization switch that can carry up to 2GB/sec of data.
"The grid applications associated with MPP [massively parallel processing] are aligned with the advantages of InfiniBand," he says. But as Ethernet moves to 10Gb/sec, he believes that "the majority of traffic will be capable of funneling over IP."
This was first published in August 2005