New life for InfiniBand

InfiniBand storage is finally emerging, but despite its cost, speed and scalability advantages over Fibre Channel, acceptance has been slow in enterprise data centers. Clustered, high-performance computing and demanding applications, however, have helped renew interest in InfiniBand-based storage networks.


If you work with high-performance computing (HPC), you probably interconnect clustered nodes through an InfiniBand (IB) fabric. For network and storage administrators not up to speed on the technology, it's time to put IB on your radar.

Scientific and academic researchers have known about IB for years. Its high-speed, low-latency architecture is an ideal interconnect for thousands of cooperative commodity servers running a single file system under an operating system like Linux. IB deployments are finally moving beyond the ivy walls of academia and popping up in corporate data centers to solve specific computing and storage problems that require high performance at a relatively low cost.

While IB has come into its own as a computing architecture, the storage implications of HPC are only now being addressed. The torrents of data processed and exchanged among IB nodes must ultimately be stored. Blade servers and other high-density, low-profile computers often deployed in clusters are notoriously light on internal storage. This forces IB administrators to interface the IB fabric to a conventional Fibre Channel (FC) SAN using an IB-to-FC gateway. But native IB storage fabrics dedicated to a performance-demanding app are beginning to appear in corporate data centers.

No one expects IB storage to replace conventional FC and Ethernet technologies (at least for the foreseeable future), but as IB-clustered servers are used to solve corporate computing problems, IB storage deployments will inevitably grow. Storage managers should understand the benefits of IB and learn how the addition of IB storage will affect operations.

InfiniBand 101
To appreciate what InfiniBand brings to enterprise storage, you should know a little bit about the technology. IB was originally developed to replace the aging shared-bus paradigm in servers (for example, PCI slots) with a high-speed, low-latency switched serial I/O interconnection. A fabric is created by installing an IB host channel adapter (HCA) or target channel adapter (TCA) card into each device, and then interconnecting the devices through an IB switch. Advances in silicon now allow native IB ports to be implemented directly in devices and motherboards, eliminating the need for standalone expansion cards.
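For a concrete sense of the host side of such a fabric, here is a minimal sketch (an illustrative aside, not from the original article) that enumerates the channel adapters visible to one node. It assumes a Linux host with an HCA and the open-source libibverbs library installed; the file name and build command are assumptions.

/* List the IB channel adapters visible to this host via libibverbs.
 * Build (assumed): gcc -std=c99 -o list_hcas list_hcas.c -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **devices = ibv_get_device_list(&num_devices);

    if (!devices) {
        perror("ibv_get_device_list");
        return 1;
    }

    printf("Found %d IB device(s)\n", num_devices);
    for (int i = 0; i < num_devices; i++)
        printf("  %s\n", ibv_get_device_name(devices[i]));

    ibv_free_device_list(devices);
    return 0;
}

Each adapter reported here is one point of attachment to the switched fabric described above.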

Basic InfiniBand channels and data rates

IB bandwidth can potentially dwarf that of other network technologies like 4Gb/sec FC or even 10Gb Ethernet (10GbE). IB is based on 2.5Gb/sec bi-directional serial channels that can support point-to-point connectivity between devices on the fabric. Channels can also be aggregated for vastly improved throughput. For example, a typical four-channel IB interconnect can offer throughput up to 10Gb/sec. Channels can also be operated at single, double and quadruple data rates for even more throughput (see "Basic InfiniBand channels and data rates," this page). While 10Gb/sec is common today, 20Gb/sec to 30Gb/sec deployments are available, and the IB roadmap suggests speeds of up to 120Gb/sec over copper in 2007.
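The arithmetic behind those figures is simple enough to tabulate, as the short program below does (an illustrative aside, not part of the article). One caveat the article omits: IB links use 8b/10b encoding, so only about 80% of the signaling rate is available as payload data.

/* Tabulate raw and effective InfiniBand link rates: 2.5Gb/sec lanes,
 * ganged 1X/4X/12X, clocked at single/double/quad data rate, with
 * 8b/10b encoding leaving 80% of the signaling rate for payload. */
#include <stdio.h>

int main(void)
{
    const double lane = 2.5;                  /* Gb/sec per lane at SDR */
    const int widths[] = { 1, 4, 12 };        /* link widths */
    const int mult[]   = { 1, 2, 4 };         /* SDR, DDR, QDR */
    const char *names[] = { "SDR", "DDR", "QDR" };

    for (int w = 0; w < 3; w++)
        for (int r = 0; r < 3; r++) {
            double raw = lane * widths[w] * mult[r];
            printf("%2dX %s: %5.1f Gb/sec raw, %5.1f Gb/sec payload\n",
                   widths[w], names[r], raw, raw * 0.8);
        }
    return 0;
}

The last row it prints--12X at quad data rate--is the 120Gb/sec figure on the IB roadmap.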

But the appeal of IB is more than just speed. These speeds are achieved with very low latencies of just a few microseconds--a fraction of the latencies found in Ethernet networks. IB also supports a switched fabric, so many devices can share the same I/O infrastructure, allowing the fabric to scale to huge numbers of nodes. An example of this scalability is the 4,500-node InfiniBand cluster recently deployed by Sandia National Laboratories.

Another key advantage of IB is its ability to handle memory-to-memory data transfers between servers using Remote Direct Memory Access (RDMA). With RDMA, one server can "look into" another server and read just a few bytes (or the entire memory contents) without the direct intervention of either server's CPU. This frees processing power for more compute-intensive tasks. Although RDMA technology will soon be integrated into 10GbE as the Internet Wide Area RDMA Protocol (iWARP), RDMA is native to the current InfiniBand specification. IB storage systems can easily leverage RDMA by treating the storage device as a server--passing data between server memory and a storage system's internal cache.
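To make the mechanism concrete, the fragment below sketches how a one-sided RDMA read is posted with the libibverbs API. It is deliberately partial: creating and connecting the reliable queue pair, registering the local buffer and learning the remote address and rkey out of band are all assumed to have happened already, and the helper's name and signature are illustrative rather than any standard interface.

/* A hedged fragment, not a complete program: post a one-sided RDMA
 * read over an already-connected reliable-connection queue pair.
 * Queue-pair setup, memory registration and the out-of-band exchange
 * of remote_addr/rkey are assumed to have been done elsewhere. */
#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

int post_rdma_read(struct ibv_qp *qp, struct ibv_mr *local_mr,
                   void *local_buf, uint32_t len,
                   uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* where the data lands */
        .length = len,
        .lkey   = local_mr->lkey,         /* key for the local region */
    };
    struct ibv_send_wr wr, *bad_wr = NULL;

    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_READ;
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED; /* report completion on the CQ */
    wr.wr.rdma.remote_addr = remote_addr;       /* learned out of band */
    wr.wr.rdma.rkey        = rkey;

    /* The remote CPU never sees this request; its adapter serves the
     * read directly from memory--the offload described above. */
    return ibv_post_send(qp, &wr, &bad_wr);
}

Once ibv_post_send() hands the work request to the adapter, the HCAs move the data between memories on their own; neither host's CPU touches the transfer.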

So, if IB brings major benefits to storage and servers, why hasn't it been more widely deployed outside HPC environments? InfiniBand's wider acceptance has been stymied by political, technical and business issues.

The first strike against IB came early when the technology was vastly overhyped and didn't have the working products to back it up and provide much-needed credibility. Early interest quickly fled to other technologies like FC. The next hit came when technological leaders like Intel Corp. and Microsoft Corp. abandoned their active support for IB, choosing to wait for a market to develop. "When two major vendors backed away from InfiniBand, it instantly became a niche technology, and then it looked like it was going to die," says Arun Taneja, founder, president and consulting analyst at the Taneja Group, Hopkinton, MA.

IB has also been hampered by a lack of software support. Operating systems needed drivers, and applications had to be tweaked for parallel processing. In most cases, this meant getting drivers directly from IB device vendors, dealing with interoperability issues and then tackling any application changes in-house.

But many of the early knocks against IB are beginning to be resolved: The Linux 2.6.11 kernel includes native support for IB devices, and the Open InfiniBand Alliance is working to develop a standard driver stack for Windows Compute Cluster Server 2003. "It's not 100% there yet with Microsoft, but it's on the right track," says Thad Omura, member of the InfiniBand Trade Association (IBTA) and vice president of product marketing at Mellanox Technologies Inc. in Santa Clara, CA. Application developers are also recognizing the growing appeal of IB, and are updating their products accordingly (such as Oracle's recent 10g release).

The lack of an "I've got to have it" business need has also slowed acceptance of IB storage in the corporate data center. A significant business need (such as graphics rendering) drives the implementation of an IB cluster, which, in turn, provides the incentive to consider IB storage.

"There's no compelling reason that it's going to become the mainstream technology, other than in high-performance compute clustering," says Marc Staimer, president of Dragon Slayer Consulting, Beaverton, OR. "I don't see it replacing Fibre Channel or Ethernet."

A sampler of InfiniBand storage
DataDirect Networks Inc. 2A9500 modular RAID storage networking system. Each module in this expandable system supports up to 1.3TB of capacity across dual-port 10Gb/sec InfiniBand host channels.

Isilon Systems Inc. IQ line. Isilon's storage products offer InfiniBand intracluster communication with a maximum capacity of 250TB across 42 nodes using a single unified file system.

Silicon Graphics Inc. InfiniteStorage TP9700 system. This storage platform with its four 10Gb/sec InfiniBand ports supports an aggregate bandwidth of 1,600MB/sec for up to 90TB across as many as 224 Fibre Channel or serial ATA disks.

Terrascale Technologies Inc. Storage Bricks. Storage units offer 650GB to 3.5TB of formatted usable capacity for Linux clusters across native InfiniBand connections, and can be combined and treated as a single virtual storage unit up to 18 exabytes.

Xiranet Communications GmbH Xas500ib InfiniBand Storage system. Expandable rack-mounted storage modules each provide up to 7.5TB of storage on SATA drives, and support a range of RAID levels for data protection.

Yotta Yotta Inc. NetStorager GSX 2400. Rack-mounted storage subsystems designed for data pooling/replication between remote geographic sites.

The benefits of InfiniBand storage
Although clustered computing is still viewed as an IT niche, several vendors are betting that the niche will continue to grow. Consequently, vendors like DataDirect Networks Inc., Engenio Information Technologies Inc., Isilon Systems Inc., Silicon Graphics Inc. (SGI), Terrascale Technologies Inc. and others are actively developing storage systems for InfiniBand clusters (see "A sampler of InfiniBand storage"). Their offerings run the gamut from large single-box systems to smaller, more modular platforms that can be easily upgraded to follow cluster growth. Aside from the obvious user benefits of high bandwidth and low latency, IB storage vendors generally tout the key advantages of simplicity, cost and scalability.

Simplicity means fewer ports and less cabling. A typical network may need multiple infrastructures for communication and storage, but IB can handle all of the necessary network traffic over the same fabric. In addition, it would take five aggregated 2Gb/sec FC SAN ports to match the performance of a single 10Gb/sec IB port. Storage across IB means fewer ports and corresponding cables, which reduces cost and trims the labor needed to configure and maintain the infrastructure. Native IB storage also eliminates the cost and bottleneck of an IB-to-FC or IB-to-Ethernet gateway.

The ability to scale is another important concern in clustered computing. Scaling a traditional FC SAN for higher capacity (while maintaining performance) is problematic at best. IB storage, on the other hand, is almost "plug and play" in its ability to expand storage nodes and capacity across a single file system.

The "channel" nature of IB is also a powerful benefit; for example, rather than aggregating four 2.5Gb/sec IB channels into one 10Gb/sec stream, a user can opt to assign channels to specific tasks--ensuring the performance of mission-critical apps. Instead of devoting an entire IB stream for backup, one or more channels can be allocated for very specific backup tasks, guaranteeing a minimum level of performance for each task.

Early IB adopters
Is IB hype still outpacing IB products? It's too early to say for sure. IB storage is relatively new and many products are still being evaluated by users. For example, Sandia National Laboratories made news in 2005 when it implemented a 4,500-node InfiniBand cluster for Department of Energy research, but it's only now evaluating IB storage devices. "A lot of the IB storage is either just hitting the market or in the beta-testing phase for the vendors," says Matt Leininger, principal member of the technical staff at Sandia National Laboratories. He's particularly interested in leveraging IB storage to reduce the costs and complexities of FC storage. "At least having the option of getting rid of some of those [IB-to-FC] routers is certainly intriguing," he says.

Still, early IB storage adopters are generally pleased and encouraged with the results they're seeing. DNA Productions Inc. in Irving, TX, runs a 1,000-processor rendering farm that drives its computer animation business. DNA traditionally relied on NAS boxes for storage, but quick NAS obsolescence and network performance concerns drove the company to adopt a new storage infrastructure based on 42 Isilon IQ 1920 storage devices--today providing more than 80TB to hundreds of artists and animators.

"I have to get that render farm flying, and I need the render farm to hit that storage device as fast as it can," says Brian Chacon, part of the management team at DNA Productions, who reports throughputs up to 2.3Gb/sec between the render farm and storage system. Not only is the data rate appealing, but Chacon values the modularity and scalability found in Isilon's offering. Another plus: The system required little training.

Simplicity is also a notable benefit for the U.S. Naval Research Laboratory (NRL). Traditionally an FC SAN user, the NRL is systematically moving its 100TB to 150TB of research data to IB storage on SGI and other platforms. "About half of that amount is now on InfiniBand storage using serial ATA drives," says Dr. Hank Dardy, chief scientist for advanced computing at the NRL's Center for Computational Science. Dardy sees IB as a better, faster and cheaper technology for storage and other networking tasks. It took approximately three weeks to integrate IB storage devices into the NRL environment--a relatively smooth process from Dardy's perspective. "It certainly wasn't out of the norm," he says. Other than a one-day training session at a vendor facility, the system required no advanced training.

Buyers beware
The early IB storage news may be encouraging, but analysts warn users to be skeptical of some vendor claims. Interoperability between IB hardware and software can still be problematic. "There's still some settling of the specifications," says Brian Garrett, technical director at Enterprise Strategy Group's Labs in Milford, MA. "As with early Fibre Channel solutions, you want to be talking to the storage system vendor when picking products to ensure that it all works together," he says.

Cost benefits, as with all total cost of ownership claims, should be evaluated carefully. While IB "silicon" may indeed be cost competitive--lowering the overall price per port--IB storage systems still require disks and other components, so the cost of IB storage systems may not be much lower than FC on a system basis. "Most of the cost in that [storage] box is the drives, and the drives are common to both [IB and FC] sides," says Taneja of the Taneja Group.

Finally, be cautious of scalability claims. IB systems may offer easier interconnects for storage platforms, but maintaining storage performance as capacity grows may present some limitations. A well-balanced IB storage array should deliver good performance up to the number of disks it can hold. "But then, where's my scalability beyond that?" says Taneja. "I have to add another box, which is exactly the same thing I do with Fibre Channel."

What's ahead?
Make no mistake, InfiniBand is here and data centers are deploying the technology, albeit slowly, for specialized apps. And IB will be pulled into enterprise data centers by the gradual adoption of clustered computing. But just how much and how fast IB (and IB storage) will grow is a matter of debate. Standards are still not firm; drivers are still pending for Windows, and vendors fear that another six to nine months of delay may open the door for 10GbE. "If the software stability issues aren't addressed, then [10] Gigabit Ethernet will become more competitive despite the pricing disadvantage," says Gautham Sastri, founder, president and CEO at Terrascale Technologies Inc. in Montreal.

The bottom line: IB won't displace Ethernet networks and FC SAN installations in the enterprise. Gateways will connect islands of IB with the rest of the corporate network. And while gateway bottlenecks may be unavoidable, gateways will allow legacy storage to be shared within the IB domain and let the rest of the network access native IB storage.

Even Taneja, arguably one of the most vocal proponents of IB storage, acknowledges that there will be no sudden, complete shift in enterprise network storage. Says Taneja: "It [IB storage] doesn't have the same urgency as the drive to bring InfiniBand to clustered computing."

This was first published in April 2006
