Feature

New life for InfiniBand


InfiniBand 101
To appreciate what InfiniBand brings to enterprise storage, it helps to know a little about the technology. IB was originally developed to replace the aging shared-bus paradigm in servers (for example, PCI slots) with a high-speed, low-latency switched serial I/O interconnect. A fabric is created by installing an IB host channel adapter (HCA) in each server, or a target channel adapter (TCA) in each I/O device, and then interconnecting the devices through an IB switch. Advances in silicon now allow native IB ports to be implemented directly on devices and motherboards, eliminating the need for standalone expansion cards.
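
For readers who want to see how an HCA surfaces to software, the short sketch below uses the OpenFabrics libibverbs API (not something this article covers) to list the channel adapters a host sees and report the state of each port. It's an illustration only, and assumes the verbs library and an IB driver stack are installed on the host.

#include <stdio.h>
#include <infiniband/verbs.h>

/* List the InfiniBand channel adapters visible on this host and report
 * the state of each physical port. Build with: gcc list_hcas.c -libverbs */
int main(void)
{
    int num = 0;
    struct ibv_device **devices = ibv_get_device_list(&num);
    if (!devices || num == 0) {
        fprintf(stderr, "no IB devices found\n");
        return 1;
    }

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devices[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr dev_attr;
        if (ibv_query_device(ctx, &dev_attr) == 0) {
            printf("%s: %d physical port(s)\n",
                   ibv_get_device_name(devices[i]), dev_attr.phys_port_cnt);

            for (int port = 1; port <= dev_attr.phys_port_cnt; port++) {
                struct ibv_port_attr port_attr;
                if (ibv_query_port(ctx, port, &port_attr) == 0)
                    printf("  port %d: %s\n", port,
                           port_attr.state == IBV_PORT_ACTIVE ?
                           "active" : "not active");
            }
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(devices);
    return 0;
}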

[Sidebar: Basic InfiniBand channels and data rates]

IB bandwidth can potentially dwarf that of other network technologies like 4Gb/sec FC or even 10Gb Ethernet (10GbE). IB is based on 2.5Gb/sec bi-directional serial channels that provide point-to-point connectivity between devices on the fabric. Channels can be aggregated for vastly improved throughput; a typical four-channel (4X) IB link, for example, offers throughput of up to 10Gb/sec. Links can also run at single, double and quadruple data rates for even more throughput (see "Basic InfiniBand channels and data rates," this page). While 10Gb/sec is common today, 20Gb/sec to 30Gb/sec deployments are available, and the IB roadmap calls for speeds of up to 120Gb/sec over copper in 2007.
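
The arithmetic behind those numbers is simple: multiply the 2.5Gb/sec per-lane signaling rate by the link width (1X, 4X or 12X) and by the data-rate multiplier (single, double or quadruple). The snippet below, purely illustrative and not from the article, tabulates the combinations; note that the figures quoted above are raw signaling rates, and the 8b/10b line encoding used on IB links leaves roughly 80% of that rate available as payload data.

#include <stdio.h>

/* Tabulate raw InfiniBand signaling rates: 2.5Gb/sec per lane, multiplied
 * by the link width (1X, 4X, 12X) and the data-rate multiplier (SDR=1,
 * DDR=2, QDR=4). The 8b/10b line encoding leaves roughly 80% of the
 * signaling rate available for payload data. */
int main(void)
{
    const double lane_gbps = 2.5;
    const int widths[] = { 1, 4, 12 };
    const struct { const char *name; int mult; } rates[] = {
        { "SDR", 1 }, { "DDR", 2 }, { "QDR", 4 }
    };

    for (int w = 0; w < 3; w++) {
        for (int r = 0; r < 3; r++) {
            double signaling = widths[w] * lane_gbps * rates[r].mult;
            printf("%2dX %s: %6.1f Gb/sec signaling, ~%6.1f Gb/sec data\n",
                   widths[w], rates[r].name, signaling, signaling * 0.8);
        }
    }
    return 0;
}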

But the appeal of IB is more than just speed. These data rates are achieved with latencies of just a few microseconds, a fraction of the latencies found in Ethernet networks. IB is also a switched fabric, so many devices can share the same I/O infrastructure, allowing the fabric to scale to huge numbers of nodes. An example of this scalability is the 4,500-node InfiniBand cluster recently deployed by Sandia National Laboratories.

Another key advantage with IB is its ability to handle memory-to-memory data transfers between servers using Remote Direct Memory Access (RDMA). With RDMA, one server can "look into" another server and read just a few bytes (or the entire memory contents) without the direct intervention of either server's CPU. This frees processing power to continue with more compute-intensive tasks. Although RDMA technology will soon be integrated into 10GbE as the Internet Wide Area RDMA Protocol (iWARP), RDMA is native to the current InfiniBand specification. IB storage systems can easily leverage RDMA by treating the storage device as a server--passing data between server memory and a storage system's internal cache.
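
To make the RDMA model more concrete, here is a minimal sketch of an RDMA read using the OpenFabrics verbs API. It is an illustration built on several assumptions rather than anything described in the article: it presumes a reliable-connection queue pair has already been created and connected, the local buffer has been registered with ibv_reg_mr(), and the remote node's buffer address and rkey have been exchanged out of band (over a TCP socket, for instance). The function name rdma_read_remote is invented for the example.

#include <stdint.h>
#include <string.h>
#include <infiniband/verbs.h>

/* Pull 'len' bytes from a remote node's registered memory into a local
 * buffer with an RDMA READ. The remote CPU is never involved; its HCA
 * services the request directly. Queue-pair setup, memory registration
 * and the out-of-band exchange of remote_addr/rkey are assumed to have
 * happened elsewhere. */
static int rdma_read_remote(struct ibv_qp *qp, struct ibv_mr *local_mr,
                            void *local_buf, uint32_t len,
                            uint64_t remote_addr, uint32_t rkey)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,   /* where the fetched data lands */
        .length = len,
        .lkey   = local_mr->lkey,         /* key from ibv_reg_mr() */
    };

    struct ibv_send_wr wr, *bad_wr = NULL;
    memset(&wr, 0, sizeof(wr));
    wr.opcode              = IBV_WR_RDMA_READ;  /* read, not send/receive */
    wr.sg_list             = &sge;
    wr.num_sge             = 1;
    wr.send_flags          = IBV_SEND_SIGNALED; /* generate a completion */
    wr.wr.rdma.remote_addr = remote_addr;       /* learned out of band */
    wr.wr.rdma.rkey        = rkey;

    /* Post the work request; the completion is reaped later from the
     * completion queue with ibv_poll_cq(). */
    return ibv_post_send(qp, &wr, &bad_wr);
}

An IB storage system can use the same mechanism, moving data between server memory and its internal cache without interrupting the host CPU, which is the "treat the storage device as a server" approach described above.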

This was first published in April 2006
