Feature

InfiniBand Storage in Limbo


As a high-speed, low-latency channel interconnect, InfiniBand is beginning to get a lot of attention from the high-performance computing community, which is using the technology to build large clusters out of commodity Intel hardware.

Recent InfiniBand deployments include the Pleiades supercomputing cluster at Pennsylvania State University, in which 160 dual-processor Sun Fire V60x servers are connected by a 10Gb/s fabric based on the InfinIO 3000 Switch from InfiniCon Systems of King of Prussia, PA. InfiniCon also makes InfiniBand host channel adapters (HCAs) and Fibre Channel (FC) and Ethernet gateway technology.

With bandwidth of up to 30Gb/s, a single InfiniBand fabric can comfortably support both server and storage I/O. But as it stands, "storage won't connect natively to InfiniBand for a long while," says Chuck Foley, InfiniCon executive vice president.

Executives at Topspin Communications, another InfiniBand startup and one that recently announced a five-year reseller agreement with IBM, concur. Native InfiniBand storage is "a 2005, 2006 development," says Stu Aaron, vice president of marketing at Topspin.

That's a shame, says John Blackman, governing board member for the Storage Networking Industry Association (SNIA) end-user council. Assuming a redundant, dual-ported configuration, Blackman estimates it costs about $7,500 in connectivity (HBAs and switch ports) to tie a single server into legacy infrastructure, i.e., a Gigabit Ethernet LAN and FC fabric. That same server could connect into an InfiniBand fabric for half that, with a full 10Gb/s of bandwidth.
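
Blackman's math is easy to check on the back of an envelope. In the rough sketch below, the dollar figures are his estimates as quoted above; the per-server bandwidth totals (one Gigabit Ethernet link plus one 2Gb/s FC link for legacy, a single 10Gb/s InfiniBand link) are assumptions inferred from the link speeds mentioned in this article, not vendor quotes.

    # Back-of-the-envelope comparison of per-server connectivity costs.
    # Dollar figures are Blackman's estimates; the bandwidth totals are
    # assumptions based on the link speeds cited in this article.

    legacy_cost = 7500             # GigE + FC HBAs and switch ports, dual-ported
    legacy_bandwidth_gbps = 1 + 2  # assumed: one GigE link plus one 2Gb/s FC link

    ib_cost = legacy_cost / 2      # "half that," per Blackman
    ib_bandwidth_gbps = 10         # a single 10Gb/s InfiniBand host link

    for name, cost, bw in [
        ("Legacy (GigE + FC)", legacy_cost, legacy_bandwidth_gbps),
        ("InfiniBand", ib_cost, ib_bandwidth_gbps),
    ]:
        print(f"{name}: ${cost:,.0f} total, ${cost / bw:,.0f} per Gb/s")

Even granting the rough assumptions, the gap in cost per Gb/s is roughly an order of magnitude, which is what gives Blackman's complaint its bite.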

For the time being, most InfiniBand shops bridge FC in the InfiniBand switch. According to Topspin's Aaron, the company sells optional Gigabit Ethernet or FC gateway technology into most of its accounts. "Most customers use the gateway technology in combination with the switches; it's practically a 1:1 ratio."

But at only 2Gb/s per port, FC arrays deliver data at a fraction of the rate InfiniBand can handle. To keep the storage array from becoming a bottleneck, you have to trunk multiple FC ports together, Blackman says.
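
The trunking arithmetic is simple: divide the InfiniBand rate by the 2Gb/s FC port rate and round up. A minimal sketch, using the nominal rates cited in this article and ignoring protocol overhead and gateway limits:

    import math

    # How many 2Gb/s FC ports must be trunked so the storage array can
    # keep pace with an InfiniBand link? Nominal rates only; real-world
    # protocol overhead would push the count higher.

    FC_PORT_GBPS = 2

    def fc_ports_needed(ib_gbps):
        """Smallest FC trunk that matches a given InfiniBand rate."""
        return math.ceil(ib_gbps / FC_PORT_GBPS)

    print(fc_ports_needed(10))  # 10Gb/s host link -> 5 FC ports
    print(fc_ports_needed(30))  # 30Gb/s fabric link -> 15 FC ports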

The need for native InfiniBand storage should get stronger as we start to see denser server blades, predicts Topspin's Aaron. "One of the ways you build denser systems is you pull the I/O slots out," he says. By aggregating server and storage I/O on a single InfiniBand fabric, "customers could see a tremendous cost savings."

But Blackman, for one, isn't holding his breath. "There's a lot of fat [profit] in Fibre Channel," he says. "That's why vendors don't want to move away from it."

This was first published in February 2004
