InfiniBand Storage in Limbo

Published: 17 Oct 2012

As a high-speed, low-latency channel interconnect, InfiniBand is beginning to get a lot of attention from the high-performance computing crowd, which is using the technology to build large clusters out of commodity Intel hardware. Recent InfiniBand deployments include the Pleiades supercomputing cluster at Pennsylvania State University, in which 160 dual-processor SunFire V60x servers have been connected using a 10Gb/s fabric based on the InfinIO 3000 Switch from InfiniCon Systems of King of Prussia, PA. InfiniCon also makes InfiniBand host channel adapters (HCAs) as well as Fibre Channel (FC) and Ethernet gateway technology.

With bandwidth at 30Gb/s, a single InfiniBand fabric can comfortably support both server and storage I/O. But as it stands, "storage won't connect natively to InfiniBand for a long while," says Chuck Foley, InfiniCon executive vice president. Executives at Topspin Communications, another InfiniBand startup, which recently announced a five-year reseller agreement with IBM, concur with that statement. Native InfiniBand storage is "a 2005, 2006...
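To put the 10Gb/s and 30Gb/s figures in context, here is a back-of-the-envelope comparison against the storage and network links of the era. This is an illustrative sketch, not from the article; it assumes 8b/10b line encoding (10 signal bits per 8 data bits), which applies to InfiniBand SDR, Fibre Channel, and gigabit Ethernet alike.

```python
# Rough comparison of signaling rate vs. usable data rate for
# early-2000s interconnects, all of which used 8b/10b encoding.
ENCODING_EFFICIENCY = 8 / 10  # 8 data bits per 10 signal bits

links_gbps_raw = {
    "InfiniBand 4x SDR": 10.0,   # 4 lanes x 2.5 Gb/s signaling
    "InfiniBand 12x SDR": 30.0,  # 12 lanes x 2.5 Gb/s signaling
    "Fibre Channel 2GFC": 2.125,
    "Gigabit Ethernet": 1.25,
}

for name, raw in links_gbps_raw.items():
    data = raw * ENCODING_EFFICIENCY
    print(f"{name}: {raw:g} Gb/s signaling -> {data:g} Gb/s data")
```

Even after encoding overhead, a 12x fabric leaves roughly 24Gb/s of usable bandwidth, an order of magnitude more than a 2GFC storage link, which is the arithmetic behind the claim that one fabric can carry both server and storage traffic.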

