If you're a high-performance storage user, this is your week: Engenio Information Technologies Inc. will unveil the industry's first storage array with native InfiniBand connectivity at the Supercomputing show in Seattle.
A technology in search of a market in 2001, InfiniBand peaked before anyone had a use for the high-speed interconnect. Today, however, Wall Street banks, universities and other organizations running large server farms are already using the technology to connect processor nodes.
"The next logical step is to connect storage over InfiniBand to these server farms," said Steve Gardner, director of product marketing at Engenio. "InfiniBand had a near-death experience as a storage interconnect, but as Linux clusters have taken hold, it has begun to see some light."
The new product is based on Engenio's 4 Gbps Fibre Channel dual-controller system, the 6998. The controller has a flexible host interface module that now supports InfiniBand. Engenio is using an InfiniBand host channel adapter from Mellanox Technologies Ltd. that provides two front-end InfiniBand ports.
Originally developed by the InfiniBand Trade Association, InfiniBand supports low-latency serial input/output between servers and storage devices within a fabric. By design, InfiniBand fabrics have the potential to perform better and to be more interoperable, reliable and scalable than existing technologies, according to the technology's advocates.
Aimed at users with transaction-heavy networks, InfiniBand is already hitting 20 Gbps in new Double Data Rate equipment, leaving Fibre Channel, at 4 Gbps, in the dust. Engenio's system currently supports 10 Gbps data rates. "We're definitely looking at this for storage," said Robert Mowles, storage administrator at JP Morgan Chase & Co. This year the bank replaced a supercomputer and a mixture of high-end servers with a 4000-processor grid of low-end servers connected by InfiniBand.
Engenio's Gardner said Cisco Systems Inc. "cranked up interest in InfiniBand" when it acquired InfiniBand switch maker TopSpin Communications for $250 million in April. Cisco plans to build networks connecting Fibre Channel, Ethernet and InfiniBand switches through the TopSpin Server Fabric Switch InfiniBand-to-Ethernet and InfiniBand-to-Fibre Channel gateways. TopSpin's Vframe virtualization software would then manage this as one single network.
If Cisco's vision sounds a little far-fetched, that's because InfiniBand is largely limited to high-performance computing environments today. Most users at the Storage Decisions show in Las Vegas last week had little familiarity with InfiniBand, and some had never heard of it. It's also worth noting that InfiniBand does not yet work over distance, which could limit its potential as a mainstream technology.
Obsidian Research Corp., based in Canada, is working on fixing this problem. The startup's two-port, 1U box, called the Longbow XR, encapsulates InfiniBand traffic in a variety of WAN links. The technology promises to link large clusters across geographic distances over existing networks. It's one to watch, analysts say.