InfiniBand networking for storage and converged data centers

InfiniBand has expanded from its HPC roots to take on new, more mainstream use cases in the data center.

InfiniBand networking has played a key role in high-speed, switched fabric technology. InfiniBand has made steady progress in the high-performance computing (HPC) world by stitching large clusters of compute nodes tightly together to greatly accelerate system performance. Despite InfiniBand's higher bandwidth, lower latency and overall cost efficiency, it has always faced an uphill battle for broader adoption. But today, as my colleague Mike Matchett recently observed in a Taneja Group report, significant use cases beyond HPC are emerging that are making InfiniBand an attractive, if not inevitable, choice for the core of the enterprise data center.

InfiniBand is a network interconnection protocol delivered via a flat, switched fabric architecture under centralized management. It differs from Ethernet in that it was designed from the start for extremely low-latency, high-throughput and lossless delivery. InfiniBand is ideal for data center consolidation architectures where the goal is to reduce the total amount of physical compute assets deployed and bring the rest into a homogeneous environment. Because it can support multiple, higher-level protocols simultaneously over a single cable while providing the high bandwidth required for dense infrastructures, it becomes a "virtualized" networking offering that, in turn, naturally serves as a high-performing and flexible interconnect for any highly virtualized computing environment.

As it continues to surpass Ethernet in bandwidth, latency and ultimately total cost, InfiniBand has been gaining adoption slowly and steadily as a data center fabric. As we talk to end users, it's clear that a few dominant new use cases for InfiniBand are rising to the top, such as:

  • Big data and big database offerings
  • Virtualized cloud infrastructures
  • Web-scale applications
  • Scale-out shared storage

Use case 1: Big data gets a big boost

Vendor-specific business intelligence and scalable big data offerings like Oracle's Exadata have employed InfiniBand internally on the back end for years. For scale-out applications like these that require high computational density and massive internal data flows, InfiniBand is an ideal fabric solution.

Users deploying open-source Hadoop and Hadoop-like offerings should note how much more powerful their "white box" big data cluster could be if it were interconnected with InfiniBand. Since InfiniBand used within a Hadoop cluster can double the analytical throughput compared to 10 Gbps Ethernet (10 GbE), the savings accrued from needing fewer nodes and/or doing more analysis faster will often more than cover the incremental fabric cost.
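As a rough illustration of that cost argument, here is a back-of-envelope sketch in Python. The throughput-doubling figure comes from the claim above, but every price and throughput number below is a hypothetical placeholder, not a vendor quote.

```python
# Back-of-envelope sketch of the fabric trade-off described above.
# The 2x per-node throughput figure reflects the claim in the text;
# all costs and throughput units are illustrative assumptions.

def nodes_needed(target_throughput, per_node_throughput):
    """Smallest cluster size that meets the aggregate throughput target."""
    return -(-target_throughput // per_node_throughput)  # ceiling division

TARGET = 400            # required aggregate analysis throughput (arbitrary units)
NODE_10GBE = 10         # per-node throughput on 10 GbE (arbitrary units)
NODE_IB = 20            # doubled per-node throughput on InfiniBand

COST_NODE = 5000        # hypothetical cost per white-box node, USD
COST_10GBE_PORT = 400   # hypothetical per-node 10 GbE fabric cost, USD
COST_IB_PORT = 900      # hypothetical per-node InfiniBand fabric cost, USD

n_eth = nodes_needed(TARGET, NODE_10GBE)          # 40 nodes
n_ib = nodes_needed(TARGET, NODE_IB)              # 20 nodes

cost_eth = n_eth * (COST_NODE + COST_10GBE_PORT)  # 216000
cost_ib = n_ib * (COST_NODE + COST_IB_PORT)       # 118000

print(n_eth, n_ib, cost_eth, cost_ib)
```

Even with a pricier per-port fabric, halving the node count dominates the total in this toy model, which is the shape of the savings the text describes.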

As long as data continues to grow in both volume and variety, high-performance data movement will remain a major data center networking challenge, and InfiniBand's high throughput and low latency make it a compelling alternative.

Use case 2: Virtual I/O for the virtualized data center

To meet the availability and performance requirements of mission-critical applications in a virtual environment, IT must virtualize the whole I/O data path, including shared storage and connecting networks. The virtual I/O path, in turn, must support multiple protocols and dynamic reconfiguration.

To enable the mobility of virtual machines (VMs) running mission-critical applications, a VM must be able to seamlessly and quickly take its entire network and storage "perspective" with it wherever it goes. This means the physical fabric supporting it must be equally connected and capable at every host. But since hosts often have a variety of different physical adapters and physical network connections, "liquefying" VM movement is a challenge. A converged, flat network fabric like InfiniBand presents a single, huge "fat pipe" that can be logically carved out and dynamically provisioned as needed, which makes it ideal for dense, highly mobile virtual implementations.
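The "logically carved out" fat pipe above can be pictured as a simple bandwidth allocator. The sketch below is purely illustrative Python, not a real InfiniBand or hypervisor API; the class, link speed and per-VM reservations are invented for the example.

```python
# Illustrative sketch (not a real InfiniBand API): logically carving one
# converged fabric link into per-VM virtual I/O slices that can be
# provisioned and released dynamically, e.g. around VM migration.

class FatPipe:
    """A single physical link whose bandwidth is carved into logical slices."""

    def __init__(self, capacity_gbps):
        self.capacity_gbps = capacity_gbps
        self.allocations = {}  # vm_name -> reserved Gbps

    def available(self):
        return self.capacity_gbps - sum(self.allocations.values())

    def provision(self, vm_name, gbps):
        """Reserve a logical slice of the link for a VM's I/O path."""
        if gbps > self.available():
            raise ValueError(f"only {self.available()} Gbps free")
        self.allocations[vm_name] = gbps

    def release(self, vm_name):
        """Return a VM's slice to the pool, e.g. after it migrates away."""
        self.allocations.pop(vm_name, None)

# A 56 Gbps FDR-class link shared by several VMs' network and storage I/O.
pipe = FatPipe(capacity_gbps=56)
pipe.provision("vm-database", 20)
pipe.provision("vm-web", 10)
pipe.release("vm-web")               # VM migrates to another host
pipe.provision("vm-analytics", 25)   # freed capacity re-provisioned at once
```

The point of the toy model is the operational pattern: one big pipe, carved and re-carved on demand, rather than a fixed set of heterogeneous physical adapters per host.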

Use case 3: Scale-out Web needs tight interconnection

Web-sized applications require an infrastructure that not only supports virtualized, cloud-like compute resources, but also provides the mobility and agility to reconfigure dynamically, no matter what the current data flow or interconnect requirements are. Here, InfiniBand's flat address space is a boon to service providers and large Web-based businesses alike.

For those applications where data flows are small and numerous (e.g., random storage writes, memcached/RDMA, message queues), cutting network latency in half can improve application-level performance and throughput by the same factor or more, seriously reducing requirements for infrastructure and yielding significant cost savings.
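The latency-to-throughput relationship can be made concrete with Little's Law (throughput = concurrency / latency): at a fixed number of in-flight requests, halving the round-trip latency doubles the achievable request rate. The latency and concurrency figures below are illustrative assumptions, not measurements.

```python
# Little's Law sketch: for a fixed number of outstanding small requests,
# throughput = concurrency / latency, so halving latency doubles throughput.
# All figures here are illustrative assumptions, not benchmark results.

def throughput(concurrency, latency_s):
    """Requests per second sustainable at a given per-request latency."""
    return concurrency / latency_s

CONCURRENCY = 64                # in-flight small requests (e.g. memcached gets)
ETH_LATENCY = 50e-6             # assumed 10 GbE round trip: 50 microseconds
IB_LATENCY = ETH_LATENCY / 2    # InfiniBand at half the latency

t_eth = throughput(CONCURRENCY, ETH_LATENCY)  # ~1.28 million req/s
t_ib = throughput(CONCURRENCY, IB_LATENCY)    # ~2.56 million req/s
print(t_ib / t_eth)                           # 2.0
```

This is the "same factor" part of the claim; the "or more" comes from second-order effects the model omits, such as reduced queuing and fewer servers needed at a given service-level target.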

Use case 4: Sharing and convergence drive density

As data centers become more dense, users can squeeze more juice out of their assets by deploying front-side interconnects that match the back-side capability. In other words, if InfiniBand is the choice for high-performance, scale-out storage subsystems internally, it should be considered for the front-end connectivity to that storage, too.

The recent movement toward expanding server-side storage will require an InfiniBand-like interconnect fabric if it is ever to be more than advanced cache; server-side storage will either have to share data directly with other servers or integrate tightly with external shared-storage devices. Converging storage and servers thus also converges front-side storage I/O with back-side storage I/O, and InfiniBand networking will play a prominent role in both places.

InfiniBand used to apply only to HPC and a few other specialized needs. But the demands of virtualization, cloud and big data are increasing the need for InfiniBand as a fabric solution.

About the author: 
Jeff Byrne is a senior analyst and consultant at Taneja Group.
