Storage networks get virtual

The benefits of virtualization can now be applied to storage networks. Read how new products make it possible to pool and share storage networking resources.

Servers are virtualized, storage is virtualized, but what about your storage network? New products now make it possible to pool and share networking resources.

Storage network virtualization, also known as I/O virtualization (IOV) or I/O consolidation, comprises an emerging family of technologies that extend the concept of virtualization to the major types of input and output (I/O) handled by today's servers.

In recent years, data centers have been deploying server and storage virtualization technologies to make better use of underutilized computing assets and to create more flexible infrastructures. By decoupling the logical function from the physical hardware, virtualization allows hardware to be pooled and shared, improving utilization. Once in place, virtualization makes new server or storage deployments much quicker and simplifies changes to the existing infrastructure. For example, it's much easier to deploy new virtual servers than physical ones. And when storage systems are virtualized, many of the data migration issues related to new array deployments can be avoided by simply adding the new capacity to the existing pool of storage resources.

Virtualization has long been applied to a number of different computing technologies. Although storage virtualization has its roots in the mainframe world, it's only now beginning to gain wider adoption. Server virtualization, on the other hand, has become the poster child of virtualization in the last few years. A handful of vendors are now applying similar virtualization techniques to the "connective tissue" that links servers to storage in enterprise environments.

But I/O virtualization isn't exactly a brand new idea either, with virtualization concepts already being used for some network I/O technologies today. For example, a virtual local-area network (VLAN) separates the logical and physical aspects of a network so that one physical network appears as and can be managed as several smaller logical networks. Network interface card (NIC) teaming combines two or more network adapters and makes them appear to function as a single adapter with increased bandwidth. In both cases, logic in the hardware and management software layers allows the decoupling of the logical functions from the physical hardware, making it possible to carve up the hardware and share it as separate units, or to combine it to present it as one larger unit.
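
To make the decoupling concrete, here's a minimal sketch of how a Linux host could carve one physical NIC into two VLAN-based logical networks and team two other NICs into a single bonded interface. It assumes a modern Linux system with the standard iproute2 tools; the interface names and VLAN IDs are hypothetical.

```python
# Minimal sketch: carving one physical NIC into VLAN-tagged logical networks
# and combining two NICs into a single bonded (teamed) interface.
# Assumes a modern Linux host with iproute2; "eth0"/"eth1"/"eth2" are
# hypothetical interface names that would differ in a real environment.
import subprocess

def run(cmd):
    """Run an iproute2 command and fail loudly if it errors."""
    subprocess.run(cmd, check=True)

# One physical network presented as two smaller logical networks (VLANs 10 and 20)
for vlan_id in (10, 20):
    run(["ip", "link", "add", "link", "eth0",
         "name", f"eth0.{vlan_id}", "type", "vlan", "id", str(vlan_id)])
    run(["ip", "link", "set", f"eth0.{vlan_id}", "up"])

# Two physical adapters presented as one larger logical adapter (NIC teaming/bonding)
run(["ip", "link", "add", "bond0", "type", "bond", "mode", "802.3ad"])
for slave in ("eth1", "eth2"):
    run(["ip", "link", "set", slave, "down"])       # interfaces must be down to enslave
    run(["ip", "link", "set", slave, "master", "bond0"])
run(["ip", "link", "set", "bond0", "up"])
```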

PCI Express and I/O virtualization

A server in an enterprise data center typically needs access to a LAN, a storage-area network (SAN) and local direct-attached storage (DAS). Some servers also need access to high-end graphics processing. A server's access to these resources usually comes by way of an internal system bus. In a newer multicore physical server with a high-speed PCI Express (PCIe) bus, all of these I/O "pipes" occasionally hit peak bandwidth, but rarely simultaneously or on a sustained basis (see "PCI Express boosts I/O virtualization," below). With many virtualized servers running on a single physical server, these I/O pipes are busier, but aren't likely to be running at full bandwidth simultaneously or on a sustained basis.
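
The arithmetic below illustrates that point with assumed figures rather than measurements: the peak bandwidth provisioned across a server's separate LAN, SAN and DAS connections far exceeds what the server typically draws at any one moment, which is what makes those connections candidates for pooling.

```python
# Illustrative arithmetic only: the link speeds and utilization figure below are
# assumptions, not measurements, chosen to show why separate I/O "pipes" that
# rarely peak together are candidates for pooling.
dedicated_links_gbps = {
    "LAN (2 x 1 GbE)": 2.0,
    "SAN (2 x 4 Gb FC)": 8.0,
    "DAS (SAS controller)": 12.0,
}
provisioned = sum(dedicated_links_gbps.values())

# Assumed typical concurrent utilization of all pipes on a busy virtualized host
typical_utilization = 0.25
typical_demand = provisioned * typical_utilization

print(f"Provisioned peak bandwidth: {provisioned:.0f} Gbps")
print(f"Assumed typical concurrent demand: {typical_demand:.0f} Gbps")
# With demand this far below the provisioned total, several servers could share
# a pooled set of adapters instead of each owning its own.
```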

PCI Express boosts I/O virtualization

The PCI-SIG, the special interest group responsible for PCI Express (PCIe) industry-standard I/O technology, announced the completion of the PCI-SIG I/O virtualization (IOV) suite of specifications in June 2008. These specifications enable virtualization solutions to tackle the most I/O-intensive workloads by removing performance bottlenecks in both software and hardware virtualization components. The IOV suite provides a set of technologies that can be used by providers of processors, chipsets and I/O fabrics, and has implications for hypervisors and operating systems. These technologies provide:

  • Address Translation Services (ATS) so that I/O devices can request and cache address translations, reducing virtualization overhead
  • Single-Root IOV (SR-IOV) for native I/O virtualization in existing PCI Express topologies within a single server
  • Multi-Root IOV (MR-IOV) for native I/O virtualization in new PCI Express topologies where multiple servers share a PCIe fabric

This set of specifications promises to trigger new virtualization solutions that provide improved performance and lower power consumption, along with new terminology that will change the way we view I/O to and from a server.
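
As a concrete taste of what SR-IOV looks like in practice, the sketch below uses the standard Linux sysfs interface to enable virtual functions on an SR-IOV-capable adapter. It's a minimal illustration that assumes such an adapter and driver support; the interface name is hypothetical.

```python
# Minimal sketch: enabling SR-IOV virtual functions (VFs) on a Linux host.
# Assumes an SR-IOV-capable NIC whose driver supports the standard sysfs
# interface; "eth0" is a hypothetical interface name and root privileges are required.
from pathlib import Path

device = Path("/sys/class/net/eth0/device")

# How many virtual functions the physical function (PF) can expose
total_vfs = int((device / "sriov_totalvfs").read_text())
print(f"Adapter supports up to {total_vfs} virtual functions")

# Enable four VFs (never more than total_vfs); each VF appears to the OS or a
# hypervisor as its own PCIe device that can be passed through to a VM.
requested = min(4, total_vfs)
(device / "sriov_numvfs").write_text(str(requested))
```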

What if, instead of installing separate network and storage adapters in every server, the PCIe bus adapters could be virtualized and shared across multiple servers? Consider the potential cost and power savings for NICs, host bus adapters (HBAs) and SAS/SATA disk controller cards that could be shared across a rack of servers. A rack full of servers could have only one cable per server connecting it to a virtualized set of I/O adapters at the top of the rack. That top-of-rack unit could then dynamically direct all LAN, SAN and DAS traffic to the appropriate location, such as end-of-row switches, leaving the servers to focus on computing. This "rack-area network" (RAN) concept can give an entire rack of servers some of the same benefits as blade servers, but without the limitations of a blade server chassis. The consolidation realized in this scenario would also mean that the rack servers could shrink to 1 rack unit (1U) or even one-half of a rack unit (1/2U).
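
The toy model below sketches the resource-pool idea behind such a rack-area network: a top-of-rack unit holds a few physical adapters and carves out virtual adapters for servers on request. The class names, adapter names and capacities are illustrative assumptions, not any vendor's actual API.

```python
# Conceptual sketch only: a toy model of a top-of-rack IOV unit that holds a
# pool of physical adapters and hands out virtual adapters to servers on demand.
from dataclasses import dataclass, field

@dataclass
class PhysicalAdapter:
    name: str                 # e.g. "10GbE-NIC-1", "8GbFC-HBA-1" (hypothetical)
    bandwidth_gbps: float     # total bandwidth the card can offer
    allocated_gbps: float = 0.0

    def free_gbps(self) -> float:
        return self.bandwidth_gbps - self.allocated_gbps

@dataclass
class RackIOVUnit:
    adapters: list[PhysicalAdapter] = field(default_factory=list)
    assignments: dict[str, tuple[str, float]] = field(default_factory=dict)

    def assign(self, server: str, kind: str, gbps: float) -> str:
        """Carve a virtual adapter of the requested size out of a matching physical card."""
        for card in self.adapters:
            if kind in card.name and card.free_gbps() >= gbps:
                card.allocated_gbps += gbps
                self.assignments[server] = (card.name, gbps)
                return f"{server}: {gbps} Gbps virtual adapter on {card.name}"
        raise RuntimeError(f"No {kind} capacity left for {server}")

rack = RackIOVUnit([PhysicalAdapter("10GbE-NIC-1", 10.0), PhysicalAdapter("8GbFC-HBA-1", 8.0)])
print(rack.assign("server-01", "NIC", 2.0))   # LAN share for one server
print(rack.assign("server-02", "FC", 4.0))    # SAN share for another
```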

Consider the movement of a virtual machine (VM) from one physical server to another. Typically, this requires a SAN, because SAN storage is separate from the physical server and can be accessed from any server, assuming all of the security, zoning and logical unit number (LUN) masking issues have been addressed. What if the movement of virtual machines could be made to work with any storage, rather than requiring a SAN? I/O virtualization-capable adapters would run some of the hypervisor functions in hardware, offloading the host CPU and freeing up CPU resources that could be used to host additional virtual machines or applications.

I/O virtualization vs. other networking technologies

Ethernet Data Center Bridging (DCB) and Fibre Channel over Ethernet (FCoE) are a pair of young technologies that are slightly more mature than I/O virtualization in today's marketplace. Together, DCB and FCoE allow for hardware consolidation by combining lossless Ethernet with Fibre Channel at the switch and at the host adapter. The DCB/FCoE combination provides some of the same type of consolidation that I/O virtualization does, but it's actually complementary to IOV. Because DCB/FCoE converged adapters run on the PCI Express bus, they can be used in an I/O virtualization environment and, therefore, could be shared across multiple servers. The host adapters that support DCB and FCoE currently support, or will soon support, IOV technologies such as Single-Root IOV (SR-IOV). An IOV environment can communicate with existing Ethernet, Fibre Channel and DCB/FCoE switches using existing adapters and, as far as the host servers are concerned, they're connected directly to those switch environments.

InfiniBand is another high-speed, low-latency network technology that's typically used in compute cluster environments for server-to-server communication. InfiniBand provides faster speeds than Ethernet today. The newer InfiniBand host adapters, known as host channel adapters (HCAs), run on the PCI Express bus and can support I/O virtualization. In addition, some vendors are developing I/O virtualization solutions built around InfiniBand technology, using InfiniBand as the high-speed carrier for the IOV infrastructure.

Current I/O virtualization products

The general IOV approach that most current products take is to connect the local host servers into a top-of-rack unit that holds a variety of network, storage and graphics adapters that can act as a dynamic pool of I/O connectivity resources. The top-of-rack device acts as an I/O fabric for the servers in the rack, and can communicate with other servers in the rack or can connect to end-of-row switches for more distant resources. These IOV top-of-rack units may be less expensive than some of the newer high-speed top-of-rack switches.

Two specific implementation models for I/O virtualization are emerging: PCIe- and InfiniBand-based approaches.

One approach to IOV is to extend the PCI Express bus out of the server chassis and into a separate box or chassis populated with IOV-capable adapters that can be shared across multiple servers. The I/O virtualization box would be installed in a rack and would function somewhat similarly to a top-of-rack switch, except that instead of only supporting Ethernet or Fibre Channel, this IOV box would act as a type of fabric switch for all LAN, SAN, DAS and possibly graphics traffic. At least three companies are working on products that extend the PCI Express bus into a separate box for the purpose of virtualizing I/O adapters. One advantage to this approach is that servers today already support PCI Express. Some IOV vendors now have first-generation products available and some are publicly discussing products that will appear this year. Some of these products require support for SR-IOV or Multi-Root IOV (MR-IOV), but others don't have that requirement. These products are built around the PCI Express 2.0 specifications, and vendors already have PCI Express 3.0 plans in their product roadmaps.

Aprius Inc. is a small vendor that's building a PCI Express gateway device that will support almost any type of PCI Express adapter (including network cards, storage controllers and graphics coprocessors) that can then be shared across multiple servers. These adapters basically form an I/O resource pool that can be dynamically assigned to physical or virtual servers.

NextIO helped develop the PCI-SIG I/O virtualization specifications and had some IOV products as early as 2005. The company works in several areas, including the high-performance computing (HPC) market, and is interested in virtualizing graphics coprocessing in addition to traditional networking and storage I/O traffic. It's partnering with several big-name vendors for a variety of IOV applications.

VirtenSys Inc. extends the PCIe bus with its I/O virtualization switches that can virtualize the major types of server networking and storage connectivity, as well as interprocessor communication (IPC) for HPC compute cluster environments.

Another approach to I/O virtualization is to use an existing network interconnect technology such as InfiniBand or 40 Gb Ethernet as the transport for virtualizing I/O adapters. Two companies are building products to handle IOV in this fashion:

Mellanox Technologies Ltd., well-known for its InfiniBand products, provides I/O consolidation solutions using either InfiniBand or 10 Gb Ethernet (10 GbE) as the transport for performing IOV. The company is also building 40 Gb Ethernet adapters that are compliant with SR-IOV.

Xsigo Systems Inc. uses InfiniBand HCAs that connect to its I/O Director, which provides the infrastructure for IOV-capable adapters. One reason for using InfiniBand is its high speed and very low latency. Inside the I/O Director are the same PCI Express network and storage adapters that would otherwise be installed in each host server. Xsigo's I/O Director has been available for approximately two years, and the company has established partnerships with a number of storage vendors, including Dell Inc. and EMC Corp.

Many network and storage adapter vendors are working on full support for I/O virtualization, especially for compliance with the SR-IOV and/or MR-IOV specifications. The vendor roster includes Emulex Corp., Intel Corp., LSI, Neterion Inc., QLogic Corp. and others. The big server vendors, including Dell, Hewlett-Packard (HP) Co. and IBM Corp., are beginning to demonstrate solutions that support I/O virtualization, either in their rack servers or blade servers, or both. Cisco Systems Inc. has also joined the movement with its Cisco UCS M81KR Virtual Interface Card. The big processor vendors, Advanced Micro Devices (AMD) Inc. and Intel, include virtualization technologies that help enable some of these IOV functions.

How and when to implement I/O virtualization

Implementation of I/O virtualization technologies will most likely be a slow, deliberate process. That's because the work to make all the adapters function in this manner isn't complete yet, and because the top-of-rack IOV units are still in their early stages. For I/O virtualization to work properly, development work needs to be completed on the adapter hardware and firmware, drivers, operating systems and hypervisors. Several vendors will be announcing support for various forms of IOV in 2010, and it's anticipated that IOV will emerge as one of the top new technologies for the year. However, expect I/O virtualization to take a few years to become commonplace.

Look for 10 Gb Ethernet adapters to be the first to fully support IOV. IOV-capable 10 GbE adapters were demonstrated publicly at a number of trade shows in 2009. After the Ethernet adapters, expect storage adapters such as Fibre Channel HBAs, FCoE converged network adapters (CNAs) and non-RAID SAS/SATA adapters to support I/O virtualization. The last category of storage adapters likely to fully support IOV is RAID controllers, due to the complexity of sharing RAID functions across servers. Separately, some graphics coprocessor adapters will support IOV, with some products possibly available in 2010.

Hairpin turns

In an I/O virtualization (IOV)-capable environment, traffic can be sent out of one virtual adapter and into another virtual adapter without regard to the underlying physical hardware. This leads to the very interesting possibility that traffic could be entirely contained in a single physical adapter, which is known as the IOV "hairpin turn." The application for this might be a virtual machine (VM) communicating with another virtual machine through their respective virtual network interface cards (NICs), where the virtual machines reside on the same physical server and the virtual NICs reside on the same physical NIC. In this case, the physical NIC is functioning as a mini-switch. This analogy also works with the top-of-rack IOV units, where the physical adapter is external to the server, but functions in the exact same way.
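
A small sketch of the forwarding decision may help: if the source and destination virtual NICs map to the same physical adapter, the frame can hairpin inside that adapter; otherwise it heads out to the external switch. The mappings and names below are illustrative assumptions, not any adapter's actual firmware logic.

```python
# Conceptual sketch only: the IOV "hairpin turn" decision.
# Which physical adapter each virtual NIC is carved from (hypothetical mapping)
vnic_to_physical = {
    "vm-a.vnic0": "physical-nic-1",
    "vm-b.vnic0": "physical-nic-1",   # same physical NIC as vm-a
    "vm-c.vnic0": "physical-nic-2",
}

def forward(src_vnic: str, dst_vnic: str) -> str:
    """Decide whether a frame can hairpin inside one physical adapter."""
    if vnic_to_physical[src_vnic] == vnic_to_physical[dst_vnic]:
        # Both virtual NICs live on the same physical card: the card acts as a
        # mini-switch and the frame never touches the external network.
        return f"hairpin turn inside {vnic_to_physical[src_vnic]}"
    # Otherwise the frame leaves the card and crosses the external switch fabric.
    return "forwarded to external switch"

print(forward("vm-a.vnic0", "vm-b.vnic0"))   # hairpin turn inside physical-nic-1
print(forward("vm-a.vnic0", "vm-c.vnic0"))   # forwarded to external switch
```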

A storage adapter could be made to operate the same way. Suppose a host server had a Fibre Channel, iSCSI or SAS adapter that was located in a top-of-rack IOV unit. A storage server could be located in the same rack and could, theoretically, use the same storage adapter in the IOV unit as its adapter to the outside world. The physical adapter in the IOV unit would have one virtual adapter configured as the initiator and another virtual adapter configured as the target. Interesting possibilities indeed!

Implementing IOV-capable adapters will require top-of-rack I/O virtualization units and either PCIe bus extender cards or InfiniBand HCAs for the host servers, depending on the implementation. The IOV-capable adapters are then placed in the top-of-rack IOV units and can be shared across servers. Drivers for these adapters will be needed, and few production-ready drivers for any operating system are currently available.

I/O virtualization should be implemented in stages, as with the adoption of any other new technology. Implementation should begin with pilot tests on a small number of servers; the pilot should run until the products operate in a stable manner and benefits can be demonstrated. The Demartek lab will be testing various IOV solutions during 2010, and we'll be able to provide firsthand commentary and results.

A good candidate environment for I/O virtualization might be a virtual server environment that would benefit from sharing some higher-end 10 GbE NICs or similar high-speed adapters. One of the goals of an IOV implementation may be to acquire the necessary I/O adapters based on the overall bandwidth needs of all the servers in a rack, rather than simply buying adapters based on raw server count. This will require adjustments to the planning process to account for applications and bandwidth usage, and may require more bandwidth measurements to be taken in current environments.
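
As a rough illustration of that planning shift, the arithmetic below sizes a shared pool of 10 GbE ports for a rack based on an assumed per-server peak demand and an assumed concurrency factor, rather than allocating one adapter per server. All of the figures are hypothetical.

```python
# Illustrative planning arithmetic only: the server count, measured peak and
# concurrency factor below are assumptions, not recommendations.
import math

servers_in_rack = 20
measured_peak_per_server_gbps = 3.0      # hypothetical measured LAN+SAN peak
concurrency_factor = 0.4                 # assume 40% of servers peak at once

# Traditional sizing: one adapter (or adapter pair) per server
adapters_per_server_model = servers_in_rack

# IOV sizing: buy enough shared 10 GbE ports to cover the rack's concurrent demand
rack_demand_gbps = servers_in_rack * measured_peak_per_server_gbps * concurrency_factor
shared_10gbe_ports = math.ceil(rack_demand_gbps / 10.0)

print(f"Per-server model: {adapters_per_server_model} adapters")
print(f"Rack-level IOV model: {shared_10gbe_ports} shared 10 GbE ports "
      f"for ~{rack_demand_gbps:.0f} Gbps of concurrent demand")
```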

Management issues with I/O virtualization

Managing virtual pools of I/O resources will require some new thinking. The adjustment is similar to what was required to effectively manage storage systems when SANs and virtualized storage solutions were first deployed. You'll need to understand that the I/O adapters and paths will no longer be exclusively owned by a particular server, in the same way that storage on a SAN isn't owned by a specific server. Rather, these adapters and paths will be dynamically assigned to servers, and can be released or adjusted as needed. Each of the vendors providing top-of-rack IOV units will have its own management interface for the I/O virtualization unit itself, along with some level of adapter management. In addition, each of the adapter manufacturers will provide a basic element manager, similar to what's provided today.

It remains to be seen how operating systems and hypervisors will view these virtualized I/O adapters. Because ownership of the adapters will no longer be tied to a particular operating system or hypervisor, the tools that manage these IOV resources will have to be aware that the resources can logically move around the data center and can take on multiple personalities.

BIO: Dennis Martin has been working in the IT industry since 1980, and is the founder and president of Demartek, a computer industry analyst organization and testing lab.

This was first published in February 2010
