Virtual I/O for storage networks

Virtualizing network resources can help reduce the contention for services and significantly improve performance.

The network -- both storage and IP -- is the next target in the march toward a totally virtualized data center. Virtual I/O is more than a nice-to-have feature; it’s essential to providing a more economical infrastructure that can meet the new I/O demands being placed on storage systems by server and desktop virtualization. Virtual I/O can be implemented in the host that’s connecting to the network and storage system, or it can be deployed in the infrastructure. Where it’s implemented may slightly alter the definition of virtual I/O. We’ll look at both approaches and describe how they’re different, as well as how they can work together.

Today’s server/host barely resembles its predecessors from four or five years ago. In the past, when a single server supported a single application, all its I/O capabilities were dedicated to that application. In the virtualized data center, the network interface cards (NICs) and storage host bus adapters (HBAs) in a host system are now shared across multiple virtual machines (VMs). In a traditional server architecture, those two I/O adapter types (NICs and HBAs) were separate from each other. Now, thanks to iSCSI, network-attached storage (NAS) and Fibre Channel over Ethernet (FCoE), they’re becoming unified (or “converged”) and potentially can all run on the same physical adapter card inside the host.

Welcome to the I/O blender

With virtualization and convergence, each VM now has to compete for I/O resources, forcing the adapter to handle multiple types of network and storage I/O traffic. Advances like 10 Gbps Ethernet, 10 Gbps FCoE, and 8 Gbps or 16 Gbps Fibre Channel (FC) provide plenty of bandwidth to meet the demands being placed on the host from all those virtual machines. The challenge is ensuring that the right VM gets the right amount of available bandwidth at the right time.

Three routes to virtual I/O

All I/O virtualization methods have the same goal: to reduce bottlenecks created by virtualized servers contending for network resources en route to the storage system. While there’s one goal, there are three general methods of virtualizing storage I/O:

  1. At the network adapter
  2. At the storage network switch
  3. Using an I/O gateway device

This right VM/right amount/right time process is increasingly important as server virtualization reaches its third phase: the virtualization of demanding, mission-critical applications. The first phase of virtualization typically involved test and development servers. The second was low-priority, low-demand servers. For those phases, a simple interrupt-driven, equal distribution of I/O across VMs was acceptable.

As mission-critical and performance-demanding applications are virtualized, I/O can’t be simply shared among VMs with each one treated equally. Certain VMs need to be guaranteed a higher class of service, and interrupts adversely impact CPU utilization. Predictability becomes a critical factor contributing to the successful migration of production applications to the virtual environment.

One way to address the need for predictable I/O for mission-critical workloads is to install one NIC or HBA per VM and dedicate each interface card to a specific mission-critical virtual machine. That will work, but it’s neither cost-effective nor practical in terms of slot space, and it would eventually limit the number of VMs that can run on each host.

Another alternative is to massively overprovision available storage and network bandwidth so the host has more than enough I/O to handle all the performance demands of the various VMs it’s supporting. But that approach isn’t cost-effective or efficient either, since most virtual machines don’t need full I/O performance at all times. In addition, there’s an efficiency loss in the interrupt-driven, “round-robin” queuing scheme the hypervisor would use to share the available bandwidth.
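
To make the contrast concrete, here’s a minimal Python sketch, not tied to any hypervisor’s actual scheduler, comparing an equal split of a 10 Gbps link across VMs with a weighted split that guarantees a larger share to a mission-critical workload. The VM names and weights are invented for illustration.

```python
# Minimal sketch contrasting equal (round-robin style) bandwidth sharing
# with weighted shares that favor mission-critical VMs.
# The VM names and weights below are hypothetical.

LINK_GBPS = 10.0

vms = {"erp-db": 4, "web-01": 1, "web-02": 1, "test-box": 1}  # weight per VM

def equal_share(vms, capacity):
    """Every VM gets the same slice, regardless of its importance."""
    return {vm: capacity / len(vms) for vm in vms}

def weighted_share(vms, capacity):
    """Bandwidth is divided in proportion to each VM's weight."""
    total = sum(vms.values())
    return {vm: capacity * weight / total for vm, weight in vms.items()}

print(equal_share(vms, LINK_GBPS))     # every VM gets 2.5 Gbps
print(weighted_share(vms, LINK_GBPS))  # erp-db gets ~5.7 Gbps, the rest ~1.4 Gbps
```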

Virtual I/O at the network adapter

I/O virtualization at the network adapter level, offered by companies like Brocade Communications Systems Inc., Emulex Corp. and QLogic Corp., allows fewer high-speed adapters to be provisioned and shared across a larger number of virtual machines while still guaranteeing the correct service levels for mission-critical VM applications. For example, a 10 GigE network adapter that’s virtual I/O-capable can either be divided into multiple virtual adapters or have its bandwidth allocated on a percentage basis to predetermined groups of VMs.

In that scenario, a single physical 10 GigE card could be divided into ten 1 GigE virtual cards. One of those virtual cards could be dedicated to virtual machine migration activities, a few could be dedicated to specific VMs that need guaranteed performance levels, and the remaining “cards” could be shared as a combined pool across all the other virtual machines.
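
A rough sketch of that carve-up, using an invented configuration model rather than any vendor’s adapter firmware or API, might look like this:

```python
# Sketch of dividing a 10 Gbps physical adapter into virtual adapters:
# one reserved for VM migration, two pinned to specific VMs, and the
# remainder pooled for everything else. All names are hypothetical.

PHYSICAL_GBPS = 10.0
SLICE_GBPS = 1.0

virtual_adapters = (
    [{"name": "vnic-migration", "gbps": SLICE_GBPS, "assigned_to": "vm-migration"}]
    + [{"name": f"vnic-dedicated-{i}", "gbps": SLICE_GBPS, "assigned_to": vm}
       for i, vm in enumerate(["erp-db", "exchange"], start=1)]
)

used = sum(v["gbps"] for v in virtual_adapters)
virtual_adapters.append(
    {"name": "vnic-shared-pool", "gbps": PHYSICAL_GBPS - used, "assigned_to": "all-other-vms"}
)

for v in virtual_adapters:
    print(f'{v["name"]}: {v["gbps"]} Gbps -> {v["assigned_to"]}')
```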

Because this is all done in hardware, the burden on the virtualization hypervisor is significantly reduced, which should return CPU resources to the host. In other words, the CPU cores don’t need to be interrupted to manage I/O sharing. Not only does the virtual I/O itself allow for greater VM density, it returns the CPU horsepower to support that density.

Another feature that’s appearing on these virtual I/O cards is the ability to create a virtual switch on the card. This is valuable in virtualized server environments in particular and can greatly reduce network traffic flowing out of the server. With this capability, two VMs on the same host could communicate directly with each other (a very common requirement). Instead of this traffic going all the way out to the physical switch, the virtual switch isolates the local traffic inside the physical host. This again helps internal virtual machine performance and improves overall network efficiency.
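
The forwarding decision an on-card virtual switch makes can be pictured in a few lines of Python; the host and VM placements below are made up for illustration.

```python
# Sketch of an on-card virtual switch: traffic between two VMs on the same
# host is switched locally instead of being sent to the physical switch.
# The VM-to-host placement is hypothetical.

vm_to_host = {"web-01": "host-a", "app-01": "host-a", "db-01": "host-b"}

def forward(src_vm, dst_vm):
    if vm_to_host[src_vm] == vm_to_host[dst_vm]:
        return "switched locally on the adapter's virtual switch"
    return "sent out the uplink to the physical switch"

print(forward("web-01", "app-01"))  # same host -> traffic stays inside the server
print(forward("web-01", "db-01"))   # different hosts -> traffic goes to the network
```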

Finally, network adapters that can provide I/O virtualization have the ability to virtualize the type of storage protocol used. For example, some of these cards support FC, FCoE, 10 Gbps Ethernet and iSCSI. A virtual I/O adapter should be able to reconfigure port usage on-the-fly without interrupting servers or virtual machines. Today, some adapters require a reboot, but that’s expected to change.
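
As a toy illustration of the idea, and not any adapter’s real management interface, a virtual port’s protocol could simply be a property that’s changed live while the server keeps running:

```python
# Toy model of a converged adapter port whose protocol can be reconfigured
# without rebooting the server. Purely illustrative, not a vendor API.

SUPPORTED = {"FC", "FCoE", "10GbE", "iSCSI"}

class VirtualPort:
    def __init__(self, name, protocol):
        self.name, self.protocol = name, protocol

    def reconfigure(self, new_protocol):
        if new_protocol not in SUPPORTED:
            raise ValueError(f"unsupported protocol: {new_protocol}")
        self.protocol = new_protocol  # applied live; no server or VM interruption

port = VirtualPort("port-0", "FC")
port.reconfigure("iSCSI")
print(port.name, "now running", port.protocol)
```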

Virtual I/O at the infrastructure level

Another area where I/O can be virtualized is in the infrastructure itself. This infrastructure virtualization could work in conjunction with virtualized network adapters or by itself. There are two types of I/O virtualization found in the infrastructure. The first is a virtualization of the switch infrastructure, basically an extension of virtual I/O at the adapter. The second is a gateway type of device that delivers broad I/O virtualization and is essentially a private I/O fabric, commonly called an I/O gateway.

Virtual I/O at the switch. Being able to control and allocate network bandwidth inside the host with virtual I/O network adapters certainly has significant value, but much of that optimization could be lost if the switch infrastructure doesn’t know how to manage it. Companies like Brocade and Cisco Systems Inc. offer switches that support virtual I/O and enable specific VMs to be guaranteed a certain level of performance throughout the rest of the network. At the switch layer, virtual machines can be identified and given certain policy settings, including those for performance characteristics. These are typically a low, medium or high quality of service, or a percentage of total bandwidth available.

But virtual I/O policy management isn’t limited to performance. Security and other settings can be configured per VM rather than per physical port. This is ideal in a virtualized server environment so that network settings will follow each VM when it’s migrated from one host to another.

What’s most interesting is that some vendors are working to provide a virtual I/O solution that allows for communication between the switch and the NIC so that policies set at the card level flow through the entire infrastructure between hosts and switches. Without this communication, a VM on a host that’s configured to receive 25% of the available network I/O bandwidth could lose this priority access when migrated to a secondary host. Virtual I/O at the switch layer will allow these types of configuration settings to follow the VMs as they’re moved around the environment.
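
One way to picture this is a policy table keyed by VM identity rather than by physical switch port, so the same guarantee applies wherever the VM lands after a migration. The sketch below uses hypothetical VM names, ports and percentages.

```python
# Sketch: QoS policy keyed by VM identity, not physical switch port, so the
# guarantee follows the VM when it migrates. Names and figures are hypothetical.

policies = {"erp-db": {"bandwidth_pct": 25, "qos": "high"},
            "web-01": {"bandwidth_pct": 5,  "qos": "medium"}}

vm_location = {"erp-db": ("host-a", "switchport-12")}

def migrate(vm, new_host, new_port):
    vm_location[vm] = (new_host, new_port)
    # The policy lookup is unchanged -- it never referenced the old port.
    return policies[vm]

print(migrate("erp-db", "host-b", "switchport-47"))  # still 25% bandwidth, high QoS
```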

Finally, some switches can even virtualize themselves. In this scenario, multiple independent switches installed in the network would appear to be one large switch. This allows for much simpler configuration and policy management because each individual switch doesn’t have to be logged into and managed. Virtualization across switches also provides greater availability management if one of the switches in the group fails.
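
Conceptually, the member switches behave like one logical device: policy is applied once to the group, and the loss of a single member doesn’t take the logical switch down. A purely illustrative sketch:

```python
# Sketch of several physical switches presented as one logical switch:
# one place to apply policy, and surviving members keep the logical
# switch available if a member fails. Illustrative only.

class LogicalSwitch:
    def __init__(self, members):
        self.members = set(members)
        self.policies = {}

    def apply_policy(self, name, policy):
        self.policies[name] = policy      # configured once, not per physical switch

    def member_failed(self, member):
        self.members.discard(member)
        return len(self.members) > 0      # logical switch stays up if any member survives

fabric = LogicalSwitch(["sw-1", "sw-2", "sw-3"])
fabric.apply_policy("erp-db", {"qos": "high"})
print(fabric.member_failed("sw-2"))  # True: still reachable via sw-1 and sw-3
```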

Virtual I/O gateways. Virtual I/O gateways, offered by firms such as Virtensys Ltd. (Virtensys is being acquired by Micron Technology Inc.) and Xsigo Systems Inc., could be thought of as switch-like appliances into which storage and network interface cards are installed and then presented as shared resources to the network. When used in this manner, the data center is basically installing a private fabric for server communication. To some extent a virtual I/O gateway can be thought of as an extended bus architecture where a PCI Express (PCIe) type of connection is extended from the server to the I/O gateway, except that bus is sharable between hosts.

A card is installed in the server that connects to the I/O gateway. It may be a PCIe extension card, but some vendors use InfiniBand adapters while others may use 10 Gbps Ethernet adapters. The objective is to install something in the server that’s relatively low cost yet high performance because it will be augmenting the PCI bus.

The key difference between virtual I/O gateways and virtual I/O on the network adapter is that the virtual I/O gateway can share a single interface card across multiple servers. This provides some significant advantages in connectivity and resource optimization.

Cards that go into the I/O gateway, depending on the vendor, are either proprietary cards or off-the-shelf PCIe cards. Proprietary cards usually have better multihost sharing capabilities built into them. Gateways that use off-the-shelf PCIe cards may offer greater flexibility, but they’re restricted to the sharing capabilities built into today’s PCIe cards, which are currently limited.

Another benefit of virtual I/O gateways is protection from future upgrade requirements. Because the card or software driver provided by the I/O gateway vendor becomes the common denominator in all servers, moving between different network and storage protocols and technologies becomes very easy.

For example, if the current storage system connects to the servers via Fibre Channel, adopting a new iSCSI storage system would require those hosts to replace their FC interface cards with Ethernet NICs or add NICs alongside them. (The exception would be servers using the virtual I/O adapters described above.) With a virtual I/O gateway configuration, the I/O gateway card installed in the server would remain the same and a shared iSCSI card would be installed in the gateway. A single card in the host could then perform both functions. The only change required on the server would be to its software configuration -- there wouldn’t be a need to physically change interface cards in each host. This not only delivers the flexibility to move among network types and protocols, but also requires less server downtime to make the changes.
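
Under a virtual I/O gateway, that kind of protocol change is a remapping at the gateway rather than a card swap in every host. The sketch below models the idea with hypothetical host and card names; it isn’t any gateway vendor’s actual interface.

```python
# Sketch of a virtual I/O gateway: hosts keep their one gateway connection,
# and a storage protocol change means remapping them to a different shared
# card installed in the gateway. All names are hypothetical.

gateway_cards = {"fc-card-1": "FC", "iscsi-card-1": "iSCSI"}

host_mapping = {"host-a": "fc-card-1", "host-b": "fc-card-1"}

def switch_storage_protocol(host, new_card):
    """Software-only change: the host's gateway adapter stays in place."""
    host_mapping[host] = new_card
    return f"{host} now reaches storage over {gateway_cards[new_card]}"

print(switch_storage_protocol("host-a", "iscsi-card-1"))
print(switch_storage_protocol("host-b", "iscsi-card-1"))
```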

Virtual I/O selection strategies

Determining which virtual I/O strategy makes the most sense for your data center will depend largely on your immediate needs along with your long-term goals. For example, if the primary concern is to improve storage and network I/O performance at the host layer, it makes sense to purchase a card that can provide network interface-level I/O virtualization instead of buying a basic 10 GigE NIC. This would allow you to more efficiently use the 10 Gbps bandwidth and guarantee levels of service to particular mission-critical workloads.

If you’re in the process of refreshing or expanding the network or storage infrastructure, then adding components that understand I/O virtualization deserves serious consideration. Virtual I/O at the switch layer can be looked at as more of an incremental upgrade and a perfect complement to an eventual virtual I/O network interface card implementation.

Virtual I/O gateways, or private I/O fabrics, also deserve serious consideration for companies looking to refresh their existing infrastructures, improve performance and provide greater flexibility. These products can provide “future proofing” against the ever-changing I/O market.

Regardless of the path chosen, virtual I/O should deliver significantly more flexibility and a more dynamic infrastructure that’s able to keep up with the demands of the server infrastructure. All three methods should allow for better return on the I/O investment and provide performance guarantees for mission-critical applications, which should extend the ROI on the server virtualization project.

BIO: George Crump is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.

This was first published in March 2012
