
I/O virtualization: How to optimize your data center operations

We examine I/O virtualization products; how I/O virtualization systems virtualize NICs and HBAs; and using Ethernet, InfiniBand or PCIe to connect NICs to servers.

What you'll learn: I/O virtualization can help storage managers optimize their data center operations. This tip examines the options and best practices for initiating I/O virtualization to reduce bottlenecks and the physical infrastructure of your data center.

I/O virtualization, like server virtualization, adds an abstraction layer to simplify and optimize data center operations. In the case of I/O virtualization, there's an abstraction between the servers accessing interface cards and the actual cards themselves. The goal is to be able to share those cards across multiple servers.

The technology counts on the assumption that most data center servers can't utilize network interface cards (NICs) to their maximum capabilities at all times. I/O virtualization attempts to better utilize the available bandwidth by allowing more servers access to each individual card. It's important to note that I/O virtualization doesn't provide more bandwidth to the servers, it just ensures that more of the available bandwidth is used. Features like quality of service (QoS) and N_Port ID Virtualization (NPIV) can help ensure that critical applications are getting a guaranteed level of performance.

Three ways to share cards in an I/O virtualization system

The first step in initiating I/O virtualization is to understand what methods these systems use to virtualize NICs and host bus adapters (HBAs). There are currently three ways to share each card that's placed inside an I/O virtualization system.

The first approach is for individual servers to take turns using an interface card. This approach provides value with expensive application-specific cards that are only used at certain times of day.

A second approach is to use multi-port cards that allow each port to be individually addressed. While this doesn't increase bandwidth utilization, because each server has its own port and bandwidth, it does reduce costs: a quad-port card is typically less expensive per port than four individual cards. The challenge, however, is that most servers can't take advantage of that many I/O ports. I/O virtualization solves that problem by sharing the card on a per-port basis, leveraging the cost savings of the multi-port card while improving the utilization of each port.
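The per-port economics are easy to sketch with back-of-the-envelope arithmetic. The prices below are hypothetical round numbers chosen for illustration, not quotes from any vendor:

```python
# Illustrative per-port economics of one quad-port card versus four
# single-port cards. Prices are hypothetical round numbers, not vendor quotes.
single_port_price = 1_000      # assumed list price of a single-port card
quad_port_price = 2_800        # assumed list price of a quad-port card

per_port_single = single_port_price        # $1,000 per port
per_port_quad = quad_port_price / 4        # $700 per port

print(f"Per-port cost: single=${per_port_single}, quad=${per_port_quad:.0f}")
print(f"Savings per port when sharing the quad-port card: "
      f"${per_port_single - per_port_quad:.0f}")
```

The saving only materializes, of course, if something (here, the I/O virtualization system) can actually put all four ports to work.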

The final and most ideal option is to select NICs that support Single Root I/O Virtualization (SR-IOV). These cards have the intelligence to let multiple hosts share a single card, making them ideal for I/O virtualization systems. A few 10 Gigabit Ethernet (10 GbE) cards that support the standard are available today, and most of the next generation of Fibre Channel over Ethernet (FCoE) cards will support it as well. Depending on the system, a card with SR-IOV should be able to share its bandwidth selectively across multiple servers.

SR-IOV cards in an I/O virtualization system work under the assumption that different servers will have peak needs at different times. If two servers hit a peak and consume 6 Gbps of a 10 Gbps link, the remaining servers typically need only a fraction of the 4 Gbps left over, so the card can comfortably service everyone's I/O. Some I/O virtualization systems can also spill I/O over to a spare card in the event that bandwidth demands saturate the throughput of one card.
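That sharing model can be pictured with a toy allocator that serves per-server demands from one 10 Gbps card and spills any excess to a spare. The capacity numbers and the spillover policy are illustrative assumptions, not a description of any particular product's behavior:

```python
# Minimal sketch of SR-IOV-style bandwidth sharing with spillover to a
# spare card. All numbers and the policy itself are illustrative.

CARD_CAPACITY_GBPS = 10.0

def allocate(demands_gbps, spare_available=True):
    """Serve per-server demands from one card; spill the excess to a spare."""
    total = sum(demands_gbps)
    if total <= CARD_CAPACITY_GBPS:
        return {"primary": total, "spare": 0.0}
    if not spare_available:
        # With no spare, demand above the card's capacity is simply throttled.
        return {"primary": CARD_CAPACITY_GBPS, "spare": 0.0}
    overflow = total - CARD_CAPACITY_GBPS
    return {"primary": CARD_CAPACITY_GBPS,
            "spare": min(overflow, CARD_CAPACITY_GBPS)}

# Two busy servers at 3 Gbps plus six quiet ones at 0.5 Gbps fit on one card.
print(allocate([3.0, 3.0] + [0.5] * 6))   # {'primary': 9.0, 'spare': 0.0}
# Double the busy servers and the excess spills to the spare card.
print(allocate([3.0] * 4 + [0.5] * 6))    # {'primary': 10.0, 'spare': 5.0}
```

The interesting design question a real system faces is which flows get moved to the spare and when; this sketch only shows the aggregate accounting.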

I/O virtualization: The price of HBAs and NICs
Does $40,000 seem steep for 10 HBAs and 10 NICs? The individual price depends on the card. A quality server-class 10 GbE card from Intel Corp. lists for $1,600. Most decent-performing cards list in the $995 range, and there are even cheaper cards around $500.

8 Gb Fibre Channel (FC) cards list for approximately $1,750 to $2,000, and each server in the system would need at least two of each card. By using the I/O virtualization system as a repository for redundant cards, as described below, this can be reduced to one of each card per server, plus a $500 PCIe or Ethernet card for the connection.
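The arithmetic behind those figures is simple enough to sketch. The card prices come from the list prices cited above; the $500 connection card and the number of shared spares are assumptions for illustration:

```python
# Back-of-the-envelope math for a 10-server rack with a redundant NIC and
# HBA in every server, using the list prices cited in the text.
servers = 10
nic_price = 1_600    # server-class 10 GbE NIC
hba_price = 2_000    # 8 Gb FC HBA (top of the $1,750-$2,000 range)

redundant_cost = servers * (nic_price + hba_price)
print(f"Redundant NIC + HBA in every server: ${redundant_cost:,}")  # $36,000

# Alternative: keep a couple of shared spares in the I/O virtualization
# system and give each server a single assumed $500 PCIe/Ethernet link.
spares_cost = 2 * (nic_price + hba_price)   # two spare NIC + HBA pairs (an assumption)
links_cost = servers * 500
print(f"Shared-spare alternative: ${spares_cost + links_cost:,}")   # $12,200
```

At these assumed prices the fully redundant build lands in the same ballpark as the $40,000 figure, while the shared-spare approach costs roughly a third of that.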

Three ways to connect NICs to servers: Ethernet, InfiniBand and PCIe

After selecting the cards that will be shared, the next step is to select the connection method for the servers that will have access to the I/O virtualization system. Presently, there are three competing methods to accomplish this: Ethernet, InfiniBand and PCIe.

At first glance, PCIe seems to be the most natural fit given that PCIe cards are being shared. However, PCIe wasn't truly designed as a networking standard outside of the confines of a physical server.

InfiniBand and Ethernet were both designed to be networked, but weren't designed to transport PCIe traffic. InfiniBand has the performance capabilities to support PCIe bandwidth requirements but its adoption rate, other than in back-end interconnects, has been relatively low.

Ethernet, on the other hand, is ubiquitous and very networkable. For it to carry PCIe traffic today, special logic must be added to the Ethernet card; in the future, this capability could be built into standard Ethernet cards. Selecting a connection method requires a careful examination of each option: the organization needs to decide which one provides the required performance and networking scalability, and which best matches what's already in use. While costs vary greatly, the PCIe implementation should be the least-expensive option, with Ethernet a close second.

How to determine which servers to include in your I/O virtualization system

The final step is determining which servers are the best candidates for inclusion in the I/O virtualization system. With the first two sharing methods (one card per server or one port per server), the bandwidth to each server is locked in. With the shared-card method using SR-IOV, the biggest concern is that a single server could starve out the others through its I/O consumption. Some systems can prevent that by capping each server's share, but the cap could mean an individual server isn't given access to the bandwidth it needs during its own peak.
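One way to picture the starvation trade-off is a simple cap-and-redistribute policy: each server is held to a cap, and any unused headroom is handed back to servers whose demand wasn't met. The function below is an illustrative sketch of that idea, not any vendor's actual QoS implementation:

```python
# Toy QoS policy: cap each server's grant, then hand unused headroom to
# servers whose demand was not fully met. Units are Gbps; all numbers
# are illustrative. Assumes the per-server caps together fit in capacity.

def serve_with_caps(demands, capacity, cap):
    """Grant each server min(demand, cap), then share leftover capacity
    among servers whose demand was not fully met, in order."""
    grants = [min(d, cap) for d in demands]
    leftover = capacity - sum(grants)
    for i, d in enumerate(demands):
        if leftover <= 0:
            break
        if grants[i] < d:
            extra = min(d - grants[i], leftover)
            grants[i] += extra
            leftover -= extra
    return grants

# A greedy server wants 9 Gbps of a 10 Gbps card. A 4 Gbps cap protects the
# two 2 Gbps servers; the greedy server then absorbs the unused headroom.
print(serve_with_caps([9, 2, 2], capacity=10, cap=4))  # [6, 2, 2]
```

Note the flip side the text warns about: with two greedy servers, each is pinned at its 4 Gbps cap even though both want more, which is exactly the "server doesn't get the bandwidth it needs" scenario.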

The best practice is to identify the few servers in the environment that may have these high demands and maintain a separate connection to them. You could also leverage a standby card and let I/O needs spill over as mentioned above. Most I/O virtualization systems will let you dynamically assign a specific card to a specific server in the event you know there's a performance spike on the horizon.

One of the safest ways to start with I/O virtualization is to use it as a repository for redundant cards, since most servers have redundant network and storage connections. In a 10-server rack that can mean up to 20 extra cards, which could add up to $40,000 to provide I/O redundancy in some cases.

A practical starting point is to move one or two of these redundant cards into the system and stop purchasing secondary cards for new servers. This can be done by simply mapping each server's secondary connection to the cards in the I/O virtualization system.

Typically, when a network or storage connection fails, cards don't fail in every server at once. Instead, a single card, cable connection or SFP connector on the switch fails. The affected server can then map directly to a card in the system and continue operating until its primary card is replaced.

BIO: George Crump is the lead analyst at Storage Switzerland, an IT analyst firm focused on the storage and virtualization segments.
