
Virtual I/O for storage networks


Welcome to the I/O blender

With virtualization and convergence, each VM now has to compete for I/O resources, forcing the adapter to handle multiple types of network and storage I/O traffic. Advances like 10 Gbps Ethernet, 10 Gbps FCoE, and 8 Gbps or 16 Gbps Fibre Channel (FC) provide plenty of bandwidth to meet the demands placed on the host by all those virtual machines. The challenge is ensuring that the right VM gets the right amount of available bandwidth at the right time.

Three routes to virtual I/O

All I/O virtualization methods have the same goal: to reduce bottlenecks created by virtualized servers contending for network resources en route to the storage system. While there’s one goal, there are three general methods of virtualizing storage I/O:

  1. At the network adapter
  2. At the storage network switch
  3. Using an I/O gateway device

This right VM/right amount/right time process is increasingly important as server virtualization reaches its third phase: the virtualization of demanding, mission-critical applications. The first phase of virtualization typically involved test and development servers. The second was low-priority, low-demand servers. For those phases, a simple interrupt-driven, equal distribution of I/O across VMs was acceptable.

As mission-critical and performance-demanding applications are virtualized, I/O can’t be simply shared among VMs with each one treated equally. Certain VMs need to be guaranteed a higher class of service, and interrupts adversely impact CPU utilization. Predictability becomes a critical factor contributing to the successful migration of production applications to the virtual environment.
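
To make the idea concrete, here's a minimal, purely illustrative sketch (in Python, with made-up VM names, share weights and a hypothetical allocate_bandwidth helper, not any vendor's API) of the kind of share-based allocation a hypervisor or intelligent adapter can enforce: each VM gets a guaranteed minimum plus a weighted portion of whatever bandwidth is left, so a mission-critical VM is never squeezed down to a simple equal share.

    # Hypothetical sketch, not any vendor's API: carve up a link's bandwidth using
    # per-VM minimum guarantees plus proportional shares, rather than handing every
    # VM an identical slice.

    def allocate_bandwidth(link_gbps, vms):
        """vms maps a VM name to {"min": guaranteed Gbps, "shares": weight, "demand": Gbps wanted}."""
        # Step 1: satisfy each VM's guaranteed minimum (never more than it demands).
        alloc = {name: min(v["min"], v["demand"]) for name, v in vms.items()}
        remaining = link_gbps - sum(alloc.values())

        # Step 2: split what's left in proportion to shares, capped at each VM's demand.
        active = {n: v for n, v in vms.items() if alloc[n] < v["demand"]}
        while remaining > 1e-9 and active:
            total_shares = sum(v["shares"] for v in active.values())
            still_hungry = {}
            for name, v in active.items():
                extra = remaining * v["shares"] / total_shares
                alloc[name] += min(extra, v["demand"] - alloc[name])
                if alloc[name] < v["demand"]:
                    still_hungry[name] = v
            remaining = link_gbps - sum(alloc.values())
            active = still_hungry
        return alloc

    # Example: a 10 Gbps link shared by a mission-critical database and two lighter VMs.
    vms = {
        "oltp-db":  {"min": 4.0, "shares": 60, "demand": 7.0},  # mission-critical
        "web-tier": {"min": 1.0, "shares": 30, "demand": 5.0},
        "test-dev": {"min": 0.0, "shares": 10, "demand": 5.0},  # best effort
    }
    print(allocate_bandwidth(10.0, vms))  # {'oltp-db': 7.0, 'web-tier': 2.5, 'test-dev': 0.5}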

One way to address the need for predictable I/O for mission-critical workloads is to install one NIC or HBA per VM and dedicate each interface card to a specific mission-critical VM. That works, but it's neither cost-effective nor practical in terms of slot space, and it would eventually limit the number of VMs that can run on each host.

Another alternative is to massively overprovision available storage and network bandwidth so the host has more than enough I/O to handle all the performance demands of the various VMs it's supporting. But that approach isn't very cost-effective or efficient since most virtual machines don't need full I/O performance at all times. In addition, there's an efficiency loss in the interrupt-driven, “round-robin” queuing scheme the hypervisor would use to share the available bandwidth.
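
That inefficiency is easy to see with another small, purely illustrative sketch (again, hypothetical names and numbers): an equal split strands bandwidth on VMs that are idle, while a work-conserving split hands the unused capacity to the VMs that actually have I/O to push.

    # Illustrative only: equal slicing vs. a work-conserving split of a 10 Gbps link.

    def equal_split(link_gbps, demands):
        """Round-robin style: every VM gets the same slice whether it can use it or not."""
        slice_gbps = link_gbps / len(demands)
        return {name: min(slice_gbps, want) for name, want in demands.items()}

    def demand_aware_split(link_gbps, demands):
        """Work-conserving: capacity a VM can't use is redistributed to VMs that still want more."""
        alloc = dict.fromkeys(demands, 0.0)
        unsatisfied = set(demands)
        remaining = link_gbps
        while remaining > 1e-9 and unsatisfied:
            slice_gbps = remaining / len(unsatisfied)
            for name in list(unsatisfied):
                alloc[name] += min(slice_gbps, demands[name] - alloc[name])
                if alloc[name] >= demands[name]:
                    unsatisfied.discard(name)
            remaining = link_gbps - sum(alloc.values())
        return alloc

    demands = {"vm-a": 6.0, "vm-b": 4.0, "vm-c": 0.5, "vm-d": 0.5}  # Gbps each VM wants right now
    print(equal_split(10.0, demands))         # vm-a and vm-b throttled to 2.5 Gbps while 4 Gbps sits idle
    print(demand_aware_split(10.0, demands))  # vm-a gets 5.0, vm-b gets 4.0, the quiet VMs keep their 0.5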

This was first published in March 2012
