This article can also be found in the Premium Editorial Download "Storage magazine: Solid-state storage guide."
Welcome to the I/O blender
With virtualization and convergence, each VM now has to compete for I/O resources, forcing the adapter to handle multiple, intermixed I/O streams at once.
Three routes to virtual I/O
All I/O virtualization methods have the same goal: to reduce bottlenecks created by virtualized servers contending for network resources en route to the storage system. While there’s one goal, there are three general methods of virtualizing storage I/O:
- At the network adapter
- At the storage network switch
- Using an I/O gateway device
Delivering the right amount of I/O to the right VM at the right time is increasingly important as server virtualization reaches its third phase: the virtualization of demanding, mission-critical applications. The first phase of virtualization typically involved test and development servers; the second, low-priority, low-demand servers. For those phases, a simple interrupt-driven, equal distribution of I/O across VMs was acceptable.
As mission-critical and performance-demanding applications are virtualized, I/O can’t be simply shared among VMs with each one treated equally. Certain VMs need to be guaranteed a higher class of service, and interrupts adversely impact CPU utilization. Predictability becomes a critical factor contributing to the successful migration of production applications to the virtual environment.
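The idea of guaranteeing certain VMs a higher class of service can be sketched as a weighted split of the host’s available I/O. The function and VM names below are hypothetical, purely for illustration; real hypervisors implement this with shares or limits in the I/O scheduler.

```python
# Minimal sketch (hypothetical names): dividing a host's I/O capacity among
# VMs by service-class weight instead of treating every VM equally.

def allocate_bandwidth(total_iops, vms):
    """Split total_iops proportionally to each VM's weight.

    vms is a list of (name, weight) pairs; a higher weight means a
    higher class of service and therefore a larger, predictable share.
    """
    total_weight = sum(weight for _, weight in vms)
    return {name: total_iops * weight / total_weight for name, weight in vms}

# A mission-critical database VM gets a higher weight, so its share is
# guaranteed even when lower-priority VMs are busy.
shares = allocate_bandwidth(100_000, [("erp-db", 4), ("web", 2), ("test", 1)])
```

Because each VM’s share follows from its weight rather than from who interrupts first, the allocation stays predictable as workloads change, which is the property production applications need.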
One way to address the need for predictable I/O for mission-critical workloads is to install one NIC or HBA per VM and dedicate each interface card to a mission-critical VM. That’ll work, but it’s neither cost-effective nor practical in terms of space, and it would eventually limit the number of VMs that can run on each host.
Another alternative is to massively overprovision available storage and network bandwidth so the host has more than enough I/O to handle all the performance demands of the various VMs it’s supporting. But that approach isn’t very cost-effective or efficient, since most virtual machines don’t need full I/O performance at all times. In addition, there’s an efficiency loss in the interrupt-driven, “round-robin” queuing scheme the hypervisor would use to share the available bandwidth.
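The efficiency loss in round-robin queuing can be seen in a toy model: every VM’s queue gets a turn in each cycle, whether or not it has any I/O pending. This is a hypothetical sketch, not how any particular hypervisor implements its scheduler.

```python
from collections import deque

# Minimal sketch of interrupt-driven round-robin I/O servicing: each VM's
# queue is polled once per round, regardless of how much I/O it has queued.

def round_robin_dispatch(queues, rounds):
    """Serve one request per VM per round; idle VMs still consume a turn."""
    order = []
    for _ in range(rounds):
        for name, q in queues.items():
            if q:
                order.append((name, q.popleft()))
            else:
                order.append((name, None))  # wasted turn on an idle VM
    return order

# A busy VM must wait for the idle VM's turn in every round, even though
# the idle VM has nothing to dispatch.
queues = {"busy": deque([1, 2]), "idle": deque()}
```

The wasted turns on idle VMs are exactly why overprovisioning bandwidth doesn’t fully solve the problem: the sharing scheme itself leaves performance on the table.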
This was first published in March 2012