I/O virtualization products, standards reduce network infrastructure headaches

I/O virtualization products, along with standards such as PCI-SIG's SR-IOV and MR-IOV, reduce network and storage physical infrastructure needs while easing administrative tasks.

Virtualization and blade server technologies have enabled a generation of consolidated computing devices capable of cramming extraordinary computing power into smaller form factors. But the increased processing power per square inch has brought about a new I/O problem: The pipes can't move data fast enough to keep up with today's processors. To address that problem, new I/O virtualization products and standards are emerging to extend PCI Express (PCIe) pathways to separate I/O devices. This allows multiple physical servers and virtual machines (VMs) to share I/O resources.


I/O virtualization vendors are following one of two general approaches. The first is software virtualization of the entire I/O process, which is the path chosen by Xsigo Systems Inc. The second approach extends a server's existing internal PCIe pathway to a physically separate device, also called a "card cage," that houses multiple I/O cards, including Gigabit Ethernet (GbE) and 10 GbE network interface cards (NICs), host bus adapters (HBAs) and SAS adapters. This approach has been adopted in various ways by several vendors, including Aprius Inc., NextIO Inc. and VirtenSys Inc.

Industry standards bodies are getting involved as well. The PCI Special Interest Group (PCI-SIG), which handles PCI specifications, has developed and formalized two standards specific to I/O virtualization: Single Root I/O Virtualization (SR-IOV) and Multi-Root I/O Virtualization (MR-IOV).

According to the PCI-SIG, SR-IOV allows multiple guest operating systems to simultaneously access an I/O device without requiring a hypervisor on the main data path. MR-IOV builds upon the SR-IOV standard by allowing access to PCI- or SR-IOV-compliant I/O devices over a shared PCIe fabric. The goal of the standards is to enable multiple separate servers to access and share multiple I/O cards inside one or more card cages. Both SR-IOV and MR-IOV meet that goal, but neither standard has seen any significant vendor adoption.
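On modern Linux kernels with an SR-IOV-capable adapter, the standard can be seen in action through the sysfs interface: the NIC's physical function exposes a configurable number of virtual functions, each of which appears as its own PCIe device that a guest OS can use directly. A minimal sketch follows; the interface name (`eth0`) and the VF count are placeholder assumptions, and the commands require SR-IOV-capable hardware with the appropriate driver loaded.

```shell
# Query how many virtual functions (VFs) the adapter supports
# (eth0 is a placeholder; substitute the actual SR-IOV-capable NIC)
cat /sys/class/net/eth0/device/sriov_totalvfs

# Enable four VFs; each appears as a separate PCIe function that a
# guest OS can access directly, without the hypervisor on the data path
echo 4 > /sys/class/net/eth0/device/sriov_numvfs

# The new virtual functions show up as distinct PCI devices
lspci | grep -i "virtual function"
```

Each enabled VF can then be handed to a virtual machine via PCI passthrough, giving the guest near-native I/O performance while the physical function remains under the host's control.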

While vendors seem to be eschewing industry standards, that hasn't stopped their progress in the market: they're pushing proprietary solutions and starting to find traction with early adopter customers.
