This article can also be found in the Premium Editorial Download "Storage magazine: 2009 Storage Products of the Year."
Storage network virtualization, also known as I/O virtualization (IOV) or I/O consolidation, comprises an emerging family of technologies that extend the concept of virtualization to the major types of input and output (I/O) handled by today's servers.
In recent years, data centers have deployed server and storage virtualization technologies to make better use of underutilized computing assets and to create more flexible infrastructures. By decoupling logical functions from physical hardware, virtualization allows hardware to be pooled and shared, improving utilization. Once in place, virtualization makes new server or storage deployments much quicker and simplifies changes to the existing infrastructure. For example, it's far easier to deploy a new virtual server than a physical one. And when storage systems are virtualized, many of the data migration issues associated with new array deployments can be avoided simply by adding the new capacity to the existing pool of storage resources.
Virtualization has long been applied to a number of different computing technologies. Although storage virtualization has its roots in the mainframe world, it's only now beginning to gain wider adoption. Server virtualization, on the other hand, has become the most widely adopted form of virtualization in today's data centers.
But I/O virtualization isn't exactly a brand new idea either, with virtualization concepts already being used for some network I/O technologies today. For example, a virtual local-area network (VLAN) separates the logical and physical aspects of a network so that one physical network appears as and can be managed as several smaller logical networks. Network interface card (NIC) teaming combines two or more network adapters and makes them appear to function as a single adapter with increased bandwidth. In both cases, logic in the hardware and management software layers allows the decoupling of the logical functions from the physical hardware, making it possible to carve up the hardware and share it as separate units, or to combine it to present it as one larger unit.
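On Linux, for instance, both of these familiar techniques can be expressed in a few commands. The sketch below is purely illustrative: the interface names (eth0, eth1), the VLAN ID (100) and the bonding mode are assumptions, not details from the article, and the commands require root privileges on a system with those interfaces present.

```shell
# Illustrative Linux sketch of the two I/O virtualization techniques
# described above (interface names, VLAN ID and bond mode are assumptions).

# VLAN: make one physical network behave as several logical ones.
# Creates a virtual interface eth0.100 that tags its traffic with VLAN ID 100.
ip link add link eth0 name eth0.100 type vlan id 100
ip link set eth0.100 up

# NIC teaming (bonding): combine two adapters so they appear as a single
# logical adapter with increased bandwidth (802.3ad link aggregation).
ip link add bond0 type bond mode 802.3ad
ip link set eth0 down
ip link set eth0 master bond0
ip link set eth1 down
ip link set eth1 master bond0
ip link set bond0 up
```

In both cases the operating system, not the application, maintains the mapping between the logical interface and the underlying physical hardware, which is exactly the decoupling the article describes.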
PCI Express and I/O virtualization
A server in an enterprise data center typically needs access to a LAN, a storage-area network (SAN) and local direct-attached storage (DAS). Some servers also need access to high-end graphics processing. A server's access to these resources usually comes by way of an internal system bus. In a newer multicore physical server with a high-speed PCI Express (PCIe) bus, all of these I/O "pipes" occasionally hit peak bandwidth, but rarely simultaneously or on a sustained basis (see "PCI Express boosts I/O virtualization," below). With many virtualized servers running on a single physical server, these I/O pipes are busier, but aren't likely to be running at full bandwidth simultaneously or on a sustained basis.
PCI Express boosts I/O virtualization
The PCI-SIG, the special interest group responsible for PCI Express (PCIe) industry-standard I/O technology, announced the completion of the PCI-SIG I/O virtualization (IOV) suite of specifications in June 2008. These specifications enable virtualization solutions to tackle the most I/O-intensive workloads by removing performance bottlenecks in both software and hardware virtualization components. The IOV suite provides a set of technologies that can be used by providers of processors, chipsets and I/O fabrics, and has implications for hypervisors and operating systems. These technologies provide:

* Address Translation Services (ATS), which let PCIe devices cache address translations to reduce the overhead of direct memory access (DMA) in virtualized environments
* Single Root IOV (SR-IOV), which lets multiple virtual machines on a single physical server natively share one PCIe device
* Multi-Root IOV (MR-IOV), which extends that sharing across multiple physical servers attached to a common PCIe fabric
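To make SR-IOV concrete: on Linux, a PCIe adapter that implements the specification exposes its virtual functions (VFs) through sysfs. The sketch below is a hedged illustration, not a recipe from the article; the PCI address (0000:01:00.0) and the VF count are assumptions, and the commands require root on SR-IOV-capable hardware.

```shell
# Illustrative SR-IOV sketch on Linux (requires root and SR-IOV-capable
# hardware; the PCI address and VF count here are assumptions).

# Ask the adapter's physical function (PF) to expose 4 virtual functions.
echo 4 > /sys/bus/pci/devices/0000:01:00.0/sriov_numvfs

# Each VF now enumerates as its own PCIe device and can be assigned
# directly to a virtual machine, bypassing the hypervisor's software switch.
lspci | grep -i "Virtual Function"
```

This direct assignment is what removes the software bottleneck the specifications target: the guest talks to its VF natively rather than through an emulated device.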
This set of specifications promises to spur new virtualization solutions that deliver improved performance and lower power consumption, along with new terminology that will change the way we view I/O to and from a server.
This was first published in February 2010