Q

Which devices do I need for I/O virtualization?

Jon Toigo explains how I/O virtualization can simplify the server configurations used to host VMs and how it serves as an enabling technology for offloading I/O processing.

Jon Toigo

Technically speaking, the function of I/O virtualization is to abstract upper-layer protocols away from physical hardware such as network interface cards (NICs) and host bus adapters (HBAs).

This enables virtual NICs (vNICs) and virtual HBAs (vHBAs) to be substituted for physical equivalents, which simplifies server configurations and helps to reduce their power draw.

Instead of fitting each server with multiple I/O devices, I/O virtualization requires one I/O adapter (or two, for redundancy and high availability) to provide shared transport for all network and storage connections. These physical I/O adapters (often leveraging technologies such as InfiniBand or 100 Gigabit Ethernet for faster operating speeds and greater bandwidth) offload LAN and SAN I/O to an external device, sometimes called an I/O Director. The I/O Director provides protocol layer functions and directs LAN and SAN traffic to the appropriate conventional switch-based network or fabric targets such as storage arrays or other servers.
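To make the division of labor concrete, here is a minimal, purely illustrative Python sketch of the topology described above. The class names (PhysicalAdapter, IODirector) and identifiers such as "ib0" are hypothetical and are not drawn from any vendor's API; they simply model many virtual adapters sharing one physical link into an I/O Director.

# Illustrative model only -- hypothetical names, not a vendor API.
from dataclasses import dataclass, field
from typing import List

@dataclass
class VirtualAdapter:
    name: str      # e.g. "vnic0" or "vhba0"
    kind: str      # "vNIC" for LAN traffic, "vHBA" for SAN traffic
    target: str    # downstream network or fabric target

@dataclass
class PhysicalAdapter:
    # One shared transport link (e.g., InfiniBand or 100 GbE) to the I/O Director.
    name: str
    virtual_adapters: List[VirtualAdapter] = field(default_factory=list)

@dataclass
class IODirector:
    # External device that terminates the shared link and forwards traffic
    # to the appropriate LAN switch or SAN fabric target.
    uplinks: List[PhysicalAdapter] = field(default_factory=list)

    def route(self, adapter: VirtualAdapter) -> str:
        # The protocol-layer decision happens here, off the server.
        fabric = "LAN switch" if adapter.kind == "vNIC" else "SAN fabric"
        return f"{adapter.name} -> {fabric} -> {adapter.target}"

# A server needs only one physical adapter (or two for redundancy), yet can
# present many virtual NICs and HBAs to the VMs it hosts.
link = PhysicalAdapter("ib0", [
    VirtualAdapter("vnic0", "vNIC", "app-subnet"),
    VirtualAdapter("vhba0", "vHBA", "storage-array-A"),
])
director = IODirector([link])
for va in link.virtual_adapters:
    print(director.route(va))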

Benefits of this approach include streamlined I/O traffic and less of the disruption that server virtualization and consolidation inflict on existing LANs and SANs. Because most traffic emanating from a server is directed at other servers, equipping all servers with I/O virtualization technology can remove a significant amount of LAN traffic: server-to-server messaging is handled by the I/O Director and its cabling without ever touching the LAN.
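A rough sketch of that routing decision follows; the host names are made up for the example and the logic is a simplification, not any real Director's implementation.

# Hypothetical sketch of the Director's short-circuit for server-to-server traffic.
def path_for(src: str, dst: str, attached_to_director: set) -> str:
    """If both endpoints hang off the same I/O Director, the exchange is
    switched inside the Director and never reaches the LAN."""
    if src in attached_to_director and dst in attached_to_director:
        return "server -> I/O Director -> server (LAN bypassed)"
    return "server -> I/O Director -> LAN switch -> destination"

attached = {"host-a", "host-b"}
print(path_for("host-a", "host-b", attached))        # stays inside the Director
print(path_for("host-a", "backup-host", attached))   # crosses the LAN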

Moreover, vNICs and vHBAs enable smooth "failover" or "template cut-and-paste" techniques provided by server virtualization products to migrate virtual machines (VMs) from one server host to another. Instead of provisioning every server with the expansion cards that might be required by the most demanding application that potentially lands there, vNICs and vHBAs can travel with VM workloads to any physical host.
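As a rough illustration of the idea, a VM's virtual adapter definitions can be treated as part of a portable profile that moves with the workload. The profile fields below are invented for the example and are not taken from any hypervisor's schema.

# Hypothetical sketch: a VM's I/O identity travels with it during migration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class VMProfile:
    name: str
    vnics: List[str] = field(default_factory=list)   # e.g., virtual MAC/VLAN bindings
    vhbas: List[str] = field(default_factory=list)   # e.g., virtual WWN bindings

def migrate(vm: VMProfile, hosts: Dict[str, List[VMProfile]], src: str, dst: str) -> None:
    # The vNIC/vHBA definitions move with the VM, so the destination host
    # needs no pre-provisioned expansion cards for this particular workload.
    hosts[src].remove(vm)
    hosts[dst].append(vm)

hosts = {"host-a": [VMProfile("db01", vnics=["vnic0"], vhbas=["vhba0"])], "host-b": []}
migrate(hosts["host-a"][0], hosts, "host-a", "host-b")
print([vm.name for vm in hosts["host-b"]])   # ['db01'], with its virtual adapters intact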

Buffering I/O by directing it first to an I/O Director also helps mitigate the impact of server consolidation on existing LAN and SAN I/O pathways. That can be a blessing in environments where server virtualization is wreaking havoc with pre-existing networks and fabrics, because it insulates LANs and SANs from consolidated server workloads and the resulting traffic patterns those networks and fabrics were never designed to carry.

Ultimately, I/O virtualization can have a dramatic impact on the cost of servers (by eliminating a significant number of devices) and networks (by reducing the number of ports required on switches). That is why network switch vendors such as Cisco Systems, Juniper and Brocade were slow to warm up to I/O virtualization newcomers such as Xsigo Systems when they appeared on the market several years ago. However, as Xsigo's acquisition by Oracle, announced in July 2012, illustrates, the pioneers are being absorbed by vendors of solution stacks so that their technology can be integrated into future branded hardware/software kits.

About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International and chairman of the Data Management Institute.

This was first published in May 2013