Tip

Should you bypass the hypervisor for storage I/O?

What you will learn in this tip: The various approaches you can use to allow storage I/O to skip the hypervisor, as well as the pros and cons of skipping the hypervisor versus sticking with your current technology setup.

When architecting a data storage environment for virtual servers, most IT pros rely on the hypervisor as an intermediary. Whether you use Fibre Channel (FC), iSCSI or NFS, storage is first presented to the hypervisor and then allocated to virtual machines (VMs). Other options exist, however, including raw device mapping (RDM) and VMDirectPath, as well as software iSCSI initiators and network-attached storage (NAS) clients. But does it make sense to allow virtual machines to mount storage directly, bypassing the hypervisor?

Ways for storage I/O to skip the hypervisor

It seems logical that passing storage I/O through a hypervisor would be inefficient and introduce latency. After all, both VMware ESX and Microsoft Hyper-V first allocate capacity to their own file systems (VMware Virtual Machine File System [VMFS] and Cluster Shared Volumes [CSV] on NTFS, respectively) before re-allocating it as virtual disk (VMDK or VHD) files. So it's tempting to bypass the hypervisor and its file system and pass storage I/O directly to the guest virtual machine.

There are a number of ways for storage I/O to skip the hypervisor. Virtual server admins have long been tempted by RDM or pass-through LUNs (in VMware and Microsoft parlance, respectively), but adoption in production has been slow. VMware recently introduced VMDirectPath, which leverages Advanced Micro Devices (AMD) Inc.'s IOMMU or Intel Corp.'s VT-d technology. These options rely on special hypervisor features to allow block storage I/O to flow to the guest uninterrupted.
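
To make the RDM option concrete, here is a minimal, hypothetical Python sketch of creating a physical-mode RDM mapping file on an ESX host. The vmkfstools utility and its -z/-r flags are the real mechanism; the device ID, datastore path and the wrapper function itself are illustrative placeholders, not a supported workflow.

```python
# Hypothetical sketch: wrap the ESX vmkfstools utility to create an RDM
# pointer file on a VMFS datastore. Paths below are placeholders.
import subprocess

def create_rdm(device: str, pointer_vmdk: str) -> None:
    """Create a pass-through (physical compatibility) RDM mapping file.

    device       -- raw LUN path, e.g. /vmfs/devices/disks/naa.<id>
    pointer_vmdk -- mapping file stored on a VMFS datastore
    """
    # -z = physical compatibility mode; -r would request virtual
    # compatibility mode, which preserves more hypervisor features
    subprocess.run(["vmkfstools", "-z", device, pointer_vmdk], check=True)

create_rdm(
    "/vmfs/devices/disks/naa.600508b1001c3a1f",        # placeholder LUN ID
    "/vmfs/volumes/datastore1/dbvm/dbvm-rdm.vmdk",     # placeholder path
)
```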

Guest iSCSI is another option, with software (referred to as "initiators") widely available for Linux and Windows clients. This approach requires no special hypervisor features: as long as the guest machine can "see" an iSCSI array on the IP network, it can access storage just like any physical machine.
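
As a sketch of what this looks like in practice, the following Python snippet drives the standard open-iscsi initiator (iscsiadm) inside a Linux guest, assuming the open-iscsi package is installed and the script runs as root. The portal address and target IQN are placeholders.

```python
# A minimal sketch of guest iSCSI setup on a Linux VM via open-iscsi.
import subprocess

PORTAL = "192.168.10.50"                          # placeholder array address
TARGET = "iqn.2011-07.com.example:storage.lun1"   # placeholder target IQN

# Discover the targets the array exposes at this portal
subprocess.run(
    ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
    check=True,
)

# Log in to the target; the LUN then appears to the guest as a local disk
subprocess.run(
    ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
    check=True,
)
```

Note that the hypervisor plays no part here; the guest's own network stack carries the storage traffic, which is exactly why the management and feature trade-offs discussed below come into play.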

Mounting NAS filers using NFS or SMB is a similar approach, since every major operating system ships with a client for at least one of these protocols. Relying on NAS for high-performance applications is still uncommon at this point, but the introduction of NFSv4, parallel NFS, and SMB 2.0 and 2.1 opens the door to using NAS for any and all data storage purposes.
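
Here is an equally minimal sketch of the NAS variant, mounting an NFS export directly inside a Linux guest. It assumes an NFS client is installed and the script runs with root privileges; the filer name and export path are placeholders.

```python
# A minimal sketch of mounting an NFS export inside a guest VM.
import subprocess

FILER_EXPORT = "filer01.example.com:/vol/appdata"  # placeholder export
MOUNT_POINT = "/mnt/appdata"

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
# vers=4 requests NFSv4; drop the option to let the client negotiate
subprocess.run(
    ["mount", "-t", "nfs", "-o", "vers=4", FILER_EXPORT, MOUNT_POINT],
    check=True,
)
```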

Reasons to avoid bypassing the hypervisor

Still, there are many reasons to avoid bypassing the hypervisor. One reason is that the overwhelming majority of virtual servers still rely on hypervisor file systems for storage provisioning. The bypass mechanisms may be more efficient in theory, but none has proven to deliver better performance in practice. Plus, storage architecture is about much more than raw performance, with flexibility and features trumping all other considerations.

It's possible to calculate percentage protocol overhead or measure reduced latency to “prove” that bypassing the hypervisor is more efficient. Similar arguments are made by proponents of Fibre Channel, ATA over Ethernet (AoE) and InfiniBand. But real-world testing doesn't show a tangible benefit from this reduced overhead. Because Moore’s Law makes CPU cores, threads and cycles abundant, processing a bit of extra protocol overhead isn’t a challenge for today’s servers. Tests show that VMFS and CSV processing require less than 10% extra CPU time, and enhancements in the latest hypervisors have further streamlined processing.
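
To see how small this kind of overhead really is, consider a hypothetical back-of-the-envelope calculation of the share of each Ethernet frame consumed by headers rather than payload, assuming standard Ethernet/IPv4/TCP framing with no options:

```python
# Back-of-the-envelope wire overhead of the kind cited when arguing
# for bypassing the hypervisor. Header sizes assume plain Ethernet,
# IPv4 and TCP with no options; real traffic varies.
ETH_HEADER = 14 + 4   # Ethernet header + frame check sequence
IP_HEADER = 20        # IPv4, no options
TCP_HEADER = 20       # no options

def wire_overhead_pct(mtu: int) -> float:
    """Percentage of each frame consumed by headers rather than payload."""
    payload = mtu - IP_HEADER - TCP_HEADER
    frame = mtu + ETH_HEADER
    return 100.0 * (frame - payload) / frame

print(f"Standard 1500-byte MTU: {wire_overhead_pct(1500):.1f}% overhead")
print(f"Jumbo 9000-byte MTU:    {wire_overhead_pct(9000):.1f}% overhead")
```

At roughly 4% for a standard MTU and under 1% with jumbo frames, the wire-level cost is of the same order as the measured VMFS and CSV processing cost, which is small enough for today's abundant CPU cycles to absorb.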

Another consideration is the impact of unconventional architecture choices on operations and management. A virtualization or storage administrator might be able to make guest iSCSI work in one environment, but will other environments (and other administrators) be able to support it? It's also important to consider the future: staff changes are a reality in IT, and a thoroughly conventional system is much easier to hand off to others.

But the main reason for the limited uptake of RDM, VMDirectPath, and guest iSCSI or NAS is the impact these approaches have on hypervisor features. RDM complicates the configuration of vMotion and Dynamic Resource Scheduling (DRS) and takes Storage vMotion off the table entirely. VMDirectPath eliminates vMotion and VMware Fault Tolerance (FT), along with snapshots, suspend/resume and device hot-add.

Because they bypass the hypervisor entirely, guest iSCSI and NAS exclude nearly all hypervisor features and complicate management. Managing iSCSI and NAS clients is common in the physical server world, but virtualization operations and reporting tools rarely include management of this type of storage. VMware vCenter Site Recovery Manager is widely used for coordination of disaster recovery operations, and can't easily be configured for guest iSCSI or NAS.

In addition, guest iSCSI and NAS access raise questions about the efficiency and integration of virtual networks, not just protocol overhead. The generic virtual network switches found in today's hypervisors lack most of the reliability and quality-of-service features found in their physical counterparts, which makes architecting a high-availability iSCSI SAN problematic. Performance of these virtual networks is also a consideration, since hypervisor development has focused more on the storage I/O path than on virtual networking.

Is direct storage access to the guest a compelling alternative?

At this point, bypassing the hypervisor for storage I/O isn't a compelling option. Given the lack of meaningful performance benefits and the serious impact on functionality and supportability, virtual server architects would be wise to bypass these options instead.

It's likely that technologies like IOMMU and VT-d will become common and supportable in the future. These technologies will probably become integrated in future hypervisor releases, and will no longer interfere with advanced features. But direct storage access to the guest is less likely to become a compelling alternative.

BIO: Stephen Foskett is an independent consultant and author specializing in enterprise storage and cloud computing. He is responsible for Gestalt IT, a community of independent IT thought leaders, and organizes their Tech Field Day events. He can be found online at GestaltIT.com, FoskettS.net, and on Twitter at @SFoskett.

This was first published in July 2011
