What you will learn in this tip: This hypervisor comparison provides an overview of the Red Hat, VMware and Microsoft virtualization platforms and the approach each of these hypervisor vendors takes to support physical storage.
Each of these hypervisors offers an attractive set of features, but a feature-by-feature comparison may not always be the best starting point when selecting one. Prudent administrators should first consider the types of storage supported by the various hypervisors, as well as any available storage features. This article is meant to serve as a brief overview of the various approaches that some of the top hypervisor vendors take when it comes to storage.
Red Hat KVM: Take note of migration factor
With any hypervisor, there are two main things that must be considered with regard to storage. First, administrators must consider the types of storage that are directly supported by the hypervisor to accommodate virtual machines (VMs). It's worth noting that just because a storage type is supported for a particular hypervisor, it doesn't necessarily mean the storage type is a good fit for production use. For example, KVM supports the use of DAS for VM storage, but doesn't allow VM migrations to occur when DAS is used.
The second consideration is the types of storage that can be accessed (in a supported manner) from within a VM. This is important because VMs often connect to external LUNs via iSCSI or a virtual Fibre Channel (FC). As such, you need to ensure that the hypervisor supports external storage connectivity for VMs.
Not surprisingly for an open source platform, Red Hat KVM supports a wide range of options for VM storage. However, live VM migration is considered a must-have feature by many storage pros, and here Red Hat's storage options are somewhat limited.
To perform a VM migration, the VM must reside on shared storage, and the KVM host must communicate with that shared storage using one of the following protocols:
- FC over Ethernet
- Network File System
- Global File System 2
- SCSI RDMA (Remote Direct Memory Access) Protocol, which is the block export protocol used with InfiniBand and 10 Gigabit Ethernet iWARP adapters
Like VMware and Hyper-V, Red Hat's VM migration feature is based around the concept of copying memory pages over a network link from one host server to another. However, Red Hat's VM migration feature doesn't work if memory pages are updated faster than they can be copied; in that situation, the migration process will fail. In contrast, VMware offers a feature that can slow the VM until the memory copy process has finished, thereby allowing for a successful migration.
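The convergence behavior described above can be sketched in a few lines. This is an illustrative simulation, not vendor code: the function name, page counts and rates are all hypothetical, and the `throttle` flag loosely models a VMware-style slowdown of the guest.

```python
# Illustrative sketch of pre-copy live migration convergence. The hypervisor
# repeatedly copies dirty memory pages over the network; migration converges
# only if the copy rate outpaces the rate at which the guest dirties pages.
# The `throttle` flag models slowing the guest to force convergence.

def precopy_migration(total_pages, copy_rate, dirty_rate,
                      max_rounds=30, throttle=False):
    """Return True if the migration converges within max_rounds.

    copy_rate and dirty_rate are in pages per round; values are hypothetical.
    """
    remaining = total_pages
    for _ in range(max_rounds):
        remaining -= min(remaining, copy_rate)
        if remaining == 0:
            return True  # final stop-and-copy round succeeds
        # Pages dirtied while copying must be re-sent in the next round.
        effective_dirty = dirty_rate // 4 if throttle else dirty_rate
        remaining = min(total_pages, remaining + effective_dirty)
    return False

# A guest that dirties memory faster than the link can copy never converges...
assert precopy_migration(1000, copy_rate=100, dirty_rate=150) is False
# ...unless the hypervisor slows the guest down, as vSphere can.
assert precopy_migration(1000, copy_rate=100, dirty_rate=150,
                         throttle=True) is True
```

The asserts at the bottom show the two cases the article describes: a migration that fails because the dirty rate exceeds the copy rate, and the same workload succeeding once the guest is throttled.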
Like vSphere and Hyper-V, KVM allows VMs to connect to storage through a variety of protocols. The following devices and protocols are supported for presenting storage to KVM guests:
- Local hard disk partitions
- Logical volumes
- Host-level FC or iSCSI connectivity
- File containers residing in a file system on the host
- An NFS file system mounted directly by the guest operating system (OS)
- iSCSI storage initiated by the guest OS
- Cluster File System
Red Hat also offers its own software-defined storage product called Red Hat Storage Server, which is its preferred storage option for KVM environments. It's based on the idea that commodity storage hardware can be virtualized into storage pools and then allocated on an as-needed basis. The general approach is very similar to the one Microsoft uses within the Windows Storage Spaces feature found in Windows Server 2012 and Windows Server 2012 R2.
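The pooling concept behind Red Hat Storage Server (and Windows Storage Spaces) can be illustrated with a minimal sketch. The class and its fields are made up for illustration; the point is simply that commodity disks are aggregated into one capacity pool from which volumes are carved on demand.

```python
# Illustrative sketch of software-defined storage pooling: commodity disks
# are aggregated into a single capacity pool, and volumes are allocated from
# that pool as needed. All names here are hypothetical.

class StoragePool:
    def __init__(self, disk_sizes_gb):
        self.capacity = sum(disk_sizes_gb)  # aggregate commodity disks
        self.allocated = 0

    def free(self):
        return self.capacity - self.allocated

    def allocate(self, size_gb):
        """Carve a volume out of the pool, if capacity allows."""
        if size_gb > self.free():
            raise ValueError("insufficient pool capacity")
        self.allocated += size_gb
        return {"size_gb": size_gb}

pool = StoragePool([2000, 2000, 4000])  # three commodity disks -> 8 TB pool
vol = pool.allocate(3000)               # a volume can span physical disks
assert pool.free() == 5000
```

Because the pool abstracts the physical disks, an allocated volume isn't tied to any single device — which is the property that lets both Red Hat Storage Server and Storage Spaces grow capacity by simply adding commodity hardware.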
VMware's profile-driven storage offers advantage
Like KVM and Hyper-V, VMware supports the migration of running VMs from one host server to another, a feature it calls vMotion. These capabilities aren't included with the core hypervisor; VMware hosts must be specifically licensed for vMotion.
Prior to ESXi 5.1, vMotion required the use of shared storage. Even today, shared storage is still recommended. When shared storage is used, the VMware hosts must be attached to it so that each host server has access to the storage. VMware recommends that shared storage reside on an FC SAN, but iSCSI and network-attached storage (NAS)-based shared storage are also supported.
ESXi 5.1 and later versions can migrate running VMs without the need for shared storage. To do so, VM disks must be in persistent mode or be raw device mappings (RDMs). If you choose to use RDM, the destination host must have access to the RDM LUN unless you convert the raw device mapping to a virtual machine disk (VMDK) file.
VMware and Microsoft both support migrating VMs from one storage location to another. VMware refers to this feature as Storage vMotion, while Microsoft calls it Storage Live Migration. Prior to vSphere 5.0, Storage vMotion could only be used on VMs that didn't have snapshots; version 5.0 and above support migrating VMs that contain snapshots. In either case, VM disks must be in persistent mode or be RDMs.
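The eligibility rules above lend themselves to a simple pre-flight check. This is a hypothetical sketch, not a VMware API; the function and parameter names are made up to encode the version and disk-mode rules just described.

```python
# Hypothetical pre-flight check mirroring the Storage vMotion rules:
# disks must be in persistent mode or be RDMs, and prior to vSphere 5.0,
# VMs with snapshots could not be migrated. Names are illustrative only.

def can_storage_vmotion(vsphere_version, has_snapshots, disk_mode):
    """Return True if the VM is eligible for a storage migration."""
    if disk_mode not in ("persistent", "rdm"):
        return False
    if has_snapshots and vsphere_version < 5.0:
        return False
    return True

# A snapshotted VM is blocked on 4.1 but allowed on 5.1.
assert can_storage_vmotion(4.1, has_snapshots=True,
                           disk_mode="persistent") is False
assert can_storage_vmotion(5.1, has_snapshots=True,
                           disk_mode="persistent") is True
```

Encoding constraints like these in a script is a common way to validate a fleet of VMs before a planned storage migration, rather than discovering an ineligible VM mid-maintenance window.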
One of the unique storage features of vSphere 5.0 and above is Profile-Driven Storage. In VMware environments, VMs are deployed to datastores, which act as abstractions of physical storage. Administrators typically choose a datastore based on its underlying hardware. For instance, a VM running a mission-critical database application would likely be placed on a datastore linked to high-performance storage.
The problem with this approach is that VMs are anything but static: a VM's storage requirements can change over time. The Profile-Driven Storage feature allows an administrator to see whether a VM's underlying storage is still compliant with the storage requirements that were originally specified.
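The compliance idea reduces to a set comparison: a VM's profile lists required storage capabilities, and a datastore is compliant only if it offers all of them. The capability names below are invented for illustration, not actual vSphere capability strings.

```python
# Sketch of the Profile-Driven Storage compliance concept: a datastore is
# compliant with a VM's storage profile only if it provides every required
# capability. Capability labels here are hypothetical.

def is_compliant(profile_requirements, datastore_capabilities):
    """True if the datastore satisfies every requirement in the profile."""
    return set(profile_requirements) <= set(datastore_capabilities)

gold_profile = {"ssd", "replicated"}            # mission-critical tier
datastore_a = {"ssd", "replicated", "thin"}     # high-performance storage
datastore_b = {"sata", "thin"}                  # commodity storage

assert is_compliant(gold_profile, datastore_a) is True
assert is_compliant(gold_profile, datastore_b) is False
```

A compliance report is then just this check run across every VM/datastore pairing, which is effectively what the feature surfaces in the vSphere client.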
Hyper-V scores points on versatility
Microsoft's Hyper-V made its debut with Windows Server 2008, but lacked many of the features required of a true enterprise-class hypervisor. Hyper-V didn't really mature until Windows Server 2012 and was further improved in Windows Server 2012 R2. I've centered my examination and comparison on the Windows Server 2012 R2 version of Hyper-V.
Hyper-V is considered very versatile when it comes to storage. Microsoft documentation indicates Hyper-V can use:
- Server Message Block (SMB) 3.0 storage
- iSCSI direct disks
- iSCSI VM disks
- FC-connected storage
- Virtual FC storage
- A shared VHDX file residing on a Cluster Shared Volume
Like VMware, Hyper-V supports live migrations with or without the use of shared storage. Even so, Microsoft recommends using shared storage whenever possible. Regardless of whether shared storage is used, the VMs must be configured to use either virtual hard disks or FC disks.
The use of physical (pass-through) disks is only supported in very specific circumstances. To live migrate these types of disks, the VM must be running on a failover cluster and the VM configuration file must reside on a Cluster Shared Volume. Furthermore, the physical disk must be configured as a storage disk resource under the control of the cluster.
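The pass-through disk requirements above amount to three preconditions, which can be expressed as a simple checklist. This is a hypothetical helper, not a Hyper-V API; the parameter names and messages are illustrative.

```python
# Hypothetical pre-flight check encoding the pass-through disk rules above:
# live migration requires a clustered VM, a configuration file on a Cluster
# Shared Volume, and the disk managed as a cluster storage resource.

def passthrough_migration_blockers(on_failover_cluster, config_on_csv,
                                   disk_is_cluster_resource):
    """Return the reasons a pass-through disk can't be live migrated."""
    blockers = []
    if not on_failover_cluster:
        blockers.append("VM is not running on a failover cluster")
    if not config_on_csv:
        blockers.append("VM config file is not on a Cluster Shared Volume")
    if not disk_is_cluster_resource:
        blockers.append("disk is not a cluster-controlled storage resource")
    return blockers

# An empty list means live migration of the pass-through disk is possible.
assert passthrough_migration_blockers(True, True, True) == []
assert passthrough_migration_blockers(True, False, True) == [
    "VM config file is not on a Cluster Shared Volume"]
```

Returning the full list of blockers, rather than a single boolean, reflects how an administrator would actually troubleshoot a failed migration: every unmet precondition needs fixing, not just the first one found.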
When it comes to shared storage, Hyper-V supports FC storage, iSCSI storage, and SMB 3.0 storage -- SMB storage wasn't supported until Windows Server 2012.
Hyper-V VMs can connect to an extremely wide variety of storage types. The real limiting factor for VM external storage use, however, tends to be the backup process. Online, host-level backups of Hyper-V make use of the Hyper-V VSS Writer. The VSS Writer natively supports VM backups for VMs that make use of:
- iSCSI storage attached through the host OS (the Hyper-V VSS Writer can't back up iSCSI storage that has been initiated from within the VM)
- FC storage attached through the host OS
- Virtual FC
Conversely, guest-level backups support backing up any storage resource visible to the VM, regardless of its type.
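The backup constraint described above is a simple dispatch on how the storage is attached. The labels below are illustrative stand-ins, not official Hyper-V or VSS identifiers.

```python
# Sketch of the Hyper-V backup constraint: the VSS Writer supports online
# host-level backups only for certain storage attachments; anything else
# (e.g., guest-initiated iSCSI) requires a guest-level backup instead.
# Attachment labels are hypothetical.

HOST_BACKUP_SUPPORTED = {"host-iscsi", "host-fc", "virtual-fc"}

def backup_method(storage_attachment):
    """Pick host-level VSS backup where supported, else guest-level."""
    if storage_attachment in HOST_BACKUP_SUPPORTED:
        return "host-level (VSS Writer)"
    return "guest-level"

assert backup_method("host-fc") == "host-level (VSS Writer)"
assert backup_method("guest-iscsi") == "guest-level"
```

In practice this decision shapes backup architecture: a VM with guest-initiated iSCSI LUNs needs a backup agent inside the guest, even if every other VM on the host is protected at the host level.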
In addition to its live migration capabilities, Hyper-V offers a storage migration feature similar to VMware's Storage vMotion. Microsoft refers to this capability as Storage Live Migration. Microsoft has also introduced a number of new capabilities surrounding VHDX-based virtual hard disks, such as the ability to resize a virtual hard disk while the VM is running.
Although Red Hat, VMware and Microsoft offer some similar capabilities within their hypervisors, there are major differences in the way the various platforms support physical storage. Before investing in any hypervisor, it's a good idea to make sure your intended storage will be supported and to gain a clear understanding of all the product features as they relate to storage.