The popular VMware and Hyper-V hypervisors have added important new storage capabilities that provide greater configuration flexibility and deliver a performance boost.
Over the last few years, VMware and Microsoft, the leading hypervisor vendors, have been innovating at a breakneck pace. Although the countless new product features have been largely beneficial to virtualization administrators, there are numerous storage-related features in the latest versions of VMware and Hyper-V that could have an impact on your storage configuration, provisioning and management.
VMware storage-related feature improvements
Fibre Channel improvements: One of the most welcome improvements in VMware vSphere 5.5 is its improved Fibre Channel (FC) support. VMware has offered support for 16 Gbps FC host bus adapters (HBAs) since the release of version 5.0. And while those devices were technically -- and nominally -- supported, VMware used a throttling mechanism to prevent them from exceeding 8 Gbps.
Version 5.1 allowed 16 Gbps HBAs to operate at their rated 16 Gbps speeds, but there was no support for full, end-to-end 16 Gbps connectivity. The only way to achieve full 16 Gbps throughput was to create multiple 8 Gbps connections between the switch and the array. This limitation was removed in vSphere 5.5, and full, end-to-end connectivity is finally supported at 16 Gbps speeds.
Larger virtual machine disk files: Another big change VMware unveiled in vSphere 5.5 was an increase in the maximum supported size of virtual machine disk (VMDK) files. The previous limit was 2 TB (less 512 bytes). The new VMDK limit is a whopping 62 TB.
Virtual SAN: A VMware vSAN can be implemented on a VMware host cluster. vSAN aggregates the direct-attached storage in the various nodes within the cluster and then treats that storage as a shared SAN resource.
One important thing to understand about vSAN storage is that unlike a traditional, physical SAN, a VMware vSAN isn't multipurpose. For instance, a physical SAN can accommodate any type of data. In contrast, a vSAN can only store virtual machines (VMs).
Each ESXi server that participates in a vSAN must have at least one SATA or SAS hard disk drive that can be dedicated to vSAN storage. Additionally, each vSAN participant must be equipped with at least one solid-state drive (SSD) to be used as a read/write cache. The SSD is used solely for caching, and its capacity isn't included in the storage that the vSAN makes available to the VMs in the cluster.
The vSphere Flash Infrastructure layer: Another storage-related feature introduced in vSphere 5.5 is the vSphere Flash Infrastructure layer. This feature allows administrators to aggregate SSDs and other flash storage devices such as PCI Express-based solid-state storage into a pool of flash resources. The nice thing about the way VMware implemented this feature is that you can choose which devices you wish to include in the Flash Read Cache; vSphere doesn't automatically commandeer all of your server's flash storage.
The vSphere Flash Infrastructure layer primarily serves as the basis for the vSphere Flash Read Cache (vFRC). Portions of this cache are allocated on a per-VM basis and help to improve performance through caching.
Despite its name, the vFRC also improves write operations by using write-through caching. With write-through caching, data is written to the cache and underlying storage system simultaneously. That way, the data is readily available for use within the cache and written to permanent storage so that the cached data doesn't have to be transferred later on. This method has the added benefit of helping to prevent data loss in the event of a power failure or system crash by ensuring that data persists outside the cache.
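The write-through policy described above is straightforward to sketch. The following is a minimal, hypothetical Python model (not VMware's implementation; the class and method names are illustrative assumptions): every write updates the cache and the backing store before returning, so acknowledged data always exists in permanent storage.

```python
class WriteThroughCache:
    """Minimal write-through cache sketch (illustrative, not vFRC's actual code)."""

    def __init__(self, backing_store):
        self.cache = {}               # fast tier (e.g., flash)
        self.backing = backing_store  # slow, persistent tier

    def write(self, key, value):
        # Write-through: update the cache AND persistent storage before
        # returning, so a crash after this call never loses acknowledged data.
        self.cache[key] = value
        self.backing[key] = value

    def read(self, key):
        # Serve hot reads from the cache; fall back to persistent storage.
        if key in self.cache:
            return self.cache[key]
        value = self.backing[key]
        self.cache[key] = value       # populate the cache for future reads
        return value
```

A write-back cache, by contrast, would defer the `self.backing[key] = value` step, improving write latency at the cost of possible data loss on a power failure, which is exactly the trade-off the write-through design avoids.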
In addition, the vSphere Flash Infrastructure layer can be used to store the host server's swap file. vSphere is designed so that the vast majority of a physical server's memory can be allocated to the VMs it hosts, so the hypervisor swaps to disk rather than consume physical RAM that could be better used by VMs. In vSphere 5.5, the host swap file can be placed on the vSphere Flash Infrastructure layer, which should result in improved host performance.
Hyper-V adds tiered storage
Although not technically a Hyper-V feature, one of the most beneficial storage features in Windows Server 2012 R2 is tiered storage. Windows Server is now able to differentiate between SSD and hard disk drive storage, allowing administrators to create a high-speed tier made up of SSDs and a standard tier composed of commodity hard disks. Windows automatically pins frequently read data to the high-speed tier as a way of delivering optimal performance. If the high-speed tier is of a sufficient size, Windows also uses it as a write cache.
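The pinning behavior described above can be modeled simply: track read counts per block and periodically promote the hottest blocks to the fast tier. This toy Python sketch is purely conceptual; the real Storage Spaces tiering engine uses its own heat map and scheduled optimization pass, so the names and logic here are illustrative assumptions.

```python
from collections import Counter

class TieredStore:
    """Toy model of read-frequency-based storage tiering (illustrative only)."""

    def __init__(self, ssd_capacity):
        self.ssd_capacity = ssd_capacity  # blocks the fast tier can hold
        self.reads = Counter()            # read count per block
        self.ssd_tier = set()             # blocks currently on the fast tier

    def record_read(self, block):
        # Count reads; this is the "heat" signal used to pick hot data.
        self.reads[block] += 1

    def retier(self):
        # Periodically promote the most frequently read blocks to the SSD
        # tier; everything else stays on (or moves back to) the HDD tier.
        hottest = [b for b, _ in self.reads.most_common(self.ssd_capacity)]
        self.ssd_tier = set(hottest)
```

The key design point this illustrates is that tiering is reactive and periodic: data earns its place on the fast tier through observed access patterns rather than through up-front placement decisions by the administrator.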
In some ways, the Windows tiered storage feature is similar to VMware's Virtual SAN (vSAN) because both features are designed to provide SAN-like capabilities for commodity storage. But there's one very important difference between Windows storage tiers and the VMware vSAN: Windows storage tiers are implemented on a per-physical server basis. VMware's vSAN is a cluster feature. Even though it uses direct-attached storage on a per-server basis, the storage is aggregated so that a single vSAN contains storage from multiple host servers. In contrast, Windows storage pools contain only local resources.
New Hyper-V storage capabilities
Although VMware has long been the enterprise-class hypervisor of choice, Microsoft Hyper-V has been vastly improved over its last two versions and is now more or less on par with VMware, making it a viable option for use in enterprise environments. Just as VMware has introduced a number of storage-related features in vSphere 5.5, Microsoft has given storage considerable attention in Windows Server 2012 R2 Hyper-V.
Hyper-V Storage Quality of Service: One of the most important new features in Hyper-V is Storage Quality of Service (Storage QoS). QoS has long referred to a technology that allows network bandwidth to be reserved or throttled according to the needs of specific applications, but Microsoft has applied the concept to storage I/O.
Often, multiple VMs -- or multiple virtual hard disks (VHDs) within a single VM -- will share a single physical storage device. In that scenario, the various VHDs compete with one another for IOPS. This has led Hyper-V administrators to either group VMs by their IOPS profiles or to adopt complex storage infrastructures with numerous dedicated LUNs.
Storage QoS allows IOPS to be reserved or throttled on a per-VHD basis. It can ensure that a VHD always has a minimum number of IOPS available to it, that a VHD never consumes an excessive number of IOPS, or both. Best of all, these settings apply individually to each virtual hard disk, so it's possible to enable Storage QoS for one VHD while leaving another disk unrestrained, even within a single VM.
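An IOPS cap of this kind is commonly implemented as a token bucket: tokens are replenished at the configured rate, and each I/O consumes one token or must wait. The following Python sketch is purely conceptual (Microsoft's actual Storage QoS internals aren't described in this article, so the class and method names are assumptions):

```python
class IopsThrottle:
    """Token-bucket sketch of a per-VHD IOPS cap (illustrative only)."""

    def __init__(self, max_iops):
        self.max_iops = max_iops   # tokens granted per one-second interval
        self.tokens = max_iops

    def start_interval(self):
        # Refill once per second; unused tokens don't carry over past the cap,
        # so a quiet disk can't later burst far above its configured limit.
        self.tokens = self.max_iops

    def try_io(self):
        # Returns True if the I/O may proceed now, False if it must queue
        # until the next refill interval.
        if self.tokens > 0:
            self.tokens -= 1
            return True
        return False
```

A minimum-IOPS reservation works from the other direction: rather than denying tokens to one disk, the scheduler throttles its neighbors whenever the reserved disk would otherwise fall below its floor.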
VHD sharing: Another new feature that Microsoft introduced in the latest version of Hyper-V is VHD sharing. Prior to the release of Windows Server 2012 R2, guest clusters that required shared storage typically had to be connected to a physical LUN. In the latest release, Microsoft has made it possible for multiple VMs to attach the same shared VHDX file, thereby allowing that virtual disk to be used as shared storage for a guest cluster.
To use this feature, the shared VHD must be hosted in a location where it's accessible to the various VMs. Typically, this means hosting the VHD file on SMB 3.0 storage (on a NAS device or scale-out file server) or on a Cluster Shared Volume.
Unmanaged storage failure detection: The previous version of Hyper-V was the first to allow administrators to store VM components on an SMB 3.0-based file share. One problem with doing so, however, was that SMB shares aren't managed by the Windows Server Failover Clustering service, so failures of the underlying physical disks couldn't be readily detected. In the latest release, Hyper-V is able to detect SMB 3.0 storage failures and, when possible, will respond by moving the affected VM to another node within the cluster.
Generation 2 VMs: One of the biggest changes Microsoft has made to Hyper-V is the introduction of Generation 2 VMs. Generation 2 VMs offer better performance than Generation 1 VMs, but guest operating system support is limited to Windows Server 2012, Windows Server 2012 R2 and 64-bit versions of Windows 8 and 8.1.
Although Generation 2 VMs aren't a storage feature per se, they do have some implications for storage administrators because the support for physical DVDs and virtual IDE controllers has been removed.
Generation 2 VMs can be hosted on any physical storage supported by Hyper-V, but all VHDs are treated as SCSI. Incidentally, Microsoft has made it possible to boot from a SCSI VHD, which wasn't supported in previous versions of Hyper-V. Generation 2 VMs support the Preboot Execution Environment (PXE) so they can boot from a standard network adapter.
Hypervisors improve storage, gain performance
VMware and Microsoft have introduced a number of storage-related features in their most recent versions. Although none of these features is likely to force storage administrators to completely rethink the way they do things, storage admins can deliver better performance for VMs by taking advantage of some of these new features.
About the author:
Brien Posey is a Microsoft MVP with two decades of IT experience. Previously, Brien was CIO for a national chain of hospitals and health care facilities.