We've reached the point at which more than half of all production workloads are virtualized. One reason for not virtualizing a workload is that it may be extremely I/O-intensive or overly sensitive to latency. These types of workloads have long been considered impossible or impractical to virtualize. However, virtual flash caching can make them far easier to virtualize.
Both VMware and Microsoft have their own approaches to flash-based caching. In both cases, flash memory (in the form of solid-state drives) is used to cache read operations. Microsoft's approach also provides write caching.
Windows Storage Spaces automates flash usage
Microsoft's approach to providing flash caching for virtual machines (VMs) is based on Windows Storage Spaces rather than Hyper-V. That said, because Hyper-V is a Windows Server role, it can take full advantage of most Windows Storage Spaces features.
In Windows Server 2012 R2, Microsoft introduced native tiered storage with Windows Storage Spaces. The feature allows an administrator to carve up the available physical storage into storage pools. A storage pool can contain solid-state drives (SSDs), hard disk drives (HDDs) or a mixture of the two. In most cases, the Windows OS can automatically differentiate between the two storage types. In situations in which SSDs aren't recognized as such, it's possible to use PowerShell to manually differentiate between SSDs and HDDs.
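Correcting a misdetected drive takes only a pair of Storage Spaces cmdlets. The sketch below assumes a pooled drive that Get-PhysicalDisk reports with the wrong media type; the friendly name "PhysicalDisk3" is a placeholder for whatever name your environment reports.

```powershell
# List physical disks along with the media type Windows detected
Get-PhysicalDisk | Select-Object FriendlyName, MediaType, Size, CanPool

# Manually mark a misdetected drive as an SSD so tiering can use it
# ("PhysicalDisk3" is a placeholder -- use the name reported above)
Set-PhysicalDisk -FriendlyName "PhysicalDisk3" -MediaType SSD
```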
The primary job of a Windows storage pool is to provide raw storage capacity for use by one or more virtual hard disks. Virtual disks can be created through Server Manager and can be treated as local storage or as storage for a Hyper-V VM. Many real-world deployments use a nested approach in which a virtual disk is built on top of a storage pool. That virtual disk is treated as local storage for the host OS, and the virtual hard disks used by Hyper-V VMs reside within that virtual disk.
If a virtual disk is created on top of a storage pool, the New Virtual Disk Wizard checks for the presence of SSDs. If SSDs are present, the wizard may display a checkbox the user can select to enable tiered storage. Whether this checkbox is displayed depends on whether enough SSDs exist to accommodate the storage layout of the virtual disk. For example, a virtual disk that uses a mirrored layout requires two physical disks, so if storage tiering is used, two SSDs will be needed as well.
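The wizard's work can also be done from PowerShell. The sketch below assumes a pool named Pool1 that contains enough SSDs and HDDs for a mirrored layout; the pool, tier, and disk names, along with the tier sizes, are placeholders.

```powershell
# Define an SSD tier and an HDD tier within the existing pool
$ssdTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "SSDTier" -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName "Pool1" `
    -FriendlyName "HDDTier" -MediaType HDD

# Create a mirrored virtual disk that spans both tiers
New-VirtualDisk -StoragePoolFriendlyName "Pool1" -FriendlyName "TieredDisk1" `
    -StorageTiers $ssdTier, $hddTier -StorageTierSizes 100GB, 900GB `
    -ResiliencySettingName Mirror
```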
When a virtual disk is built using a storage tier, two things happen.
- The Windows OS keeps track of the storage blocks that are read most frequently. These storage blocks (which Microsoft refers to as hot blocks) are automatically moved to the high-speed storage tier. The idea is that the most frequently accessed data receives the best possible performance. It's also possible to manually pin files to the high-speed tier, so they will always reside on high-speed storage.
- Windows creates a 1 GB write cache (assuming the high-speed tier is large enough). The write cache is designed to smooth out write operations. The OS can write data initially to the high-speed tier, and then move the data to a standard tier during periods of low I/O demand.
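Pinning a file to the high-speed tier, as mentioned above, is also a PowerShell operation. The sketch assumes a tier named SSDTier and a virtual hard disk at E:\VMs\sql-server.vhdx; both names are placeholders.

```powershell
# Pin a latency-sensitive virtual hard disk to the SSD tier
$ssdTier = Get-StorageTier -FriendlyName "SSDTier"
Set-FileStorageTier -FilePath "E:\VMs\sql-server.vhdx" -DesiredStorageTier $ssdTier

# The pin takes effect the next time the volume's tiers are optimized
Optimize-Volume -DriveLetter E -TierOptimize
```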
As previously mentioned, it's possible to configure a Hyper-V VM to make direct use of a virtual hard disk that was created on top of a Windows storage pool. If Hyper-V virtual hard disks exist inside of a storage pool virtual hard disk, the Hyper-V virtual disks still receive the benefit of the underlying capabilities. It's worth noting that Hyper-V disks will have to share the I/O bandwidth and high-speed cache of the Windows disks if multiple Hyper-V virtual hard disks reside within a single Windows virtual hard disk.
vSphere creates a dedicated Flash Read Cache
Microsoft isn't the only hypervisor vendor to use flash-based caching. VMware allows high-speed caching through its vSphere Flash Read Cache feature.
VMware's approach has a few things in common with Microsoft's. vSphere Flash Read Cache is intended to reduce latency by making strategic use of flash storage. The caching process is also completely transparent: VMs are oblivious to the cache's existence, so no cache-related agents are required.
This is where the similarities end. Microsoft's approach involves building virtual hard disks on storage tiers controlled by the Windows Server OS. VMware, in contrast, treats flash storage as a provisionable resource. Just as vSphere allows for the creation of CPU pools and memory pools, flash-based caching is based on the creation of a logical object called the Virtual Flash Resource.
The Virtual Flash Resource is nothing more than a logical grouping of flash storage capacity (essentially, a pool of SSDs). Even so, there are some important things to know about it:
- Flash storage must be dedicated to the cache. SSDs can't be shared by a SAN or NAS and the Virtual Flash Resource -- they must belong to one or the other. Similarly, you can't place a VMware data store on Virtual Flash Read Cache storage.
- It's a host-level object. In other words, the cache is used by resources on a specific host server. The cache isn't a cluster-level object, and the contents aren't replicated among cluster nodes. However, vMotion is flash-cache-aware. Administrators can choose to include cache contents when using vMotion to move a VM to another host, or the cache contents can be abandoned. If the cache contents are copied, the destination host must have its own Virtual Flash Read Cache.
- There's a penalty for using the Virtual Flash Read Cache in conjunction with vMotion. If the cache contents are included in the vMotion operation, it will take longer to move the VM than it would have if the cache did not exist or if the cache contents hadn't been migrated. Unfortunately, it's difficult to estimate the amount of extra time the cache contents add to the vMotion process since the duration is based on such variables as cache size and the amount of available network/storage bandwidth.
- There's a penalty for not including the cache contents in vMotion operations. If the cache contents aren't included in a move, the vMotion process will take the same amount of time to complete as it would if caching wasn't used. The performance penalty comes into play after the vMotion completes because the VM will no longer have storage blocks stored on an SSD cache. The performance of the VM will eventually recover, but the cache will have to be rebuilt first. This process is similar to when a flash-based cache is first added to a VM and vSphere has to learn which data should be cached.
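The Virtual Flash Resource described above is typically configured through the vSphere Web Client, but an ESXi 5.5 (or later) host can also be inspected from its shell. The commands below are a sketch assuming shell access to the host; they list the flash devices backing the resource and the per-VM caches carved from it.

```shell
# List the flash devices available to (or consumed by) the Virtual Flash Resource
esxcli storage vflash device list

# List the read caches currently allocated from the resource
esxcli storage vflash cache list
```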
In case you're wondering how Hyper-V handles this, most Hyper-V cluster deployments are based on shared storage. If all the Hyper-V hosts use the same physical tiered storage, a live migration shouldn't affect the contents of the cache.
Although the VMware vSphere Flash Read Cache is designed to boost VM performance, the VMs don't access the cache directly. Instead, a component known as the vSphere Flash Read Cache Infrastructure acts as a broker that controls flash cache usage. The vSphere Flash Read Cache Infrastructure also enforces administrative policies related to the cache.
While the primary job of the vSphere Flash Read Cache Infrastructure is to broker VM cache access, it also allows the hypervisor to utilize the cache through the Virtual Flash Host Swap Cache feature, which replaces vSphere 5.0's Swap to SSD.
Whether you're working in a Microsoft or a VMware environment, flash-based caching has the potential to greatly improve VM performance. The key to receiving the greatest benefit from this caching is to understand how your hypervisor uses the cache, then to add flash storage in a way that adheres to the established best practices for your hypervisor.
About the author:
Brien Posey is a Microsoft MVP with two decades of IT experience. Previously, Brien was CIO for a national chain of hospitals and health care facilities.