Implementing virtual desktops is one of those IT projects that looks good on paper but is difficult to execute in the real world. To some extent, it is a throwback to the days of dumb terminals, when IT had 100% control over the user experience. The problem is that a user's desktop is much more than a dumb terminal. The primary roadblock to user adoption is the performance of the virtual desktop, and most of these virtual desktop infrastructure (VDI) performance issues are directly linked to the storage infrastructure.
VDI storage challenges
One challenge with VDI is meeting user expectations for performance. Most IT planners design their VDI to match the performance of a hard disk-based laptop. The reality is that users have become accustomed to the instantaneous nature of flash. The modern laptop is flash-based, and users expect near-immediate response from any device they use, including a virtual desktop.
Another challenge is related to budget. The typical justification for a VDI project is its ability to reduce the operational expenses associated with supporting user desktops and laptops. Reaching a low cost per virtual desktop means stacking as many virtual machines as possible onto as few physical servers as possible and connecting them to a single storage system. There is typically plenty of CPU horsepower in the physical hosts to support a high number of virtual desktops, but high virtual desktop density creates storage I/O contention. While the IOPS requirement per virtual desktop is small, the combined load of hundreds of desktops hitting a single storage system adds up to a very large problem.
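The arithmetic behind that contention is easy to sketch. The figures below are illustrative assumptions, not vendor guidance or measured values:

```python
# Illustrative sketch: how modest per-desktop IOPS accumulate at scale.
# All figures are assumptions chosen for illustration only.

STEADY_STATE_IOPS_PER_DESKTOP = 15   # assumed light office workload
BOOT_STORM_IOPS_PER_DESKTOP = 60     # assumed burst during login/boot
DESKTOP_COUNT = 500                  # desktops sharing one storage system

steady_load = DESKTOP_COUNT * STEADY_STATE_IOPS_PER_DESKTOP
burst_load = DESKTOP_COUNT * BOOT_STORM_IOPS_PER_DESKTOP

print(f"Steady-state load: {steady_load} IOPS")   # 7500
print(f"Login-storm load:  {burst_load} IOPS")    # 30000
```

Fifteen IOPS is trivial for any array; 30,000 concentrated random IOPS is not, which is why density, not any single desktop, drives the storage design.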
The third storage challenge is dealing with the capacity demands that result from centralizing hundreds of desktops. Most desktop virtualization products can clone desktops from a master image to significantly reduce the overall capacity requirement. The problem is that cloning increases the storage I/O requirement. VDI administrators can choose between two desktop modes. The most popular is persistent desktops, which allow users to personalize their desktop and install unique applications. However, combining persistent desktops with cloning and thin provisioning can lead to a write amplification problem. Before each write, these data efficiency techniques must do their work: capacity has to be allocated and a link back to the master image established. In other words, one write operation can lead to three to five I/O operations.
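A minimal model makes the amplification concrete. The three-to-five multiplier is the estimate quoted above; everything else in the sketch is an illustrative assumption:

```python
# Sketch of write amplification with cloned, thin-provisioned persistent
# desktops: one logical write can trigger several backend I/Os (allocate
# capacity, update the link to the master image, then write the data).
# The 3x-5x range is the article's estimate; the model is illustrative.

def backend_ios(logical_writes: int, amplification: float) -> int:
    """Backend I/O operations generated by a batch of logical writes."""
    return round(logical_writes * amplification)

# 1,000 logical desktop writes at the low and high end of the range:
print(backend_ios(1_000, 3))  # 3000 backend I/Os
print(backend_ios(1_000, 5))  # 5000 backend I/Os
```

Multiplied across hundreds of persistent desktops, this amplification is what turns a capacity-saving feature into an I/O problem.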
Another option is to use nonpersistent desktops. In this implementation, desktops are created on the fly as users log into the environment, only consuming capacity as the desktop is used. In the past, the drawback to using nonpersistent desktops was their lack of personalization. Today there are several software platforms and profile managers that allow users to customize nonpersistent desktops. There is still a storage challenge, however. The time to create a user desktop, on the fly, is critical to user acceptance. If hundreds of users all log in at about the same time, for example at the beginning of the work day, it can take a while for desktop instances to become available, leading to user frustration.
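The login-storm wait can be reasoned about with a back-of-the-envelope calculation. The per-clone time and concurrency figures below are hypothetical assumptions, not measurements from any particular platform:

```python
# Sketch of login-storm math for nonpersistent desktops: if each desktop
# takes a fixed time to create and the platform can only build a few
# concurrently, the last users to log in wait longest.
# All figures are illustrative assumptions.

import math

SECONDS_PER_CLONE = 30      # assumed time to create one desktop on the fly
CONCURRENT_CLONES = 8       # assumed parallel provisioning slots
USERS_LOGGING_IN = 400      # users arriving at the start of the work day

batches = math.ceil(USERS_LOGGING_IN / CONCURRENT_CLONES)
worst_case_wait_s = batches * SECONDS_PER_CLONE

print(f"Worst-case wait: {worst_case_wait_s / 60:.0f} minutes")  # 25 minutes
```

Under these assumptions, either faster storage (shrinking the per-clone time) or more provisioning concurrency is needed to keep the last user's wait acceptable.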
The final challenge, no matter the desktop mode, is delivering consistent and predictable performance. The storage system has to respond to morning login storms, anti-virus scans, software updates and just the general I/O that hundreds of desktops will create on a busy day. Delivering predictable performance is made more difficult as the solutions to the above problems are implemented. As mentioned, each "solution" has a ripple effect on VDI performance issues.
Solving VDI storage challenges
The good news is that VDI performance issues can be addressed while keeping costs in check. The following five implementations can be used to solve performance problems. At the heart of each of them is flash storage. Hard disks, while cost-effective on a price-per-GB basis, simply can't keep up with the massive random I/O requirement that VDI demands.
- Server-side flash. For current VDI projects relying on hard disk drive-based arrays for storage, a server-side flash product may be a quick fix. These products typically combine server-based caching software with server flash storage. When server-based caching products first came to market, they only cached reads. The read-only nature made them a poor fit for VDI storage acceleration because most of the above storage I/O challenges are write I/O related. Now, however, many server-side caching products can safely cache write I/O by either mirroring the write to another server or a shared flash storage area.
- Hybrid flash arrays. Early hybrid flash arrays used a minimal amount of flash to keep costs down, which made it difficult to determine what performance would be when under load. As flash storage prices have decreased, the amount of flash included with a hybrid system has increased dramatically. Today, it is practical to have a flash tier that represents 25% of overall capacity, making the chances of a cache miss small. Hybrid arrays, of course, also have a hard disk storage tier that can be used to store user data. In this design, the desktop and its applications are stored on and load from flash. The user data is stored on the hard disk tier exclusively. Some hybrid storage systems are multi-protocol (block and file) and can eliminate the need for a separate NAS for user data altogether.
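The sizing logic behind the 25% flash tier can be sketched as a simple fit test: cache misses stay rare as long as the active working set (desktop images and applications) fits inside the flash tier. The capacity numbers are illustrative assumptions:

```python
# Rough sizing sketch for a hybrid array's flash tier, per the premise
# above: if the flash tier covers the active working set, cache misses
# become rare. All figures are illustrative assumptions.

TOTAL_CAPACITY_TB = 100
FLASH_FRACTION = 0.25            # flash tier = 25% of overall capacity
ACTIVE_WORKING_SET_TB = 18       # assumed hot data: OS images + apps

flash_tier_tb = TOTAL_CAPACITY_TB * FLASH_FRACTION
fits_in_flash = ACTIVE_WORKING_SET_TB <= flash_tier_tb

print(f"Flash tier: {flash_tier_tb:.0f} TB, working set fits: {fits_in_flash}")
```

User data, which is large but mostly cold, stays on the hard disk tier and never competes for that flash capacity.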
- All-flash arrays. All-flash arrays are the performance sledgehammer. Assuming a well-designed storage network, they overcome all of these VDI performance issues. The biggest challenge with an all-flash array is, of course, cost. This design means replacing the least expensive storage, a laptop with consumer flash, with the most expensive shared enterprise flash-based array. Despite the initial cost disadvantage, these systems can support a significantly higher number of virtual desktops while delivering performance that will make users prefer their virtual desktop rather than merely tolerate it. While it is unlikely that an organization would select an all-flash array specifically for VDI, an all-flash array can, of course, support a variety of workloads. As such, some shops may use an all-flash array to support VDI alongside other applications.
- Storage system data efficiency. Most all-flash arrays and some hybrid arrays have a key advantage: built-in data efficiency. Techniques like thin provisioning, snapshots, cloning, deduplication and compression allow the storage system to minimize capacity consumption, without impacting virtual desktop or physical host performance. Allowing the storage system, with its dedicated storage processors, to perform these functions instead of the capabilities built into the VDI software saves host processing power for supporting a large number of desktops.
- Hyper-converged architectures. Hyper-converged architectures (HCA) have proven very popular for VDI projects, which are often greenfield, meaning they need new servers and storage at the same time, both of which HCAs provide. HCAs also provide a safe and cost-effective way to use server-based flash: they aggregate flash resources across all the servers to create a global, parity-protected pool of storage. Like all-flash arrays, they provide data efficiency to further reduce storage costs. They also scale in a manner that is complementary to the way VDI scales. Each additional batch of desktops requires more compute, which in turn needs more storage capacity and performance. Each additional HCA node provides all of these resources: compute, capacity and I/O performance.
The challenge with HCAs is maintaining predictable performance, especially under load. The HCA expects the compute tier to run everything: virtual machines, networking, storage software and data efficiency. A spike in one area may cause other functions to suffer. Many HCAs offer enough processing power that even under extreme load users will not experience VDI performance issues. However, it is something to be aware of; be prepared to purchase additional compute resources to stay ahead of the problem.
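The scale-out sizing described above can be sketched as follows. The per-node specifications are for a hypothetical HCA node and exist only to illustrate the approach:

```python
# Sketch of HCA scale-out sizing: each node contributes compute, capacity
# and I/O performance, so nodes are added until every per-resource
# requirement is met. Per-node figures are illustrative assumptions.

import math

# Assumed resources for a hypothetical HCA node:
DESKTOPS_PER_NODE = 100       # compute headroom per node
CAPACITY_TB_PER_NODE = 10     # usable capacity per node
IOPS_PER_NODE = 20_000        # storage performance per node

def nodes_needed(desktops: int, capacity_tb: float, iops: int) -> int:
    """Smallest node count satisfying all three resource requirements."""
    return max(
        math.ceil(desktops / DESKTOPS_PER_NODE),
        math.ceil(capacity_tb / CAPACITY_TB_PER_NODE),
        math.ceil(iops / IOPS_PER_NODE),
    )

# 500 desktops needing 35 TB and 90,000 aggregate IOPS:
print(nodes_needed(desktops=500, capacity_tb=35, iops=90_000))  # 5
```

Note that whichever resource is scarcest dictates the node count; the other resources arrive as a side effect, which is both the convenience and the cost model of hyper-convergence.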
VDI is potentially the most challenging environment in which the storage infrastructure must provide consistent, predictable performance while still being cost-effective. In the past, it seemed the only option was to leverage server-based flash and ignore shared storage. However, thanks to the decreasing cost of high-performance flash, shared storage is now better suited to addressing these VDI performance issues.
The key is to use the technology to create density. The more desktop instances you can load on each host, the better the cost per desktop. An aggressive price per desktop allows IT professionals to meet the financial expectations of the project and gain operational efficiencies while meeting the most critical requirement: user acceptance.