How flash can pump up performance of virtual desktops

Virtual desktop environments can strain conventional hard drive storage systems, but strategically placed solid-state storage devices can boost performance.

Virtual desktop infrastructure promises numerous benefits to beleaguered IT operations. Organizations that must manage thousands, tens of thousands or even hundreds of thousands of endpoint devices face the daunting tasks of security, software version control and content delivery. Virtual desktop infrastructure (VDI) can help centralize operating system (OS) image management, which reduces the number of OS versions to support, simplifies the rollout of new application software versions and facilitates lifecycle management. It also decouples the application environment from the physical endpoint device, allowing IT to support everything from smartphones to desktop PCs without requiring individual device testing and qualification.

Users similarly benefit from instant access to corporate applications from any location that has cellular or Wi-Fi connectivity. Moreover, data can be protected by corporate backup policies, making it far less likely that data will be lost when portable devices are stolen or local hard drives crash.


VDI is tough on storage

Despite these benefits, VDI is no panacea for either IT managers or users, and it places additional stress on the local infrastructure. VDI “boot storms” are well known for their potential to bring storage subsystems to their knees with extremely high I/O demands during peak boot-up/logout periods, such as first thing in the morning, at lunchtime and at the close of the day. Companies must size their storage throughput for these peak periods, resulting in a high spindle-to-data ratio and, therefore, a high dollar-per-GB storage cost.

Solid-state drives (SSDs) are a recognized solution to the boot storm problem. Boot-up operations are almost entirely read I/O, and SSDs provide I/O performance substantially greater than that of hard disk drives (HDDs), especially on a per-GB basis. Although SSDs are much more expensive per GB, it’s not necessary to over-buy capacity to get the required aggregate I/O throughput, as may be the case with HDD-based storage systems.
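
To see why, consider a rough back-of-envelope sizing exercise, shown in the sketch below. Every figure is an illustrative assumption rather than a vendor specification, but the pattern holds: meeting the peak read load with HDDs forces the purchase of far more capacity than the boot images actually need, while a handful of SSDs covers it.

```python
# Back-of-envelope boot-storm sizing: how many drives are needed to meet
# a peak read load, HDD vs. SSD. All figures are illustrative assumptions,
# not vendor specifications.

DESKTOPS = 2000            # desktops booting in the same window
IOPS_PER_BOOT = 50         # sustained read IOPS per booting desktop (assumed)
HDD_IOPS = 150             # random-read IOPS of one 15K RPM HDD (assumed)
SSD_IOPS = 20_000          # random-read IOPS of one enterprise SSD (assumed)
HDD_GB, SSD_GB = 600, 200  # usable capacity per drive (assumed)

peak_iops = DESKTOPS * IOPS_PER_BOOT   # 100,000 IOPS at peak

hdds = -(-peak_iops // HDD_IOPS)       # ceiling division
ssds = -(-peak_iops // SSD_IOPS)

print(f"peak load: {peak_iops:,} IOPS")
print(f"HDDs needed: {hdds} ({hdds * HDD_GB:,} GB provisioned)")
print(f"SSDs needed: {ssds} ({ssds * SSD_GB:,} GB provisioned)")
```

At these assumed rates, roughly 667 HDDs (about 400 TB of provisioned capacity) would be needed to match the throughput of just five SSDs -- exactly the high spindle-to-data ratio described above.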

In the boot storm scenario, it makes sense to place the bootable images on a dedicated logical device and isolate access to that “drive.” It doesn’t make sense to use local SSD cache on the server, because the image may be moved to another server or may hog the cache to the exclusion of other data.

Beyond boot storms

Beyond boot storms, SSD would appear to be very attractive for other applications in virtual environments. After all, since user I/O is centralized, front-end SSDs could handle high-demand activity and take the strain off the back-end HDD infrastructure. This would allow organizations to use high-capacity, low-cost HDDs to support their virtual desktops without sacrificing performance.

Unfortunately, a generalized SSD solution just doesn’t work in this scenario. SSD delivers optimum performance, whether implemented as cache or as a separate storage tier, for data that’s read repeatedly. It’s far less effective for random I/O workloads or for write-intensive operations. In a VDI implementation, the multitude of systems requesting user data appears to the disk system as a highly random workload. Data access is individualized, so the data requested by one user is unlikely to be requested by the next and therefore can’t take advantage of SSD read speeds.
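
A toy cache model makes the effect visible. In the sketch below (all parameters are assumptions for illustration), each user reads mostly from a private working set, so a block cached for one user almost never serves another; the hit ratio collapses toward the fraction of total data the cache can hold.

```python
import random

# Toy model of an SSD read cache in front of per-user VDI data. Each user
# reads from a private working set, so one user's cached blocks rarely help
# anyone else. All parameters are assumptions for illustration.

USERS = 1_000
BLOCKS_PER_USER = 10_000    # private working set per user
CACHE_BLOCKS = 100_000      # cache holds 1% of the 10M total blocks
REQUESTS = 500_000

cache, hits = set(), 0
for _ in range(REQUESTS):
    user = random.randrange(USERS)
    block = (user, random.randrange(BLOCKS_PER_USER))
    if block in cache:
        hits += 1
    else:
        if len(cache) >= CACHE_BLOCKS:
            cache.pop()     # crude stand-in for LRU eviction
        cache.add(block)

print(f"hit ratio: {hits / REQUESTS:.1%}")  # hovers near 1% -- the cache fraction
```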

With randomized data access, data will be continually swapped in and out of SSD. This creates two problems at the hardware level. First, SSD write performance is notoriously inefficient. SSD providers compensate through various write-buffering and wear-leveling schemes, but these merely delay the inevitable. More insidiously, SSD cells can wear out in as few as 3,000 program/erase cycles, although enterprise-class SSD cells can last as long as 100,000 cycles. Even so, with thousands or tens of thousands of users accessing the system, it won’t take long to start burning out cells. As cells wear out, SSD performance gradually diminishes.
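
A simple endurance estimate shows how quickly a heavily written SSD can consume its write budget. The sketch below uses the cycle counts cited above plus assumed figures for capacity, daily write volume and write amplification:

```python
# Rough SSD lifetime estimate from per-cell endurance, using the cycle
# counts cited above plus assumed workload figures.

CAPACITY_GB = 400           # drive capacity (assumed)
PE_CYCLES = 3_000           # consumer-grade program/erase endurance
DAILY_WRITES_GB = 2_000     # aggregate daily writes from VDI users (assumed)
WRITE_AMPLIFICATION = 3.0   # internal writes per host write (assumed)

write_budget_gb = CAPACITY_GB * PE_CYCLES / WRITE_AMPLIFICATION
lifetime_days = write_budget_gb / DAILY_WRITES_GB
print(f"estimated lifetime: {lifetime_days:.0f} days "
      f"({lifetime_days / 365:.1f} years)")   # ~200 days at these rates
```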

One company that attempts to solve this problem is Virsto Software, with its Virsto for VDI “storage hypervisor” that virtualizes the storage serving virtual server implementations; both VMware and Hyper-V are supported. Virsto claims it can double VDI performance per physical machine and reduce the storage needed for hypervisors by as much as 90%. With thousands of virtual machines (VMs), this could be significant. Moreover, Virsto eliminates the random nature of VM I/O by converting it into sequential writes through a logging architecture.
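
The general technique behind such a logging architecture is straightforward, and the sketch below illustrates it in miniature. This is an illustration of log-structured writes in general, not Virsto’s actual implementation: random logical writes are appended to a sequential log, and an index tracks where each block’s latest version lives.

```python
# Minimal sketch of a log-structured write layer -- the general technique
# behind a logging architecture, not Virsto's actual code. Random logical
# writes are appended sequentially; an index maps each logical block to
# its latest position in the log.

class LogStructuredStore:
    def __init__(self):
        self.log = []      # append-only log of (block_id, data) records
        self.index = {}    # logical block id -> offset of latest version

    def write(self, block_id, data):
        # However random the logical address, the physical write always
        # lands at the tail of the log -- a sequential operation on disk.
        self.index[block_id] = len(self.log)
        self.log.append((block_id, data))

    def read(self, block_id):
        return self.log[self.index[block_id]][1]

store = LogStructuredStore()
for block in (9042, 17, 88311):   # scattered logical addresses
    store.write(block, f"payload-{block}")
print(store.read(17))             # -> payload-17
```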

The common thread in these implementations is finding (or creating) environments where data isn’t accessed randomly, or at least isn’t repeatedly rewritten. It’s neither cost-effective nor practical in most cases to implement an all-SSD storage environment. Storage managers need to target specific use cases where SSD will yield benefits that outweigh the costs.


Best apps for VDI + SSD

The best targets for using SSD to improve VDI performance are collaborative applications -- for example, reference materials that don’t change over time but are accessed frequently. Wikis, reference documents, legal materials and the like could benefit from the fast access of SSD. As with OS images, loading these materials onto dedicated logical devices will make better use of the SSD than front-ending the storage array with an SSD tier 0.

Another way to identify targeted use cases is to look for situations where data is common to many users. Informational databases are an obvious target because VDI users will access the data repeatedly, with little new data entry or modification. Although this use case doesn’t differ from normal computing in the broadest sense, specific applications will benefit in particular. For example, a sales-force automation application will have supporting file systems, databases or both. The sales force will have a common system image, making it a great fit for VDI. At the same time, the sales force will need access to the same sales support information, which may change infrequently -- also a solid use case for SSD in a VDI environment.

In an extension of the sales-force example above, business-to-business support can similarly benefit from VDI and SSD. Business partners can log into a standardized application environment from almost any device and access common information. Examples might include insurance applications, where numerous independent agents need access to specific portals and information.

Data warehouses and analytical databases are repositories of largely static information, but they may or may not benefit from SSD. The data in a warehouse isn’t usually dynamic, and analytics are often limited more by I/O performance than by the processor, so SSD could relieve the I/O bottleneck and significantly improve analytical speed. However, data warehouse analysis may not be time-sensitive in the real-time computing sense, so the cost of loading a data warehouse into SSD is justifiable only if real-time analytics are needed.

VDI + SSD + Cloud

VDI fits well into a private cloud strategy because clouds are inherently centralized points of service. A private cloud is also an ideal target for SSD, either as a tier 0 or for pre-positioning data near the data consumer. So, VDI and SSD would seem to find a nexus in private cloud. However, the risk for IT organizations deploying VDI in a private cloud is that VDI will simply be lumped in with all the other applications. If that happens, VDI won’t benefit from tier 0 performance, for the reasons described earlier.

Strategic deployments

The same care must be taken in deploying VDI in all environments, whether as an isolated platform in the data center or as a service from a private cloud. Storage architects should also bear in mind that SSD solves the I/O latency problem, not the network latency problem. VDI deployments might suffer from network latency, but the solution to that is independent of those described here.

There’s no question that SSD devices can significantly improve service delivery in VDI environments -- assuming judicious deployment. Generalized implementation may not yield the desired results and could actually create a costly bottleneck. Architecting from the frame of reference of an end user, rather than from data center system considerations, will help to target the appropriate location of an SSD device. If the SSD deployment would make sense for a single PC, then it will make sense for VDI.

BIO: Phil Goodwin is a storage consultant and freelance writer.

This was first published in July 2012