
Running VDI, virtual machines on SSD: Matching form factor to application

Solid-state storage can provide a performance boost for virtual machines (VMs) running I/O-intensive apps, for virtual desktop infrastructure (VDI) boot storms and for workloads that need random access to data.

Using solid-state storage in virtual environments can provide a performance boost for organizations that deploy a large number of virtual machines (VMs) and host servers, run I/O-intensive applications on VMs, require random access to data and install a virtual desktop infrastructure (VDI).

IT shops face a number of decisions when considering solid-state storage, including whether to use PCIe cards in servers, solid-state drives (SSDs) in storage arrays or all-flash arrays.

In the following podcast interview, George Crump, lead analyst and founder of Storage Switzerland, an analyst firm that focuses on storage, virtualization and cloud technology, offers up tips and advice on running virtual desktops and virtual machines on SSD.


Tips for using solid-state storage with VMs and VDI

Most companies use virtual servers, but not that many use solid-state storage with them. Do you think they should?

Crump: The answer is always “it depends.” But, in general, in a virtual environment, if you have any number of virtual machines and some physical hosts, the answer is typically going to be yes. The reason is that hard drives have trouble responding to random access, and solid-state happens to be very good at random access because there’s nothing that has to move or rotate. So, the more virtual machines you have and the more hosts you have, the more likely it is that you’re going to benefit, in some cases significantly, from having solid-state storage.

Is there some threshold number of virtual machines at which end users should consider using solid-state storage? Or is the decision related more to server workloads, applications and use cases?
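To illustrate why VM count matters, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (per-VM IOPS, per-device IOPS) is an assumed, illustrative number, not a benchmark or a value from the interview:

```python
# Rough, illustrative estimate of when consolidated VMs outrun spinning disks.
# All figures are assumptions for illustration only.

HDD_RANDOM_IOPS = 150      # assumed: a 10K RPM drive under random 4K I/O
SSD_RANDOM_IOPS = 50_000   # assumed: a SATA SSD under the same workload

def aggregate_demand(vm_count: int, iops_per_vm: int) -> int:
    """Total random IOPS the host's storage must serve."""
    return vm_count * iops_per_vm

def devices_needed(demand_iops: int, per_device: int = HDD_RANDOM_IOPS) -> int:
    """How many devices it takes to satisfy that demand (ceiling division)."""
    return -(-demand_iops // per_device)

demand = aggregate_demand(vm_count=40, iops_per_vm=75)
print(demand)                                          # 3000 IOPS
print(devices_needed(demand))                          # 20 hard drives
print(devices_needed(demand, per_device=SSD_RANDOM_IOPS))  # 1 SSD
```

The point of the sketch: random I/O from many consolidated VMs adds up quickly, and satisfying it with hard drives means buying spindles for IOPS rather than for capacity, which is where solid-state starts to pay off.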

Crump: I wish there was. The truth is it’s really tied more to the workload, application and use-case scenario. Clearly, there is a correlation between the number of virtual machines you have on a physical host and the number of physical hosts. But what’s probably going to drive it more is the type of applications. For example, if you’ve virtualized a bunch of servers that really don’t have much disk I/O need, then solid-state disk probably doesn’t make sense. But if you’ve virtualized databases, and even file servers and things like that, then you’d probably get to needing solid-state sooner than the other guy would.

There are many types of solid-state storage: PCIe cards in servers, solid-state drives in storage arrays, all-flash appliances and even others. Which options are most advantageous when running virtual machines on SSD?

Crump: That’s probably the hardest question to answer, because there is an almost overwhelming number of choices nowadays. Probably the best answer is: if you have an environment where one particular host, or maybe a handful of hosts, has a performance problem, something server-side like a PCIe card or even an internal solid-state drive can be beneficial. Another area where server-side SSD could be beneficial is in managing virtual memory swap space, so you don’t have to buy as much RAM in that particular server. [With] the more shared use case, where we’re putting solid-state drives in storage arrays or even the all-flash type of arrays, those pay benefits in that the cost can be amortized across many, many hosts. And so, I think what most enterprises will end up with eventually is a combination of everything, where you’ll have some solid-state storage in the host, and then you’ll have solid-state storage either augmenting hard disk or maybe even solid-state-only storage in the shared configuration.

In what scenarios do you advise using one or more tiers of solid-state storage, and when is solid-state storage cache a better option with virtual servers?

Crump: Again, I think it’s an “as the environment grows” type of answer. Initially, just one solid-state strategy will probably work, whether [it’s] something in a shared storage device or, as I said earlier, a fix for a one-off performance problem on a particular server. But over time, as you get more and more servers, the concept of putting solid-state storage locally to act as … storage memory, where it’s really acting more as slow, inexpensive RAM than as fast, expensive storage, starts to make sense; it compensates for having to buy as much RAM in the server, combined with some level of solid state on the shared system.

Are there any features in the major hypervisors or their host servers that are especially helpful to have with solid-state storage?

Crump: Not directly available from the hypervisor vendors, although they’re clearly hinting that they’re going to be providing that sort of capability. A good example of where that might show up is that we expect applications to be able to start, for lack of a better word, educating the solid-state devices, so … the cache example is a really good one. An application or a hypervisor has better predictive knowledge of what data’s going to be accessed next, and it could start pre-warming the solid-state storage area accordingly. There’s also a fair amount of third-party work being done on extensions to hypervisors that will do things like log structuring to improve write performance. Those would be ideal matches for solid-state.

We’ve spoken a lot about [running virtual machines on SSD], but what about virtual desktop environments? Does solid-state storage make sense there as well?

Crump: Yeah, and in fact, I would be willing to guess that solid-state in the virtual desktop environment is probably more predominant today than even in the virtual server environment, because there’s one specific cause that drives everybody to SSD, and that’s the morning boot storm or login storm. Getting that first set of files onto solid-state disk can make a significant difference in user performance, and VDI is all about user acceptance. If you deliver an experience that’s as good as or better than what users are currently experiencing on their local laptops, clearly acceptance will go up. So, absolutely, SSD in VDI environments is becoming very common.

With VDI environments, what’s the best type of solid-state storage to use?

Crump: Probably more of a shared infrastructure makes sense, where you’re going to use one of the all-flash storage appliances that we talked about earlier or maybe a tiering approach. The key there is to make sure that that critical boot-storm desktop data is going to be available and already on the SSD when the first logins start to happen. You don’t want to wait for all this stuff to get promoted. It would seem wrong to punish the people that show up early to work with the worst performance. So, you want to make sure all that’s positioned ahead of time, and different technologies will have the ability to either pin that kind of information on the SSD or to pre-promote it.
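The boot-storm pressure Crump describes is easy to put rough numbers on. The Python sketch below estimates the aggregate read throughput a VDI array must sustain during a login window; the desktop count, per-desktop boot read size and window length are all assumed, illustrative values, not figures from the interview:

```python
# Illustrative boot-storm arithmetic; every figure is an assumption.

def boot_storm_throughput_mb_s(desktops: int, boot_read_mb: int,
                               window_minutes: int) -> float:
    """Aggregate read throughput (MB/s) needed if `desktops` VDI sessions
    each read `boot_read_mb` of OS and profile data within the login window."""
    return desktops * boot_read_mb / (window_minutes * 60)

# Assumed scenario: 500 desktops, each reading ~300 MB to boot,
# all arriving within a 15-minute morning window.
print(boot_storm_throughput_mb_s(500, 300, 15))  # ~166.7 MB/s of mostly random reads
```

Sustaining that rate in mostly random reads is trivial for flash but expensive for spindles, and with linked-clone desktops much of that data is a shared master image, which is exactly the data you would pin or pre-promote onto the SSD tier before the first logins arrive.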
