Using virtual desktop infrastructure (VDI) is often seen as a way to improve desktop management and endpoint security, but it places big demands on data center storage, both in terms of capacity and performance. One common approach to reducing capacity needs is to share one disk image among users, which Brian Madden, VDI expert and founder of BrianMadden.com, says is the wrong approach. He also reveals that implementing solid-state drives (SSDs) isn't the sole solution to storage performance problems with VDI. In this podcast, Brian lays out the storage software capabilities that are needed and how SSD should be used to support a successful VDI deployment. Read the transcript below or listen to the podcast.
We know that in a VDI environment, IT has to plan for things like boot storms and login storms. And to combat those, a lot of experts recommend SSD as a solution for the onslaught of I/O requests. But you’ve said before that SSD doesn’t really solve performance problems in a VDI environment. Why is that?
Brian: So I have said that SSD doesn't really solve the performance problem. … You talk about boot storms and login storms. If that's your biggest issue and you've got magnetic storage and you wanted something faster, going to something like SSD is probably going to do it. The real storage problem in the world of VDI is not something that SSD solves. … Think of computers before VDI. So you've got your desktop, your laptop -- everyone with their laptop is different -- you have your own stuff, you have your own software. … We all own our own computers and whatever is installed on them. And when a lot of people go to VDI they say, "When you go to VDI, instead of every user having a separate disk image, let's have one disk image all the users share."
If you have a disk image all the users share, that takes less capacity for storage, which is nice because VDI is on servers, and server storage is way too expensive. So if [everyone is] sharing one image, it's way easier to manage because you only have to patch one image. It's less storage, and you also can have a [much] higher performance because [there is] one image that everyone is sharing -- it's very easy to put that in super-fast storage like SSD, or DRAM, or cache, or something like that. It's only one image so it's kind of small, and the super-fast storage is usually very expensive, but if it's only one image, then you can afford it.
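The capacity appeal of a single shared image is simple arithmetic. As a quick sketch (the 40 GB image size and 500-user count are invented for illustration, not figures from the interview):

```python
# Invented figures: a 40 GB golden image and 500 VDI users
IMAGE_GB, USERS = 40, 500

per_user_images = IMAGE_GB * USERS  # every user keeps a full private image
shared_image    = IMAGE_GB          # one golden image everyone boots from

print(per_user_images, "GB vs", shared_image, "GB")  # 20000 GB vs 40 GB
```

A 40 GB footprint is small enough to afford on SSD or to hold in cache; 20,000 GB is not, which is why the shared-image model looks so attractive on paper.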
So people say SSD is a solution … because you can put the one image in SSD and everyone can share it. The problem is, a lot of VDI projects actually fail because … [in] the world before VDI, everyone had their own computer with their own complete, separate programs, and everything was different. So if the only way you're banking on VDI being successful in your environment is that all your users are going to share the same disk image, that's not really possible, because [in] your environment before VDI everyone had their own image, and [in] your environment post-VDI you're banking on everyone sharing an image; that isn't going to work. All your users are magically, all of a sudden, using the same software? It's crazy. So a lot of VDI projects have failed because they tried to make all the users share an image, which doesn't really work. We don't share laptops, so it doesn't work in the VDI world.
So my point is, in order to make that happen, where every user has their own image, you need a lot of storage -- a lot of traditional storage. If it's a huge, massive magnetic SAN, people are saying, "Well, that's going to be huge and expensive, and how do we make that fast enough?" And then some people say, "We can buy SSD for all the storage," but if you have all the disk images for all these users, SSD is super expensive, because SSD doesn't actually change the behavior of storage; it just makes your existing storage really fast. So technically SSD could fix the problem, but it would be so expensive.
… The way to fix the storage problem with VDI is not just with SSD; it's actually looking at some of these new storage solutions that offer things like single-instance, block-level storage. What I mean is, you can have all these users with all these different disk images, so it looks like every user has their own image, but really the storage is consolidating -- kind of like deduplication, but not deduplication [as in] finding duplicate files overnight; I mean multiple users actually accessing their own disk images [while] the storage does block-level, single-instance deduplication in the background. That's a fundamental change to the way storage works. And these new storage solutions might involve using SSD technologies, but my point is it's not as simple as saying, "I have this SAN with all magnetic disks, and it's no good for VDI, so I'm just going to replace it with SSD." That doesn't fix anything, because it doesn't fundamentally change the way storage works. So … it's not just SSD -- you have to put a solution in place that is actually looking at all of these hard drive images from all these users, finding the common blocks among them, and pulling those out and consolidating them to cache or SSD. You have to change the way storage works for VDI to really get the benefit.
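To make the block-level, single-instance idea concrete, here is a toy Python sketch (the block size, image names, and data patterns are all invented for illustration): identical blocks across users' images are stored once, while each user still reads back a complete private image.

```python
import hashlib

BLOCK_SIZE = 4096  # a typical block size; purely illustrative here

class DedupStore:
    """Toy single-instance block store: identical blocks are kept only once."""
    def __init__(self):
        self.blocks = {}   # sha256 digest -> block bytes, stored once
        self.images = {}   # image name -> ordered list of block digests

    def write_image(self, name, data):
        digests = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(digest, block)  # keep only if unseen
            digests.append(digest)
        self.images[name] = digests

    def read_image(self, name):
        # Each user still sees a complete, private image
        return b"".join(self.blocks[d] for d in self.images[name])

    def logical_bytes(self):   # what the users think they consume
        return sum(len(ds) * BLOCK_SIZE for ds in self.images.values())

    def physical_bytes(self):  # what is actually stored
        return len(self.blocks) * BLOCK_SIZE

# Two user images: 100 identical "OS" blocks plus 10 unique per-user blocks
os_part = b"".join(bytes([i]) * BLOCK_SIZE for i in range(100))
alice = os_part + b"".join(bytes([200 + j]) * BLOCK_SIZE for j in range(10))
bob   = os_part + b"".join(bytes([210 + j]) * BLOCK_SIZE for j in range(10))

store = DedupStore()
store.write_image("alice", alice)
store.write_image("bob", bob)

print(store.logical_bytes() // BLOCK_SIZE)   # 220 blocks as seen by users
print(store.physical_bytes() // BLOCK_SIZE)  # 120 unique blocks actually stored
```

The shared OS blocks are stored once, so the common portion of every image is small enough to keep on fast media even though each user logically owns a full image. Real arrays do this in firmware with far more sophistication, but the consolidation principle is the same.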
If SSD is being used in a VDI environment, where would be the best place to implement it?
Brian: At the end of the day, SSD is way faster than magnetic storage. So when you're talking about primary storage that works when the power is off, you're talking about old-fashioned disk drives or SSD, right? There are a lot of caching solutions, like DRAM, and those are performance boosters while your system is running. So if you have storage in your environment, and you have bottlenecks on that storage, moving from magnetic disks to SSD can help alleviate the bottlenecks, but you want to be sure you do it smartly. Because, like I said, if you have a cabinet of 50 magnetic disks powering your whole VDI and it's all slow, you can't just replace those magnetic disks with 50 SSDs. Well, you can, but it would be really, really expensive. It would be better, instead of replacing all 50 magnetic disks, to add in only a few SSDs and also get some smarter, more intelligent SAN software that can do the block-level sharing I'm talking about -- software that manages what is put on SSD and what is put on magnetic disk. So I do like adding some SSD in to make your stuff faster, but again, it's not just a hardware swap; you also need to upgrade the storage software to get the single-instance, block-level storage.
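One way to picture what that tiering software does (the block IDs and access counts below are invented): rank blocks by how often they are accessed and pin only the hottest ones -- typically the shared OS blocks everyone hits at boot and login -- to the small SSD tier, leaving cold, per-user data on magnetic disk.

```python
from collections import Counter

def plan_tiers(access_log, ssd_capacity_blocks):
    """Place the most frequently accessed blocks on SSD, the rest on HDD.

    access_log: iterable of block IDs, one entry per read or write.
    """
    freq = Counter(access_log)
    ranked = [blk for blk, _ in freq.most_common()]
    ssd = set(ranked[:ssd_capacity_blocks])  # hot blocks fit the SSD budget
    hdd = set(freq) - ssd                    # everything else stays magnetic
    return ssd, hdd

# A handful of shared OS blocks dominate the I/O during a boot storm
log = ["os1"] * 50 + ["os2"] * 40 + ["userA"] * 3 + ["userB"] * 2
ssd, hdd = plan_tiers(log, ssd_capacity_blocks=2)
print(sorted(ssd))  # ['os1', 'os2']
print(sorted(hdd))  # ['userA', 'userB']
```

Real SAN tiering engines track heat continuously and migrate blocks over time, but the decision they make is essentially this frequency ranking under an SSD capacity budget.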
What are the main problems for shops that have already switched to SSD and are trying to improve performance?
Brian: The main shops that have switched over to SSD, unless they … are just sitting on piles and piles of money, probably did not switch out all their hard drives for SSD, because it's so expensive. So usually the people who have already brought some SSD in are only using a little bit of SSD here and there, which, again, is what I like. But [the problem arises] if they're bringing in a little SSD [and combining it with] that disk image sharing I'm talking about. Because some people say, "We're going to have all our users share a disk image, and we'll put that shared image in SSD. That way, we only have to buy a little bit of SSD for that one shared image." But as I've said, I don't think VDI works with shared images. … So a lot of companies are finding that SSD has failed them, because in order for SSD to work, they had to have these shared disk images. And I say, "No, you have to go off the shared disk images," and they say, "Well, we can't do that because we can't afford the SSD." So that's where I feel people have been using SSD just to accelerate the shared images, when they really need to look at upgrading the software so that they can have VDI instances where every user has their own image instead of all the users sharing one. The biggest problem is that, in order to make it work, they needed to use disk image sharing, and their environment is failing not because of SSD; their environment is failing because disk image sharing doesn't work in their company. And so I say we have to move beyond disk image sharing and move to all the users having their own image, but to do that you can't use your current SSD architecture alone. You have to use your current SSD plus some smarter SAN software that can actually make this happen.
What's the most important piece of advice you have for IT shops that are considering SSD for VDI?
Brian: Make sure your solution works with disk images where every user can have his or her own disk image, and once you do, [that] dictates what you need out of your storage system in terms of capacity, in terms of I/O performance, that sort of thing. So I like to [advise that] as you design this, you plan for every user having their own disk image and then see how you design your storage around that.
The other piece of advice I give people is that a lot of these people who are selling VDI give guidance. … Take IOPS, for example. … [People ask,] "If I have 100 desktops, how many IOPS do I need?" And the people selling VDI usually make the IOPS number really low, because if they came at you with a really high IOPS number, then [you'd think,] "It's too expensive; I'll never do it." So they say you don't need that many IOPS -- it's a low number that you need. And a lot of people fail with VDI because they make assumptions like they only need seven IOPS per desktop for VDI, whereas in [actuality], magnetic drives have about 50 IOPS built in. Even your laptop today without SSD has [about] 50, and yet a lot of users like to put SSD in their laptops to make them faster. So I say, if 50 IOPS on your laptop is not good enough, and you're going to [put] SSD on top of that for your laptop, then why do you think seven IOPS per desktop makes sense in a data center? You probably need 50 or more in the data center as well. So that's the other piece of advice I would [give]: look at your actual laptop before you go to VDI, look at how many IOPS you have on it, and use that as your baseline for the way you design your storage solution for VDI -- because if you design to a low number just to keep your cost down, that's great: you built VDI [cheaply], but then no one uses it because the performance is so horrible.
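Brian's sizing argument can be worked through with simple arithmetic. The per-desktop and per-spindle figures below just echo the numbers he cites (7 vs. 50 IOPS, roughly 50 IOPS per magnetic drive); real workloads vary, so treat this as a back-of-the-envelope sketch:

```python
def required_iops(desktops, iops_per_desktop):
    """Total IOPS the storage must sustain for a given desktop count."""
    return desktops * iops_per_desktop

def spindles_needed(total_iops, iops_per_spindle=50):
    """How many magnetic drives it takes to hit the target (ceiling division)."""
    return -(-total_iops // iops_per_spindle)

# Vendor-style low estimate vs. a laptop-like baseline, for 100 desktops
low  = required_iops(100, 7)    # 700 IOPS -- looks cheap, performs badly
real = required_iops(100, 50)   # 5000 IOPS -- matches what users already have
print(low, real, spindles_needed(real))  # 700 5000 100
```

The gap is the point: sizing to 7 IOPS per desktop yields a system an order of magnitude slower than the laptops it replaces, which is why the baseline should come from measuring the machines users have today.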