Managing storage in a virtual server environment comes with challenges in the areas of efficiency, deployment time and complexity. You have to navigate your way through virtual server sprawl, end user expectations and storage networking choices.
In this podcast interview, Andrew Reichman, principal analyst for Forrester Research, talks about the processes and tools to use to overcome these storage challenges.
SearchVirtualStorage.com: What are some of the challenges with managing storage in a virtualized server environment?
Reichman: I’ll point to three different challenges, and one is efficiency, the second is speed, and the third is lots of choices -- confusion and complexity. [Regarding] the first one, efficiency, we’re seeing a lot of virtual server sprawl; you make it really easy for people to spin up virtual servers, and guess what they do? They ask for lots of them. And if you do your storage environment in the traditional way that you did pre-virtual, you end up burning up lots and lots of capacity. So, keeping the storage environment efficient is a real big challenge, to make sure that you get the benefits you were hoping for, from building the virtual server environment in the first place.
The second one is speed. People’s expectations are super high. We’re in the cloud era. People want to get a virtual server right away, and they really don’t want to wait for you to go buy storage and take six weeks, eight weeks, months, whatever it takes to build out storage the way you used to for big application development projects in the virtual server era. So, making sure that you can turn up those virtual servers very quickly and meet your internal customers’ expectations is really, really important.
The third one is complexity. There are lots of different ways you can build storage for virtual server environments. You can use iSCSI, Fibre Channel, NFS; you can do lots of different network designs and server designs and storage designs. It’s just important to pick a best practice design and stick with it -- make sure you have consistency, you test it out, you understand the business continuity and disaster recovery aspects so that you can keep it safe and reliable. … I think chargeback is a really important thing to make sure that your environment is really consistent and efficient. So, really just having the process to keep things from getting out of control is [really] important.
SearchVirtualStorage.com: What are some storage management best practices to overcome these challenges?
Reichman: I’m going to talk about it again in the same three aspects that I talked about before. From an efficiency perspective, it’s really critical to use both processes that can clean up and make sure you don’t give out too much storage as well as technologies to be efficient. So, from a process perspective, really keeping an eye on what you’re giving out; making sure you have a couple of different gradients -- like a gold, a silver and a bronze -- in terms of performance, reliability and redundancy; making sure you go back after the fact and audit the virtual servers you’ve given out and make sure they’re still in use and have a process to disconnect them and put the storage capacity back in the free pool; keeping it absolutely consistent -- [a] process like that is really going to help with efficiency.
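The audit-and-reclaim process Reichman describes can be sketched in a few lines of code. This is a hypothetical illustration, not a real management tool: the `VmAllocation` record, the field names and the inventory data are all invented for the example; in practice this data would come from your virtualization and storage management tools.

```python
from dataclasses import dataclass
from datetime import date, timedelta

# Hypothetical inventory record; real data would come from your
# virtualization and storage management tooling.
@dataclass
class VmAllocation:
    name: str
    tier: str           # "gold", "silver" or "bronze"
    allocated_gb: int
    last_used: date

def audit(allocations, idle_days=90, today=None):
    """Flag virtual servers idle longer than idle_days and total the
    capacity that could go back into the free pool, per tier."""
    today = today or date.today()
    cutoff = today - timedelta(days=idle_days)
    stale, reclaimable = [], {}
    for vm in allocations:
        if vm.last_used < cutoff:
            stale.append(vm.name)
            reclaimable[vm.tier] = reclaimable.get(vm.tier, 0) + vm.allocated_gb
    return stale, reclaimable

inventory = [
    VmAllocation("web-01", "gold", 100, date(2011, 5, 1)),
    VmAllocation("test-07", "bronze", 250, date(2010, 11, 15)),
]
stale, reclaimable = audit(inventory, idle_days=90, today=date(2011, 6, 1))
```

Running such a check on a schedule is one way to make the "go back after the fact and audit" step a consistent process rather than an occasional cleanup.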
From a technology perspective, tools like thin provisioning; deduplication; [and] wide striping, [which] can allow you to use cheap disk rather than really high-performance storage capacity to satisfy those virtual server requests, [are] going to help you spend less. You know that you’re going to start shifting capacity from disk on board the server to shared storage capacity, which is more expensive and more complicated, so you don’t want to see your costs skyrocket. Thin provisioning is really critical because you’re going to give out lots of gold images -- you don’t want to have a custom storage allocation for every different virtual server. But the reality is that most of those virtual servers aren’t going to consume all the storage capacity you give them. Thin provisioning allows those servers to pull from a common pool and to only use the actual amount of capacity that they’re writing data to, rather than locking up physical storage for each of those.
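The capacity math behind thin provisioning is worth making concrete. The sketch below (illustrative only; the function name and figures are made up) shows how ten servers can each be promised 100 GB while physically consuming only what they have written:

```python
def thin_pool_stats(allocations_gb, written_gb, physical_pool_gb):
    """With thin provisioning, servers are promised (allocated) more
    capacity than they actually write; only written data consumes
    physical disk from the shared pool."""
    logical = sum(allocations_gb)
    consumed = sum(written_gb)
    return {
        "logical_allocated_gb": logical,
        "physical_consumed_gb": consumed,
        "oversubscription": logical / physical_pool_gb,
        "pool_free_gb": physical_pool_gb - consumed,
    }

# Ten virtual servers each promised 100 GB but writing only 20 GB,
# backed by a 500 GB physical pool: 1,000 GB promised, 200 GB consumed.
stats = thin_pool_stats([100] * 10, [20] * 10, 500)
```

The 2:1 oversubscription in this example is exactly the effect Reichman describes: the pool stays far from full even though every server believes it has its full allocation.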
Another key tool for efficiency is snapshots and clones, where you can quickly copy those and give them out. The snapshots and the clones actually [help] out with the second thing -- being quick, being fast. You can spawn off lots of gold images of a virtual server using a writable snapshot, and that’s a great way to keep down the sprawl and make sure that you can turn up virtual servers very quickly and satisfy your customers’ demands.
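A writable snapshot works on a copy-on-write principle, which the toy class below illustrates (this is a conceptual sketch, not how any particular array implements it): every clone shares the gold image's blocks for reads, and only blocks a clone overwrites consume new space.

```python
class WritableSnapshot:
    """Toy copy-on-write clone: reads fall through to the shared gold
    image; writes are stored privately, so spinning up a clone costs
    almost no extra capacity until it diverges."""
    def __init__(self, gold_image):
        self._gold = gold_image   # shared, read-only base blocks
        self._delta = {}          # blocks this clone has overwritten
    def read(self, block):
        return self._delta.get(block, self._gold.get(block))
    def write(self, block, data):
        self._delta[block] = data
    def unique_blocks(self):
        return len(self._delta)

gold = {0: "boot", 1: "os", 2: "apps"}
vm_a = WritableSnapshot(gold)   # two clones of one gold image...
vm_b = WritableSnapshot(gold)
vm_a.write(2, "apps+patch")     # ...and only vm_a's change uses space
```

Because a new clone starts with an empty delta, spawning dozens of virtual servers from one gold image is nearly instant and nearly free in capacity terms.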
Finally, it’s that process and complexity. I really encourage people to think about the whole company and build one consistent virtual server infrastructure on one storage architecture, one network architecture, one server architecture and one version of the virtual server technology that you’re using. Keep it consistent. It can get out of control so quickly if you have lots of different versions and lots of different hardware floating around the environment. Really figure out what the requirements are and keep it simple, keep it consistent.
SearchVirtualStorage.com: How much of a factor are the different storage architectures in creating storage challenges in virtualized server environments?
Reichman: Virtual servers create a lot of new challenges that we didn’t have back in the physical server era. In the physical server era, things were fairly static. You had one server with one HBA that goes on, maybe, two redundant paths to storage -- and that’s it. With virtual servers, things are much more dynamic. You’re turning servers up and tearing them down much more quickly; the laborious process of zoning in a traditional Fibre Channel environment can really take a lot of time, and it’s not that well-suited to the dynamic nature of virtual server traffic. Thinking about NFS or iSCSI can make things much easier. Some of the testing results from VMware [show] that you really get the same performance out of Fibre Channel, or iSCSI, or NFS. So that’s one architectural thing I would really encourage people to think about: Which network protocol do you want to use? Do some testing. Do some evaluation. Talk to some reference customers that are using it in a number of different ways. The reason why people needed Fibre Channel performance in the past might not be that relevant for virtual server environments, and you might be adding complexity, reducing flexibility and adding cost that’s really not justified. So that’s one thing [to] think about: Can you get this done with Ethernet, cheaper and faster?
SearchVirtualStorage.com: How are hypervisor vendors addressing storage management challenges?
Reichman: I kind of touched on it before with that integration [discussion]. There are a lot of APIs that allow the hypervisor software to reach into the storage and call features that are built into the storage. This allows you to simply give space out to the hypervisor and then let the virtual server administrators do self-service, build their own environment. You can build some templates and allow them to just grab the storage that they need, and that speeds things up a lot.
And that can be risky too. You’re giving a lot more freedom and control to non-storage experts, so you really want to make sure those templates and those gold images are built properly; that the tools they’re calling -- like thin provisioning, the different performance choices -- are well-architected from the start and work effectively; [and] that you bought the right storage that allows for rapid provisioning and efficient usage. That’s going to matter a lot.
That’s really the approach that a lot of the hypervisor companies are taking -- rather than build it themselves, they’re giving APIs that can call the native storage functionalities. So it’s really incumbent on the storage team and the storage purchase decision makers to make sure that they’re building the right environment that’s going to work well when it interacts with those APIs. But if you do it right, it can really make a great orchestrated environment where everything works together and you’re not having lots of handoffs that slow the process down.
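The template-driven self-service model Reichman describes can be sketched as below. This is a hypothetical illustration: the template names, attributes and the `provision` function are invented for the example, and real hypervisor storage integration APIs differ in their details. The point is the division of labor -- the storage team defines the catalog once, up front, and admins request from it with no manual handoff.

```python
# Hypothetical catalog the storage team defines in advance; each
# template bundles the storage features the admin would otherwise
# have to request by hand.
TEMPLATES = {
    "gold":   {"thin": True, "replicated": True,  "tier": "fast"},
    "silver": {"thin": True, "replicated": False, "tier": "fast"},
    "bronze": {"thin": True, "replicated": False, "tier": "cheap"},
}

def provision(vm_name, template, size_gb, catalog=TEMPLATES):
    """Build a provisioning request the storage layer can act on;
    refuses anything outside the pre-approved catalog, which is how
    the storage team keeps self-service from getting out of control."""
    if template not in catalog:
        raise ValueError(f"unknown template: {template}")
    request = dict(catalog[template])
    request.update({"vm": vm_name, "size_gb": size_gb})
    return request

# An admin self-serves a silver-tier virtual server in one call.
req = provision("web-42", "silver", 100)
```

Because every request flows through the same small catalog, the environment stays consistent even though the storage team is no longer in the loop on each allocation.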