The benefits of efficient data storage go beyond saving money: you can free up physical space in your data center and reduce power and cooling costs. But despite the advantages of the technology, many people haven’t implemented storage efficiency tools and strategies because the process can be complex.
In this podcast interview, George Crump, lead analyst at Storage Switzerland, discusses how to build an efficient data storage environment. We explore the main areas you should consider when it comes to efficient data storage; provide you with information on built-in vs. third-party tools; help you understand the roles of data deduplication and compression; and present the latest developments in storage efficiency tools. Finally, discover the most important steps you should take to improve your storage efficiency. You can read George's answers below or download the MP3.
SearchStorage.com: Storage efficiency covers a lot of ground. What are the chief areas we should be thinking about when it comes to efficient data storage?
Crump: The big thing is to understand why you’re trying to be more efficient. There are a lot of ramifications in storage efficiency, so part of it is the immediate thing, which is that people get focused on how much disk is sitting idle or how much storage [they’re] buying that [they’re] not using. But there are also other ramifications like power/cooling as well as the time to provision and manage that storage. So it’s a broader concept than just how many dollars you can save.
SearchStorage.com: According to a recent Storage magazine survey, a lot of disk capacity is still wasted among companies. Why do you think that is?
Crump: I think a lot of that started because disk was cheap, and it was easier and felt faster to throw more storage at the problem. We’re now reaching a point where disk is continuing to get less expensive, but it’s not at the rate it used to be. Plus we’re running into the challenge where we’re running out of data center floor space, running out of electrical [units and so on], and unless people start building nuclear power plants in their data centers, we have to do something. Part of it was just the ease of adding storage, and now it’s the complexity of becoming efficient.
SearchStorage.com: Let’s talk about built-in vs. third-party tools. How do you decide if you need a third-party tool?
Crump: From an analysis and discovery perspective, that’s how we look at built-in vs. third-party tools. In general you’ll probably receive something from your storage manufacturer or whoever you bought your storage from -- there’s usually a default tool that came with the product. Those are fine because they’re inexpensive or free, and they certainly do an effective job of managing that storage system. But their challenge is that they’re generally isolated to just that storage system, whereas in most environments you have more than one storage system, switches, virtual machines . . . all these different things that impact storage. So that’s the advantage of the third-party tool. Typically, it provides you with a much broader view of the environment. And we’ve even seen manufacturers purchase a broader tool for that purpose.
SearchStorage.com: What about solutions that allow you to put more data in the same space like deduplication and compression?
Crump: We look at the tools question as “How do I discover what I have?” But there are appliances that allow you to store more data in the same or maybe even less space. There are two different types today: compression appliances and deduplication appliances/techniques. And [how] to decide between those depends on your use case. Are you using NFS or a SAN? What type of data are you storing? One of the big things I think people forget a lot with deduplication is that dedupe by its nature only works well if there’s redundant data. So if you have an environment that doesn’t have a lot of redundant data, then it’s not going to work for you very well. But for example, in the VMware use case and, to a lesser extent, home directories, you can get very, very good results with deduplication and compression.
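Crump's point that dedupe only pays off on redundant data can be illustrated with a toy fixed-block sketch. (Real deduplication appliances typically use variable-length chunking and far more sophisticated indexing; the 4 KB block size and SHA-256 hashing here are illustrative assumptions, not a description of any product.)

```python
import hashlib

def dedupe_ratio(data: bytes, block_size: int = 4096) -> float:
    """Split data into fixed-size blocks and count unique blocks by hash.
    Returns unique blocks / total blocks: lower means bigger dedupe savings."""
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    unique = {hashlib.sha256(b).hexdigest() for b in blocks}
    return len(unique) / len(blocks)

# Highly redundant data (think cloned VMware guest images) dedupes well:
redundant = b"A" * 4096 * 100
print(dedupe_ratio(redundant))  # 0.01 -- only 1 unique block in 100

# Data with no repetition sees no benefit at all:
varied = b"".join(i.to_bytes(4096, "big") for i in range(100))
print(dedupe_ratio(varied))  # 1.0 -- every block is unique
```

This is why the use case matters: a VMware datastore full of near-identical guest OS images behaves like the first case, while already-compressed media files behave like the second.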
SearchStorage.com: What are the latest developments in built-in storage efficiency tools and techniques?
Crump: I think the biggest development we’re seeing is the integration into the VMware vCenter console, and better exposure of the virtual machines. So instead of reporting on what the host is doing, more tools are able to report on what the specific virtual machine is doing. And from a capacity or efficiency standpoint that’s essential, because now you can begin to look at how much capacity particular virtual machines have assigned to them and how much they’re actually using.
SearchStorage.com: When talking about storage efficiency, what are the most important metrics to look at: data footprints, CPUs, what?
Crump: I would say the first thing is to understand what kind of data you have, and the tool we discussed earlier is really essential in giving you that. Then, once you have that inventory in hand, you can determine what’s going to be the most effective approach from a capacity utilization standpoint. From there, find out what’s hurting you the most. If you’re running out of data center floor space, finding out how to shrink that becomes very important. If you’re running out of power or cooling, being able to shrink that becomes important.
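The discovery step Crump describes is what storage resource management tools automate. As a minimal sketch of the idea (a real SRM tool spans arrays, switches, and VMs; this toy only walks a local directory tree and totals bytes by file extension):

```python
import os
from collections import Counter

def capacity_by_type(root: str) -> Counter:
    """Walk a directory tree and total file sizes by extension --
    a crude stand-in for the inventory step a real SRM tool performs."""
    totals: Counter = Counter()
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            ext = os.path.splitext(name)[1].lower() or "<none>"
            try:
                totals[ext] += os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                pass  # skip files that vanish or are unreadable mid-scan
    return totals

# Example: the top consumers tell you where compression or dedupe
# (or simple cleanup) will have the biggest payoff.
for ext, size in capacity_by_type(".").most_common(5):
    print(f"{ext}\t{size} bytes")
```

Knowing whether your capacity is dominated by VM images, databases, or home-directory files is exactly what lets you pick the right optimization technique, as discussed above.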
SearchStorage.com: Can you list the two most important steps people could take right away to improve their storage efficiency?
Crump: I think I’ve kind of hinted at the first one. I’m a big believer that you don’t know what to do if you don’t know what you have. So getting that initial assessment of what you have and your assets is critical. Then once you have that, the next step is to quickly implement one of the more basic optimization products like deduplication and compression, so you can take care of the easy stuff before you move on to some of the more challenging storage efficiency projects.