Benefits of software-defined storage architecture: Time for consensus
A comprehensive collection of articles, videos and more, hand-picked by our editors
Is software-defined storage (SDS) a step forward for IT? That depends. In theory, SDS management is simplified by separating the storage programming from the physical hardware. For many, the goal of SDS is to provide flexibility by eliminating the constraints of a physical system.
But does it work?
In his Storage Decisions keynote, Jon Toigo discussed SDS technology and its potential for today's IT managers. He also revealed some SDS management snags. "First of all, we know that software-defined technology is inherently supposed to deliver something called atomic units of technology," said Toigo, explaining that this should simplify SDS management and allow the necessary storage to be rolled out with agility.
However, while it's meant to be agile and resilient, there are some hidden snags to an SDS infrastructure. "We're all told that we're supposed to simplify the way that we provision resources to business applications," said Toigo. "Can we quickly allocate resource[s] to a particular line of business? Does [SDS] make it possible for us to do that?"
Toigo pointed out that customers who switch hypervisors, which can be used in SDS architectures to manage multiple operating systems on shared hardware, sometimes leave some applications on VMware, for example, and move others to another system.
Jon Toigo, Toigo Partners International
"So now, we're dealing not just with one software stack and its storage; we're dealing with multiple hypervisor software stacks and their storage," said Toigo.
Between multiple hypervisors and mission-critical applications that organizations won't risk virtualizing, SDS management can easily become more complicated. "Now your world just got more complex," said Toigo. "You now have three management problems, not one."
Transcript - SDS management pros and cons
Editor's note: The following transcript has been edited for clarity and length.
First of all, we know that software-defined technology is inherently supposed to deliver something called atomic units of technology. So, it will be very simple for us when management comes to us and says, "We need 100 more seats of an ERP. We know what's required from a processor standpoint, how much network bandwidth is required, and how much storage capacity is required -- based on [the] profile of that application -- to support that particular requirement."
We should be able to roll that out at light speed. Sounds good, right? Even [a] virtualization administrator can do it, [one] who's never heard of tape. Actually getting there is a little more complex than that, and this is what your vendor never tells you.
Basically, on the left-hand side, is a study done at the end of the year by Veeam. It was commissioned [by] Veeam, and it was basically a hypervisor preferences [study]. Interestingly, of the 587 companies or so that were interviewed, 38% said they were planning on switching hypervisors this year. [When] they asked them why, they said, "Because the license costs on the hypervisor that we're using are going straight through the roof, and we're not paying that kind of money for a … hypervisor."
I said, "Does that mean your infrastructure is going to change from hypervisor A, probably VMware, which is the big fish in a small pond of hypervisors? Or is it going to be B, Hyper-V or whatever?" Most of [the respondents] were switching off of VMware. But they weren't doing so completely. They need some applications on VMware, and some applications found in Hyper-V, KVM or whatever they're going to change to.
So now, we're dealing not just with one software stack and its storage, [but] with multiple hypervisor software stacks and their storage. Plus, if you look at these numbers from IDC and Gartner -- which is the little pie chart on the side -- you have 29% of your server infrastructure that's supporting between 69% and 75% of your x86 workload, which is now virtualized. That's what [respondents are] anticipating for 2016 -- that's what the world will look like. Twenty-nine percent of your servers are going to be supporting that 75% of all your software that's gone into virtual machines.
What's on the other 71% of your server infrastructure? It's the 25% of your applications that you wouldn't dare virtualize because you're doing just fine: high-performance transaction processing systems, big databases that operate at the best speed you can possibly get out of them -- they would be nothing but impaired if you put them in virtualization. So you keep them raw, you run them on bare metal.
Now, you have at least three infrastructures to support: a Hyper-V, KVM, Citrix or whatever infrastructure; the VMware infrastructure, which is the big fish in the small pond; and the physical one. Now your world just got more complex. It didn't make it easier for you to deploy these resources; it made it more difficult. And it didn't make it less costly. You now have three management problems, not one.
Where did this idea come from that this stuff is going to save money or make your world more efficient? I don't know. If you look at this chart on the bottom, these are the typical steps required to provision storage. The request comes in, and we're going to need some storage for this new application. We're rolling out on day one; by day 15, if you're lucky -- and you have the resources and the inventory -- you will finally have provisioned the storage. It's a two-week process.
Ideally, when you get to some sort of atomic units of storage, you're supposed to be able to cut that down. Your application environment should be an atomic unit that you can just readily provision. Are we there yet? That's what my kid says whenever we're driving to Disney World: "Are we there yet?" And that's what your management is going to start sounding like, a bunch of 8-year-olds. [So are we there yet?] No, we're not.