What can software-defined architecture do for me?

Software-defined architecture might seem like a recent arrival in the mainstream, but according to expert Jon Toigo, it's been right under our noses all along.

Despite conflicting definitions from software-defined storage vendors, Toigo says the technology, in simple terms, extracts the intelligence normally found on an array controller and places it in the hypervisor stack to distribute that functionality more evenly.

"The point is, that's been an objective for a very long time," he said.

That doesn't mean the technology is without benefits -- the first of which is high availability. In a software-defined storage environment, multiple nodes store copies of data to protect against loss in the event of a failure. Software-defined architecture can provide simpler management as well, or as Toigo puts it, "one throat to choke." Software-defined systems are most often sold in pre-packaged nodes for easier deployment while providing interfaces for managing all the components. A final benefit is low up-front cost: software-defined architectures provide many of the features of name-brand, more expensive storage arrays, which means administrators can pair the software with commodity hardware instead.
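
To make the high-availability idea concrete, here is a minimal Python sketch of the replication pattern described above: every write lands on several nodes, so a read can be served from any surviving copy. The names (StorageNode, REPLICA_COUNT) are hypothetical illustrations, not any vendor's API.

```python
REPLICA_COUNT = 3  # hypothetical default: keep three copies of each block

class StorageNode:
    """One commodity node in a hypothetical software-defined storage pool."""
    def __init__(self, name):
        self.name = name
        self.blocks = {}   # block_id -> data
        self.alive = True

def replicated_write(nodes, block_id, data):
    """Write a block to REPLICA_COUNT healthy nodes so one failure loses nothing."""
    healthy = [n for n in nodes if n.alive]
    if len(healthy) < REPLICA_COUNT:
        raise RuntimeError("not enough healthy nodes to meet the replica count")
    for node in healthy[:REPLICA_COUNT]:
        node.blocks[block_id] = data

def read_block(nodes, block_id):
    """Serve the read from any surviving replica."""
    for node in nodes:
        if node.alive and block_id in node.blocks:
            return node.blocks[block_id]
    raise KeyError(f"all replicas of {block_id} are gone")

# A single node failure does not interrupt reads:
pool = [StorageNode(f"node{i}") for i in range(4)]
replicated_write(pool, "block-1", b"payload")
pool[0].alive = False
assert read_block(pool, "block-1") == b"payload"
```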

But Toigo reminds those interested to take the benefits with a grain of salt. Administrators of environments with more than one hypervisor might find they aren't completely relieved of complexity. And of the low up-front cost, Toigo said: "Acquisition cost is approximately one-fifth -- if you're looking at Gartner numbers -- of what the overall cost of ownership is in storage. Every year the cost of administration management is five or six times the cost of what the acquisition was."

Transcript - What can software-defined architecture do for me?

There are so many definitions of software-defined storage, Jon. Give us your best one.

Jon Toigo: Direct-attached storage.

Really?

Toigo: Yes, simply put. The definitions that you hear from the vendors are all over the board, and they're always fashioned to support their specific model or their offerings in the product area. Software-defined storage, in theory, is the placement of the intelligence that's normally on an array controller instead in the hypervisor stack on a server when you virtualize a server. There are many ways to do that and we could talk about those if you want, but the point is that's been an objective for a very long time. The question is, do you put the storage somewhere where it is isolated behind the server, where that capacity is only available to the hypervisor, or do you share it out of a common pool somewhere? And unfortunately, what software-defined storage has been joined at the hip with is the concept of server-side [storage], which is direct-attached storage. So, to me, the only differentiator between that and what we were going for with the SAN originally -- which was also surfacing all the chewy goodness software off the array controller into some centralized repositories so we could allocate those services a little more intelligently -- is that you're putting the physical infrastructure behind the server in a direct-attached configuration.

Now, tell us on its best day, implemented the right way, what can software-defined storage do for us?

Toigo: Well, if you listen to the advocates and the evangelists talk about it, this is very much in line with the way that we did supercomputing using Linux Beowulf clusters for example, where you stand up lots and lots of nodes of compute with their associated storage and if one of the nodes falls off there's always a high-availability program where another node has a copy of that data. It can pick up the load, or is already sharing the workload, and we just shift all the load over to it. So the high availability story is the primary advantage that goes with this.
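As a rough sketch of the failover behavior Toigo describes -- a standby node that already holds a copy of the data picks up the work when the primary drops out -- here is a minimal illustration. The node and workload names are made up for the example.

```python
# Hypothetical cluster map: each workload has a primary node and a standby
# node that already holds a copy of its data.
workloads = {
    "db":  {"primary": "node1", "standby": "node2"},
    "web": {"primary": "node2", "standby": "node3"},
}

def handle_node_failure(failed_node, workloads):
    """Shift every workload whose primary failed onto its standby."""
    for name, placement in workloads.items():
        if placement["primary"] == failed_node:
            placement["primary"] = placement["standby"]
            # A real system would also re-replicate to restore redundancy.
            print(f"{name}: load shifted to {placement['primary']}")

handle_node_failure("node1", workloads)  # db now runs on node2
```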

Also, theoretically, it should drive down storage costs, because now you can go with generic gear instead of buying name-brand gear -- all the chewy goodness software is what the name-brand guys are usually selling when they sell you their kit, and that inflates the price. There was a vendor a few years ago [that] came out with a deduplicating disk array; it used $379 disks, $3,000 worth of hardware, and the list price for the product was $410,000 because of the value-added software that was on the array controller. So imagine if you could take that software, throw it up in a central location and use it indiscriminately with any disk that you wanted to. That would probably be a more cost-effective use of that technology. And disk is disk. I mean, everybody's selling a box of Seagate hard drives, so where is the differentiator there?

So anyway, in concept the whole idea is great, but I want to clarify one thing that was in the premise of your lead-up question: The long-term cost of storage is its management, not the hardware. Acquisition cost is approximately one-fifth -- if you're looking at Gartner numbers -- of what the overall cost of ownership is in storage. Every year, the cost of administration and management is five or six times what the acquisition cost was. So I don't really care as much about acquisition. If I'm stretching pennies, I'm going to be concerned about it. But what I'm really concerned about is the actual operating expense of the storage.
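
As a back-of-the-envelope illustration of the ratio Toigo cites, here is the arithmetic in a few lines of Python. The dollar figure is invented for the example; the multipliers come from his paraphrase of the Gartner numbers.

```python
acquisition = 100_000  # hypothetical purchase price of a storage array

# Toigo's paraphrase of Gartner: administration and management run five to
# six times the acquisition cost, every year.
annual_admin_low  = 5 * acquisition   # $500,000 per year
annual_admin_high = 6 * acquisition   # $600,000 per year

print(f"yearly admin: ${annual_admin_low:,} to ${annual_admin_high:,}")
# Even cutting acquisition to zero trims only the smallest slice of the
# bill, which is why operating expense dominates the decision.
```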

And I look at the numbers that are out there, and they say by 2016 between 69% and 75% of all workloads will be virtualized, and they will run on 29% of your hardware. What's running on the other 71%? The 25% of your workload that isn't virtualized: high-performance transaction processing systems and databases that are high performance and don't do well under a hypervisor. And these are the revenue generators for your company. So we're taking all the small fry, the applications less directly associated with the revenue of your company, we're virtualizing those on x86 Tinkertoys using hypervisor software and we're going to throw some storage behind those. Except we're probably going to have more than one hypervisor. In most companies these days, they're saying they're not just using VMware; they use Hyper-V, maybe some KVM, maybe some Citrix.

If each one has its own proprietary stack with its own proprietary hardware, what you've just done is multiply the number of objects you're going to have to manage in your infrastructure. You're going to have that big support requirement for the physical inventory of the high-performance transaction stuff, and then you're also going to have all the hypervisors with their dedicated storage. That increases the management burden. It doesn't decrease it. It isolates the storage associated with those hypervisors behind each of them. That makes it more difficult for me to do my job. It doesn't make it easier. I'm going to have to hire more monks because the Xerox machine is broken, for those of you who are old enough to remember that commercial. When the Xerox machine breaks, you've got to hire more monks. Anyway, I find the whole value case behind software-defined to be a real question mark.

I'd like you to drill into specifically how software-defined storage plays into that scenario.

Toigo: Their idea is that you get one throat to choke. If you're doing VMware, and VMware adds another layer to its server virtualization software stack that now does storage virtualization, that will also enable software-defined storage. Then you just connect some pre-certified rigs or some generic gear behind your server that's running VMware, and all the management of that storage is done by VMware. Now, on its face, if you're a homogeneous hypervisor shop [and] you're only doing VMware across all your platforms, it might make sense.

And, of course, we're going back to the days of 1987. I remember I had Sun and I had Microsoft in the same shop. I needed to trade data between them -- this was before there was Samba, which is a program that lets you do that -- and I called up [someone at] Microsoft and said, "How do I trade data between these two platforms?" And he said, "Oh, that's easy. Get rid of all the Sun stuff." Call up VMware and say, "I have Hyper-V and I have VMware, how am I going to exchange data between these two isolated repositories of data behind each of the hypervisors?" And VMware will say, "Oh, that's easy. Get rid of all the Hyper-V and just go to VMware." I don't know that I want to cede to a vendor that kind of control over my infrastructure decisions. I really don't. That's a question mark for me. Other people have no problem with it. They're sold on the VMware value proposition; they're going to go just that route, fine. If you're happy with that, be prepared to deal with the vicissitudes of lock-in.
