
Smarter hardware will make software-defined technology work

Some vendors are making hardware sound like an afterthought, but software-defined technology depends on hardware innovation.


Here's a short quiz:

In a virtualized data center:

A. Storage is the problem

B. Storage is the solution

C. Storage is both the solution and the problem

D. None of the above

It's a trick question -- all of the answers are correct. That's because in most data centers, the bottleneck choking performance can be a moving target, given all the variables involved. Storage vendors are apt to suggest the network is the weak link, while the network crowd is quick to say storage is the sluggard. And both are likely to accuse servers of being the choke points, with all those virtual machines (VMs) keeping the CPU pinned near 100% while draining every last bit and byte of memory.

So, it's kind of an "all of the above" situation, depending on your particular infrastructure, the applications you're running and your performance expectations. Any slightly past-its-prime storage array/application server/network switch could be the culprit, which makes it easy to pin the blame on hardware. If performance is lousy, there must be a clunky bit of hardware behind the slowdown, right?

Well, right or wrong, that's the idea vendors of all stripes have apparently had a lot of success selling us. If there's a problem, hardware is the nemesis: Hardware bad. Software good.

The whole software-defined technology movement is based on that kind of thinking. Put a layer between users, their apps and the hardware, and the problem is solved. Hardware becomes less important -- less of an issue -- and we gain all kinds of flexibility and agility because the software doesn't care about all that hardware toiling away underneath.

I can see how people would want to believe that. Odds are your days are filled with battling both hardware and software. So if you could eliminate one of them -- well, sort of eliminate -- wouldn't life be easier?

Software-defined advocates are likely to argue that adding a new layer of software that puts some distance between you and the hardware simplifies operations, saves money and reduces the reliance on hardware products. To that I say: Maybe, maybe and maybe.

For me, the least-convincing argument for software-defined whatever is the one that seems to be mentioned most often by vendors: "It's the same type of technology Google and Facebook use." Now isn't that convincing? I'm sure your company has about a billion servers like Google and Facebook, a few billion square feet of data center to house them, and a million or so engineers on hand to assemble all the required parts. How many companies even come close to "Web-scale," as the marketers like to say?

The other dent in the software-defined litany is the idea that adding a layer that wasn't there before will solve everything. Sure, it can provide an easier user interface, and maybe eliminate some of the clumsier configuration gymnastics that tend to contort even veteran storage jockeys. But even with a slick top layer added, you'll still have to get under the hood from time to time, so maybe you won't be all that removed from the hardware after all.

But I think the strongest evidence that storage and other hardware isn't about to disappear or become less important is that the whole software-defined thesis -- whether it's storage or networks or servers -- relies on one key condition: that hardware continues to develop and get faster, bigger and better.

We wouldn't be talking about virtualized servers if Intel hadn't cooked up multi-core CPUs at a hyper-Moore's Law pace. Or if networks didn't skip along from 1 Gbps to 10 Gbps to 25 Gbps and 40 Gbps. And it's hard to imagine anything remotely approaching software-defined storage if flash hadn't burst upon the scene a few years ago and then developed into more form factors than we had ever seen before.

Wonder why VMware requires flash in the servers it endeavors to turn into storage arrays with its Virtual SAN product? Maybe it's because without that advanced storage hardware the software-defined storage array might not deliver sufficient performance. And now VMware is trying to bring its software-defined storage to a wider market under the EVO:RAIL moniker by partnering with hardware vendors.

Still, most software-defined storage products are quite limited in the number of nodes and capacity they can provide, as well as in the performance they can deliver. But that will change, because storage hardware is getting better.

And it's not just a matter of the hardware getting faster; it's also getting smarter. Intel is churning out chips tweaked and tuned for specific environments and use cases. Storage, too, is getting smarter. One of the reasons software-defined storage can forsake hardware controllers for software versions is that a lot of that intelligence is now baked into the media, especially solid-state devices.

So it doesn't matter if you think storage is the problem or the solution. Let's just hope storage vendors continue on their development paths and keep making storage devices that get smarter and smarter, because the future of software-defined data centers will rely on intelligent hardware.

About the author:
Rich Castagna is TechTarget's VP of Editorial/Storage Media Group.

This was last published in December 2014

Join the conversation

Rich, so much to agree with! We (Zadara Storage) can say from first-hand knowledge that software-defined storage wouldn't exist if it weren't for advancements in off-the-shelf hardware -- especially CPUs, HDDs, SSDs and networking. I think the point we SDS people are trying to make is that given how good *standard* hardware is, it no longer makes sense to invest in *proprietary* hardware.

As for the additional layer you decry, not all SDS companies do this. Again, take us as an example: we provide access to actual hardware.

It's true that some traditional storage hardware has been proprietary, but much of it has been the same off-the-shelf technology that you refer to. Of course, that varies from vendor to vendor and from system to system. What doesn't vary is that the software is proprietary -- just as it is with software-defined storage -- so the lock-in is still there regardless of what you call the hardware.
