Here's a short quiz:
In a virtualized data center:
A. Storage is the problem
B. Storage is the solution
C. Storage is both the solution and the problem
D. None of the above
It's a trick question -- all of the answers are correct. That's because in most data centers the bottleneck choking performance can be a moving target, given all the variables involved. Storage vendors are apt to suggest the network is the weak link, while the network crowd is quick to say storage is the sluggard. And both are likely to accuse servers of being the choke points, with all those virtual machines (VMs) keeping the CPU pinned near 100% while draining every last bit and byte of memory.
So, it's kind of an "all of the above" situation, depending on your particular infrastructure, the applications you're running and your performance expectations. Any slightly past-its-prime storage array/application server/network switch could be the culprit, which makes it easy to pin the blame on hardware. If performance is lousy, there must be a clunky bit of hardware behind the slowdown, right?
Well, right or wrong, that's the idea vendors of all stripes have had a lot of success selling us. If there's a problem, hardware is the nemesis: Hardware bad. Software good.
The whole software-defined technology movement is based on that kind of thinking. Put a layer between users, their apps and the hardware, and the problem is solved. Hardware becomes less important -- less of an issue -- and we gain all kinds of flexibility and agility because the software doesn't care about all that hardware toiling away underneath.
I can see how people would want to believe that. Odds are your days are filled with battling both hardware and software. So if you could eliminate one of them -- well, sort of eliminate -- wouldn't life be easier?
Software-defined advocates are likely to argue that adding a new layer of software that puts some distance between you and the hardware simplifies operations, saves money and reduces the reliance on hardware products. To that I say: Maybe, maybe and maybe.
For me, the least-convincing argument for software-defined whatever is the one that seems to be mentioned most often by vendors: "It's the same type of technology Google and Facebook use." Now isn't that convincing? I'm sure your company has about a billion servers like Google and Facebook, a few billion square feet of data center to house them, and a million or so engineers on hand to assemble all the required parts. How many companies even come close to "Web-scale," as the marketers like to say?
The other dent in the software-defined litany is the idea that adding a layer that wasn't there before will solve everything. Sure, it can provide an easier user interface, and maybe eliminate some of the clumsier configuration gymnastics that tend to contort even veteran storage jockeys. But even with a slick top layer added, you'll still have to get under the hood from time to time, so maybe you won't be all that removed from the hardware after all.
But I think the strongest evidence that storage and other hardware isn't about to disappear or become less important is that the whole software-defined thesis -- whether it's storage or networks or servers -- relies on one key condition: that hardware continues to develop and get faster, bigger and better.
We wouldn't be talking about virtualized servers if Intel hadn't cooked up multi-core CPUs at a hyper-Moore's Law pace. Or if networks didn't skip along from 1 Gbps to 10 Gbps to 25 Gbps and 40 Gbps. And it's hard to imagine anything remotely approaching software-defined storage if flash hadn't burst upon the scene a few years ago and then developed into more form factors than we had ever seen before.
Wonder why VMware requires flash in the servers it endeavors to turn into storage arrays with its Virtual SAN product? Maybe it's because without that advanced storage hardware the software-defined storage array might not deliver sufficient performance. And now VMware is trying to bring its software-defined storage to a wider market under the EVO:RAIL moniker by partnering with hardware vendors.
Still, most software-defined storage products are quite limited in the number of nodes and capacity they can provide, as well as in the performance they deliver. But that will change, because storage hardware is getting better.
And it's not just a matter of the hardware getting faster; it's also getting smarter. Intel is churning out chips tweaked and tuned for specific environments and use cases. Storage, too, is getting smarter. One of the reasons software-defined storage can forsake hardware controllers for software versions is that a lot of that intelligence is now baked into the media, especially solid-state devices.
So it doesn't matter if you think storage is the problem or the solution. Let's just hope storage vendors continue on their development paths and keep making storage devices that get smarter and smarter, because the future of software-defined data centers will rely on intelligent hardware.
About the author:
Rich Castagna is TechTarget's VP of Editorial/Storage Media Group.