I thought I was going to learn all about servers, but I got a lesson in storage instead.
I attended the recent Open Server Summit conference down in Silicon Valley, figuring that it was a great opportunity to become a little smarter about server architectures and designs. There was no shortage of new stuff for me to try to cram into my personal nonvolatile memory (read: brain), but I was surprised that so many of the new developments in server technology were related to storage. In fact, if I closed my eyes and imagined that I was at the Open Storage Summit instead, the presentations I heard on servers and storage would've made just as much sense.
At first, I thought it was a little odd that so much of the talk at a server conference was about storage, but the reasons sank in pretty quickly: convergence and solid-state. Because convergence is so tightly linked to the abstraction of controlling software from physical devices such as servers and storage, it's ironic that that decoupling actually puts greater focus on the hardware in a number of ways. Pretty much all of the elements that make convergence work -- automation, agility, low latency and so on -- require some pretty sophisticated hardware underneath that convergent upper layer.
Commodity hardware has value
In the software-defined everything world, the hardware -- regardless of whether it's storage, server or networking gear -- is often referred to as "commodity" stuff. Webster's online dictionary defines commodity as: 1) "something that is bought and sold" and 2) "something or someone that is useful or valued."
Everything gets bought and sold, so that part of the definition doesn't shed any new light on convergence technologies, but the second one is spot on. The way "commodity" gets tossed around in convergence conversations, you might think that the label meant just the opposite -- something undistinguished and pretty unimportant. I understand that some champions of converged architectures feel a need to emphasize -- maybe over-emphasize -- the importance of the new, more powerful software layer. So, maybe, by trivializing the hardware, they think the software will stand out even more.
Personally, I think that's a misguided and potentially misleading approach. Not relying on proprietary hardware doesn't mean that you don't need a sophisticated, reliable, high-performance, scalable (and so on) assemblage of hardware products to bolster the software. You could have the greatest software in the world, but if it's running on creaky kit, it won't seem all that great. Look, all IT hardware has always been software-defined; the latest wave is just another step in reducing the need for proprietary hardware tweaks.
Another reason why there was so much talk about storage -- and networking for that matter -- at a server conference is that, as we rely more on the software than on hardware hacks, it makes it easier to bring servers and storage and networks closer and closer together. And in the world of IT, close is good. It's getting harder and harder to talk about one of these data center pillars without also bringing the other two into the conversation.
Think hyper-converged infrastructure.
Flash storage crucial to convergence
But there is a "pure" storage technical development that -- in my not-so-humble opinion -- has been one of the key motivators for the software-defined data center movement: NAND flash. Solid-state storage hasn't just accelerated storage systems; it has also enabled a variety of architectures involving servers and storage that provide the flexibility and agility to leverage software to meet the requirements of a slew of different use cases. Flash, along with multicore processors, makes it possible to build converged systems that can offer improved performance without the need to cobble together special hardware devices. Flash can take up the performance slack of general-use hardware, which makes using those "commodity" parts feasible.
So what's really happening in the software-defined realm is that the "commodity" hardware is getting more and more sophisticated and efficient, and is therefore able to do what other proprietary parts used to do. That means "proprietary" is getting shoved down the stack to the component level. Once again, the beauty of software-defined is being bolstered by hardware.
About the author:
Rich Castagna is TechTarget's VP of Editorial.