Hyper-converged infrastructure options simplify virtual environments
A comprehensive collection of articles, videos and more, hand-picked by our editors
Convergence -- the bundling of storage, compute, network and virtualization -- is already evolving with new products that redefine ease of use.
When the concept of convergence came on the scene a couple of years ago, I wrote a column expressing my concern that if convergence were taken too far, we could return to the proprietary days of the 1970s. Back then, a customer bought everything -- compute, network, storage, database and, in most cases, applications -- from a single vendor. And you were locked into that vendor practically forever. Open systems changed that, and a more piecemeal approach arose over the intervening years. EMC built its multibillion-dollar business on selling best-of-breed storage that attached to servers from Dell, HP, IBM or any other vendor. That approach has served the market well for three decades.
Convergence continues to take shape in the market, with offerings from Cisco, Dell, EMC, HP, IBM, NetApp and VMware, just to name some of the major players. To be fair, these integrated systems have brought simplicity to the purchasing process and improved time to value. I also think they’ve simplified management with a single-pane-of-glass console.
But a new concept has appeared on the horizon, one that some might lump into the convergence category but that is, in fact, different. At Taneja Group we call this concept hyperconvergence. According to our analysis, only three companies currently fit into this category: Nutanix, Scale Computing and SimpliVity. So how is hyperconvergence different, what is its value and who benefits?
Hyperconvergence blurs the lines between compute, network, storage and data protection, and all but creates an "infrastructure in a box," particularly for small to midsized enterprises with limited IT expertise. Let's look at the drivers bringing this concept to life.
Compute power is so inexpensive today that most storage arrays have plenty of it to spare after storage processing is satisfied. There's no point in wasting this power, especially when many unstructured data analytics workloads perform best with compute and data close together, not separated by latency-inducing networks. Virtualization is now a well-understood concept for far more technologies than the server, and storage virtualization can eliminate the need for a traditional SAN. Storage functionality is now easily defined in software and can itself run as a virtual machine (VM). Finally, inexpensive flash technologies that can be installed right next to compute and still act like storage fundamentally change the availability of IOPS. Map all these drivers against the fact that most users want to manage their entire IT environment through a VM lens -- rather than separately for storage, compute, applications and so on -- and you can see why hyperconvergence is here to stay.
Hyperconvergence is much more than interoperability-certified units; it's a true amalgamation of compute, storage, networks and data protection. Imagine being able to apply a QoS policy to a VM and then have the system manage everything else. The entire unit is purchased in chunks, with the first chunk sufficient to deliver the QoS you need for the initial set of applications. Then you add more chunks as you add more VMs. You don't know or care that the new chunk contains compute and storage, or whether the latter is DAS or something else. You only look at the external capability of this "infrastructure in a box."
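To make the "buy in chunks" model concrete, here is a minimal sketch of the scale-out behavior described above. It is purely illustrative: the node names, IOPS and capacity figures are invented for the example and do not describe any real vendor's product. The operator states only a VM's QoS requirement; the cluster decides when another chunk is needed.

```python
# Hypothetical model of hyperconverged scale-out: each "chunk" (node)
# bundles compute and storage, and nodes are added only when the
# aggregate QoS demand of the VMs outgrows the current cluster.
from dataclasses import dataclass, field

@dataclass
class Node:
    """One appliance chunk: bundled compute and storage (made-up figures)."""
    iops: int = 50_000          # assumed IOPS one chunk can deliver
    capacity_gb: int = 4_000    # assumed usable storage per chunk

@dataclass
class Cluster:
    nodes: list = field(default_factory=list)
    vms: list = field(default_factory=list)   # (name, qos_iops, size_gb)

    def free_iops(self):
        return sum(n.iops for n in self.nodes) - sum(v[1] for v in self.vms)

    def free_gb(self):
        return sum(n.capacity_gb for n in self.nodes) - sum(v[2] for v in self.vms)

    def add_vm(self, name, qos_iops, size_gb):
        # The operator only declares the VM's QoS; the system adds
        # chunks until the requirement fits, then places the VM.
        while self.free_iops() < qos_iops or self.free_gb() < size_gb:
            self.nodes.append(Node())
        self.vms.append((name, qos_iops, size_gb))

cluster = Cluster()
cluster.add_vm("db01", qos_iops=30_000, size_gb=1_000)   # forces the first chunk
cluster.add_vm("web01", qos_iops=30_000, size_gb=500)    # forces a second chunk
print(len(cluster.nodes))   # 2
```

The point of the sketch is the inversion of responsibility: capacity planning happens inside `add_vm`, so the administrator reasons about VMs and their QoS, never about arrays, LUNs or individual servers.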
The three offerings currently available are very different from each other. Nutanix and SimpliVity assume the presence of VMware, whereas Scale Computing -- designed for the somewhat smaller customer that may not have gone down the path of server virtualization yet -- is built around KVM. For this class of customer, the new benefits of server virtualization will come on top of the benefits of hyperconvergence itself. Given its founders' background, SimpliVity started with data deduplication and compression, ensuring that data is collapsed to its smallest form inline, at the time of creation, without any performance impact. Nutanix, on the other hand, doesn't have built-in dedupe capability. There are other differences, such as cloud support on the back end and the use of the box as storage for existing servers. But outside these differences there are more than enough conceptual similarities.
Just when we think an area of computing has stabilized, something changes so fundamentally that we have to rethink how a job should be done or how a business should be run. In the last decade, we’ve seen many such fundamental paradigm shifts. Keep an eye on this last one, as it’s almost certain to affect you.
BIO: Arun Taneja is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.