But a new concept has appeared on the horizon, one that some might lump into this convergence category even though it's fundamentally different. At Taneja Group we call this concept hyperconvergence.
Hyperconvergence blurs the lines between compute, network, storage and data protection, creating what amounts to an "infrastructure in a box," particularly for small and midsized enterprises with limited IT expertise. Let's look at the drivers bringing this concept to life.
Compute power is so inexpensive today that most storage arrays have plenty of it to spare once storage processing needs are met. There's no point in wasting this power, especially when many unstructured data analytics workloads perform best with compute and data close together rather than separated by latency-inducing networks. Virtualization is now a well-understood concept for far more technologies than the server, and storage virtualization can eliminate the need for a traditional SAN. Storage functionality is now easily defined in software and can itself run as a virtual machine (VM). Finally, inexpensive flash technologies that can be installed right next to compute and still act like storage fundamentally change the availability of IOPS. When you combine these drivers with the fact that most users want to manage their entire IT environment through a VM lens -- rather than managing storage, compute, applications and so on separately -- you can see why hyperconvergence is here to stay.
Hyperconvergence is much more than interoperability-certified units; it's a true amalgamation of compute, storage, networks and data protection. Imagine being able to apply a quality of service (QoS) policy to a VM and then have everything else managed by the system. The infrastructure is purchased in chunks, and the first chunk is enough to deliver the QoS you need for the initial set of applications. Then you add more chunks as you add more VMs. You don't know or care that the new chunk contains compute and storage, or whether the latter is DAS or something else. You only look at the external capability of this "infrastructure in a box."
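To make the model concrete, here is a minimal sketch in Python of the idea described above: the administrator declares per-VM QoS requirements, and the system handles placement across appliance "chunks" that bundle compute and storage. This is not any vendor's actual API; all class names, fields and resource figures are hypothetical, and a real system would balance many more dimensions.

```python
# Hypothetical model of chunk-based hyperconverged scaling with per-VM QoS.
# Not a real product API -- a conceptual illustration only.

from dataclasses import dataclass, field

@dataclass
class QoSPolicy:
    min_iops: int       # storage performance floor the VM requires
    vcpus: int          # compute the VM requires
    capacity_gb: int    # storage capacity the VM requires

@dataclass
class Chunk:
    """One appliance 'chunk': compute and storage bought as a single unit."""
    iops: int
    vcpus: int
    capacity_gb: int
    vms: list = field(default_factory=list)

    def fits(self, qos: QoSPolicy) -> bool:
        used_iops = sum(v.min_iops for v in self.vms)
        used_vcpus = sum(v.vcpus for v in self.vms)
        used_cap = sum(v.capacity_gb for v in self.vms)
        return (self.iops - used_iops >= qos.min_iops
                and self.vcpus - used_vcpus >= qos.vcpus
                and self.capacity_gb - used_cap >= qos.capacity_gb)

class Cluster:
    """The admin sees one pool; placement across chunks is the system's job."""
    def __init__(self):
        self.chunks: list[Chunk] = []

    def add_chunk(self, chunk: Chunk):
        self.chunks.append(chunk)

    def place_vm(self, qos: QoSPolicy) -> bool:
        for chunk in self.chunks:
            if chunk.fits(qos):
                chunk.vms.append(qos)
                return True
        return False  # QoS can't be met: time to buy another chunk

cluster = Cluster()
cluster.add_chunk(Chunk(iops=50_000, vcpus=32, capacity_gb=10_000))

vm = QoSPolicy(min_iops=5_000, vcpus=4, capacity_gb=500)
if not cluster.place_vm(vm):
    print("QoS cannot be met -- add another chunk")
```

The point of the sketch is the management surface: the only knobs exposed are per-VM QoS requirements, while the compute/storage mix inside each chunk stays invisible.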
The three offerings currently available are very different from each other. Nutanix and SimpliVity assume the presence of VMware, whereas Scale Computing, designed for the somewhat smaller customer that may not have gone down the path of server virtualization yet, is built around KVM. For this class of customer, what will matter are all the new benefits that come from server virtualization on top of the hyperconvergence benefits. Given its founders' background, SimpliVity started with data deduplication and compression, ensuring that data is collapsed to its smallest form inline, at the time of creation, without any performance impact. Nutanix, on the other hand, doesn't have built-in dedupe capability. There are other differences as well, such as cloud support on the back end and the ability to use the box as storage for existing servers. But beyond these differences, the conceptual similarities are more than enough.
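To illustrate what "collapsed to its smallest form inline" means in principle, here is a conceptual Python sketch of inline deduplication plus compression: data is reduced at write time, before it ever lands on disk. This is not SimpliVity's actual implementation; the fixed 4 KB blocks, SHA-256 fingerprints and zlib compression are illustrative assumptions.

```python
# Conceptual inline dedupe + compression: reduce data at write time.
# Illustrative only -- real systems use far more sophisticated schemes.

import hashlib
import zlib

BLOCK_SIZE = 4096  # assumed fixed block size

class InlineDedupeStore:
    def __init__(self):
        self.blocks = {}   # fingerprint -> compressed block, stored once
        self.files = {}    # name -> ordered list of block fingerprints

    def write(self, name: str, data: bytes):
        """Dedupe and compress each block inline, at the time of creation."""
        refs = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            fp = hashlib.sha256(block).hexdigest()
            if fp not in self.blocks:                   # new unique block:
                self.blocks[fp] = zlib.compress(block)  # store it compressed
            refs.append(fp)            # a duplicate costs only a reference
        self.files[name] = refs

    def read(self, name: str) -> bytes:
        return b"".join(zlib.decompress(self.blocks[fp])
                        for fp in self.files[name])

store = InlineDedupeStore()
store.write("vm1.img", b"A" * 8192 + b"B" * 4096)  # 3 blocks, 2 unique
store.write("vm2.img", b"A" * 8192)                # fully deduped vs. vm1
assert store.read("vm2.img") == b"A" * 8192
print(f"unique blocks stored: {len(store.blocks)}")  # 2, though 5 were written
```

Because duplicate blocks never reach the store, capacity, and in a real system back-end I/O as well, is consumed only by unique, compressed data.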
Just when we think an area of computing has stabilized, something changes so fundamentally that we have to rethink how a job should be done or how a business should be run. In the last decade, we've seen many such fundamental paradigm shifts. Keep an eye on hyperconvergence, as it's almost certain to affect you.
BIO: Arun Taneja is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.
This was first published in October 2012