There's more to storage infrastructure management than meeting performance and scalability needs. New architectures are doing a better job of managing storage resources.
We've been on a multi-decade crusade to address performance and basic storage management tasks: protecting data in place, and scaling and expanding our data storage systems to meet new requirements. But once performance, scaling and expansion issues are addressed, it will become clear that the last major challenge in the data center is storage management.
Storage management is a massive challenge, and the sheer scale and complexity of the task are why it always seems to be addressed last, after performance and other core storage features. But just because it's often overlooked or given a low priority doesn't mean it's not important. In fact, the efficiency with which storage is managed can make or break a data center.
Today, midrange storage systems run in the neighborhood of $3 to $5 per gigabyte at street prices. But storage management has always incurred a far greater cost when calculated on an annual basis. While advancements have lowered those costs over the past couple of years, management still adds another $5 to $10 per gigabyte per year, enough to eclipse the up-front cost of a storage system when those costs accumulate over a typical three- to five-year lifespan.
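To see why management dominates the lifetime bill, a quick back-of-the-envelope calculation is illustrative. The figures below are assumptions drawn from the midpoints of the ranges above, not vendor pricing:

```python
# Back-of-the-envelope lifetime cost per gigabyte.
# Inputs are illustrative assumptions from the ranges cited above.

def lifetime_cost_per_gb(purchase_per_gb, mgmt_per_gb_per_year, years):
    """One-time purchase price plus accumulated annual management cost."""
    return purchase_per_gb + mgmt_per_gb_per_year * years

# Midrange system at $4/GB up front, $7.50/GB/year to manage, 4-year lifespan
total = lifetime_cost_per_gb(4.00, 7.50, 4)
print(total)  # 34.0 -- management is roughly 7x the purchase price
```

Even at the low end of both ranges ($3/GB purchase, $5/GB/year management, three years), management still accounts for $15 of an $18 lifetime cost per gigabyte.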
For virtual infrastructures, the problem is even worse. There's a greater need for management of the virtual machine to physical storage interaction and, because of the density of workloads, there's also more of a need to manage data-in-place operations (snapshots, replication and so on). These two elements greatly increase storage complexity and the level of routine interaction with storage. Despite these needs, management is still too often overlooked or not valued strongly enough during the storage purchasing process.
Right now, the industry is in the throes of software-defined buzzology. Regardless of the specific software-defined fill-in-the-blank (SDx) technology, the focus is on greater business agility through dynamic, on-demand adaptability and programmatic manipulation of a newly abstracted logical infrastructure set free from its physical boundaries. It's something of a utopian vision, one where we get away from plugging things into each other only to discover they don't work together. So it's easy to sit back and hope SDx will solve all our management woes, but management is a far broader issue than programmatic manipulation.
If we miss the management boat this time, nowhere will the enterprise feel the pain more than in its storage environment. If we don't tackle management at the outset, software-defined storage (SDS) could scale into a tremendous nightmare. Logically and dynamically weaving connections together won't count for much if you can't identify and manage the connected resources.
But the outlook isn't all bad. I've never been fond of adding yet another layer to solve the management challenge; we should expect a comprehensive, well-integrated solution for our storage infrastructure without spending even more money on another product. A handful of vendors are delivering practical SDS today, some even aiming at a more ambitious SDS tomorrow, and a few of those solutions are also tackling parts of the management challenge.
On one hand, there are solutions with an element of SDS, such as converged infrastructure and hyper-convergence products that are trying to reduce complexity and enhance manageability by more closely coupling hardware and applications. A few that stand out are Hewlett-Packard's (HP) VirtualSystem/CloudSystem family with its recently announced, API-enabled OneView, Hitachi's Unified Compute Platform, IBM's PureSystems, Nutanix, SimpliVity and the VCE coalition.
There are also SDS solutions that aim to move storage entirely into software for reasons of portability, enhanced adaptability and/or complexity reduction. These solutions include virtual storage appliance (VSA) offerings from FalconStor, HP StoreVirtual, Nexenta, StorMagic and others, as well as VMware's own VSAN. By encapsulating the storage instance in the virtual infrastructure, these products can run anywhere, use otherwise stranded capacity, and often make use of unique virtual infrastructure integrations to make virtual storage management a bit less complex.
A few vendors, such as Gridstore and Tintri, are fundamentally rethinking storage integration in a somewhat more ambitious manner. Gridstore is breaking the storage controller apart from the storage capacity, so that storage functionality can be deployed closer to the application and be a bit more intelligent. We've had Tintri's Zero Management Storage in our test labs and have seen how it effectively takes the storage array entirely out of the equation by making everything virtual machine-centric, radically changing how storage is managed.
So, there is hope. Ultimately, however, it's up to you to determine whether that hope turns into a broad-based change in how storage is managed in our enterprises. If your storage vendor wants to talk SDS, tell them the conversation had better start with what they're going to do for storage management.
About the author:
Jeff Boles is a senior analyst at Taneja Group.