Toigo Partners International
Published: 27 Feb 2014
Could a software-defined infrastructure, with software-based controls and policies, be the answer to managing and allocating storage? Jon Toigo has his say on the subject.
Few columns here elicited as much pushback from industry insiders as my tongue-in-cheek rant about software-defined storage. In case you missed it, I argued, first, that storage infrastructure should always be defined by the application software that will use it and, second, that software-defined was just the latest meaningless market speak from an industry that changes up the rhetoric every six months in an attempt to sound fresh. While many techs agreed with me, marketing folks took umbrage. I dismissed their whining … er, criticisms … until I had occasion to chat with a chief technologist at a storage virtualization software company who gave me a somewhat different view. It changed my thinking, so I figured it would be good to share it here.
There's a theory gaining traction in some circles that the current flirtations with software-defined infrastructure, clouds and server virtualization aren't just knee-jerk reactions to economic pressures to cut IT costs, but a trend that has been in the works for a lot longer. Over the last few decades, we've fielded a lot of technology in a haphazard way, with little attention to its proper fit with business application requirements.
Case in point: IBM says we have way too much Tier-1 storage because everyone who's fielding a new app wants it to shine, so they host its data on low-capacity, high-performance storage whether that's necessary or not.
Moreover, we've allowed vendors to sell us what they want to sell us rather than what we need. This has, in turn, made management of heterogeneous infrastructures a nearly impossible task. Without management, there's only inefficiency and the need for rapid expansion of capacity until we're oversubscribed and underutilized.
That in turn enables vendors to sell ideas like unified storage (purchasing all storage from a single vendor, aka lock-in) and to make persuasive cases for "value-add" software such as compression, deduplication and so on that contribute little strategic value but a lot of revenue for the vendor.
Eventually, the hardware itself becomes a commodity -- all parts made in Taiwan, Thailand or Singapore -- and the value-add software becomes the mainstay of storage vendor revenues. Try reselling a NetApp filer on eBay; the software isn't transferable with the kit.
If software is the only value-creating element of the kit, then separating storage software from hardware makes sense. Storage is now a software function or should be; nobody makes money from selling hardware anymore.
If you're still tracking, this thinking leads inevitably to the conclusion that software-defined storage has a different meaning than what I critiqued in past columns. Storage is a set of services that need to be intelligently allocated to application data based on business rules and application accessibility requirements. Capacity is a service, as is performance, as is protection, as is retention. Software is increasingly used to carve up storage to create pools offering different combinations of service elements that are appropriate for different kinds of data. To the extent that the carving up is inhibited by hardware, we still operate in an old world of hardware vendor-defined storage. Conversely, to the extent that we've gone beyond array bezel logos and abstracted services away from proprietary kit, we're closer to the realm of software-defined storage.
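The "pools offering different combinations of service elements" idea can be sketched as policy-driven placement: each class of application data declares the services it needs, and software matches it to a pool that provides them. A minimal sketch follows; the service attributes, pool names and matching rules here are hypothetical illustrations, not drawn from any particular product.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ServiceProfile:
    """Hypothetical service elements a pool offers or a data class requires."""
    performance: str     # e.g. "high", "standard"
    protection: str      # e.g. "mirrored", "snapshot"
    retention_days: int  # minimum retention the pool guarantees


@dataclass
class StoragePool:
    name: str
    profile: ServiceProfile
    free_gb: int


def place(policy: ServiceProfile, size_gb: int, pools: list[StoragePool]):
    """Allocate from the first pool whose services satisfy the data's policy."""
    for pool in pools:
        p = pool.profile
        if (p.performance == policy.performance
                and p.protection == policy.protection
                and p.retention_days >= policy.retention_days
                and pool.free_gb >= size_gb):
            pool.free_gb -= size_gb
            return pool.name
    return None  # no pool meets the policy -- a human has to decide


pools = [
    StoragePool("tier1-flash", ServiceProfile("high", "mirrored", 30), 500),
    StoragePool("tier2-bulk", ServiceProfile("standard", "snapshot", 365), 4000),
]

# OLTP data demands performance; archive data demands retention.
oltp_policy = ServiceProfile("high", "mirrored", 30)
archive_policy = ServiceProfile("standard", "snapshot", 365)

print(place(oltp_policy, 100, pools))       # lands on tier1-flash
print(place(archive_policy, 1000, pools))   # lands on tier2-bulk
```

The point of the abstraction is that the policy names services, not hardware: nothing in the data's requirements mentions an array model or a vendor, which is what distinguishes this from hardware vendor-defined storage.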
This is important to understand if you want to build a dynamic data center going forward, one that can turn on a dime to provide the right kinds of services to business processes in a fast-paced 24/7 world. At least, that's the story my friend the chief technologist is telling.
This isn't to say that the storage application has become simpler, or that it's analogous to an automatic coffee machine that enables the user to push a button for the kind of storage desired; another button for a large, medium or small serving; one for the proper amount of performance; and yet another button to sweeten with the right amount of data protection.
While that kind of Starbuckification of storage is possible today with storage virtualization software products like DataCore's SANsymphony-V, unlike the automatic coffee machine, its use is hardly drool-proof. You need to actually know something about technology to provision the right kind of storage, and you need to know something about storage to allocate it intelligently. The Starbuckification idea, espoused by many cloud service providers, puts non-technical users in charge of allocating their own services, which, in my experience, is not a good idea.
Software is and remains a tool. Our storage needs to be managed and allocated by intelligent humans, with software-based controls and policies serving as a more efficient extension of our ability to translate business needs into automation support. That's another way of saying that things are as they always have been.
Maybe what we really need are smarter humans to better use the much-improved storage application.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.