Without centralized storage management and administration, data storage can devolve into an exercise in anarchy. Jon Toigo provides his thoughts on the subject.
There's an old joke about the parts of the human body arguing over who should be the boss. Ultimately, the brain argues that it should be the boss, since it gives purpose and direction to all the other parts. The other body parts agree, except for one that, for the sake of propriety, I will refrain from naming. This body part makes the case that, if it ceases operation, the system will suffer and the rest of the body, including the brain, will become feverish. The punch line is that you don't need brains to be the boss, you just need to be that body part -- a sort of condemnation of bosses everywhere.
In the realm of storage, we haven't yet reached that level of agreement in determining which component should provide the smarts of the overall infrastructure. And everyone seems to want to be the boss.
In early mainframe shops, systems-managed storage (SMS) established the brains of storage operations at the operating system layer. Hardware that attached to the backplane of the box, via bus and tag cabling, or later ESCON, FICON, et al., needed to comply with standards enabling its centralized administration and management. Hierarchical storage management provided a complementary facility for managing data across infrastructure based on data class, storage class and definable rule sets.
That modality held up until EMC and a few others started adding RAID controllers, local caching and management systems to their arrays, eroding the centralized storage management imposed by the mainframe OS. Things got more decentralized as server-based computing garnered more and more raised floor turf, nudging out big iron mainframes.
Pretty soon, we saw the emergence of the storage infrastructure we confront today -- barely able to be considered infrastructure at all, given the appalling lack of coherent administration and management. In a most anarchical state, every storage rig wants to be the boss. Array controllers have evolved into servers. Boot an EMC VMAX and you'll see a fleeting Windows Server 2008 R2 logo and copyright announcement; EMC's Clariion replacement sports a "look quick or you'll miss it" Microsoft Windows 7 logo. Whether they're running Windows or some Linux variant, arrays sport a functional server where their RAID and cache controllers used to be, providing the "brains" for delivering services on their rig ranging from the mundane (putting data, getting data) to the special (encryption, deduplication, internal tiering, mirroring to like arrays, snapshots, thin provisioning and so on).
In fact, it has proven to be these value-add services, rather than the array OS or commodity hardware componentry, that have provided the means for vendors to charge more year over year for their storage kit. Storage vendors like the anarchy because it lets them claim to have an edge over their competitors courtesy of their product's "smartness."
But anarchy isn't so good for consumers. Wrangling a coherent infrastructure from a bunch of arrays, all of which want to be the boss, is nearly impossible. Use storage resource management (SRM) software and you quickly find out how vulnerable your software provider is to the hardware makers' willingness to share their proprietary APIs. Everyone could adopt standardized management connections -- RESTful management stacks built on open World Wide Web Consortium (W3C) standards, as X-IO has done -- but that would level the playing field and reveal the simple truth that everyone is selling the same gear. No one wants to do that.
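To see why SRM vendors are at the mercy of proprietary APIs, consider a minimal sketch of the adapter problem. The vendor names, payloads and field names below are invented for illustration -- no real array API is implied -- but the shape of the problem is real: absent a common management schema, the SRM vendor must write and maintain one translator per array family, and loses coverage the moment a hardware maker withholds its API.

```python
# Hypothetical sketch: without a standard management schema, SRM software
# needs a hand-written adapter for every vendor's proprietary status payload.
# All vendor names and JSON field names here are invented for illustration.

def normalize(vendor, raw):
    """Map each vendor's proprietary status payload onto one common schema."""
    if vendor == "vendor_a":
        # Invented vendor A reports free space in TB under its own key names.
        return {"free_tb": raw["freeSpaceTB"], "health": raw["status"]}
    if vendor == "vendor_b":
        # Invented vendor B nests capacity and reports it in GB instead.
        return {"free_tb": raw["cap"]["availGB"] / 1024, "health": raw["ok"]}
    # The SRM vendor's real exposure: no shared API, no adapter, no coverage.
    raise ValueError(f"no adapter for {vendor}")

print(normalize("vendor_a", {"freeSpaceTB": 40, "status": "healthy"}))
print(normalize("vendor_b", {"cap": {"availGB": 10240}, "ok": "healthy"}))
```

A standardized RESTful management interface would collapse all of these adapters into one generic client -- which is precisely why, as argued above, vendors have little incentive to provide it.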
Last year, a leading analyst firm said the only way to reduce the cost of administering and managing storage infrastructure -- typically a 4x-to-6x multiplier of hardware acquisition expense -- was to buy all our kit from a single vendor. But that claim doesn't stand up to scrutiny: The larger vendors may have a range of storage kits to meet all the needs of a contemporary enterprise, but they typically don't provide universal cross-platform management for all their gear. Some of their hardware came from third-party acquisitions, and some products are simply rebranded gear from other vendors. The point is that a common management scheme was never considered, let alone designed into all the systems.
On-rig "value-add software," for whatever added smartness it's touted to deliver, also makes common management more difficult or impossible in some cases. What happens when you want an accurate capacity report, but some of your infrastructure has embedded "thin provisioning" that distorts its actual available capacity? You get the idea.
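The capacity-report scenario above can be made concrete with a small sketch. The array names and numbers are invented for illustration; the point is simply that a report summing whatever each array advertises as "available" will be inflated wherever thin provisioning lets an array advertise oversubscribed logical space rather than the disk actually behind it.

```python
# Hypothetical sketch: how embedded thin provisioning distorts a naive
# fleet-wide capacity report. Array names and figures are invented.

def advertised_free_tb(arrays):
    """Naively sum whatever each array claims to have available."""
    return sum(a["advertised_free_tb"] for a in arrays)

def physical_free_tb(arrays):
    """Sum only the capacity actually backed by disk."""
    return sum(a["physical_free_tb"] for a in arrays)

arrays = [
    # A conventional array advertises what it physically has free.
    {"name": "array-a", "physical_free_tb": 40, "advertised_free_tb": 40},
    # A thin-provisioned array advertises the unallocated slice of an
    # oversubscribed logical pool -- far more than the disk behind it.
    {"name": "array-b", "physical_free_tb": 10, "advertised_free_tb": 90},
]

print(advertised_free_tb(arrays))  # free capacity "on paper"
print(physical_free_tb(arrays))    # free capacity actually backed by disk
```

In this invented fleet the report shows 130 TB free while only 50 TB is physically there -- and without each vendor's proprietary API, the management tool has no way to tell the difference.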
The really stupid thing is that the "value-add brains on array" concept is now finding its way into architectural discussions of storage infrastructure. A couple of years ago, VMware sought to address its terrible I/O performance by creating special SCSI commands -- the vStorage APIs for Array Integration (VAAI) -- that "offload" functions like back-end mirroring to array controllers capable of handling them. VMware said this would offload up to 20% of the I/O workload it was currently (mis-)managing. In the process, the idea of value-add functionality at the array level became embedded in storage infrastructure planning and management.
This makes no sense except to perpetuate a storage sales model that bears no small resemblance to the business model of cocaine sales: The more you cut and dilute the product, the more money you make from its sale.
Next month, we'll take a look at alternatives to storage infrastructure management anarchy -- the challenge created in managing infrastructure in which all the components want to be boss. To paraphrase the old joke, you don't have to have brains to be a boss, you just need to sell some value-add software with your kit.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.