Data storage technologies keep getting better, but storage vendors may just be up to their old tricks.
It seems somehow strange to be publishing my inaugural column in Storage magazine’s last issue of 2011. So much ink has already seen paper (or pixels have been seen on screens) over the last 11 months: articles and columns expressing findings, theories and opinions about the value and benefits of, or limitations and challenges posed by, contemporary data storage products and the way we use them today. The situation sets the bar pretty high for another voice entering the fray to contribute something that will add real value to the discussion already in progress.
Equally disconcerting is the fact that I’m writing this column in a hotel room in Newton, Mass., not far from the Hopkinton home of the storage hardware market share leader, EMC. Anyone familiar with my work knows my views, which tend to be quite critical of both EMC’s products and its marketing and sales techniques. These views largely pre-date the burgeoning “gilded age” of storage in which a handful of behemoth vendors have gobbled up most of the smaller fry, reducing the options available to consumers for solving storage challenges in the process. I’ve had other axes to grind with Hopkinton over my 30-plus-year career in IT, and have been open and vociferous in expressing my views. Frankly, I was surprised and intrigued to be offered a column in this magazine under the circumstances, since the advertising revenue that keeps the publication afloat comes from precisely those vendors with whose products I so frequently find fault.
All of the above notwithstanding, I thank you, readers, for indulging me with some of your time and I promise to try not to waste any of it. I understand the challenges that most of you are facing. You’re shouldering the work of what used to be many, tasked with delivering ever higher service levels to your businesses with ever shrinking budgets, and at the same time endeavoring to wrangle a bunch of disparate storage technologies into some sort of coherent and manageable resource.
I read recently that more than a trillion transactions traverse the wires, cables and wireless spectrum of a medium-sized business data center every day. Those are a trillion miracles -- photons, electrons and radio waves that successfully complete round trips across the most hostile environments imaginable, carrying requests for data and returning responses to users and applications -- and you are the miracle workers who make them happen.
Of course, there’s no recognition of this, and no time to rest on your laurels even if there were. Truth be told, life is hard in storage land and it’s about to get a lot harder.
With the advent of the new gilded age, we’re seeing vendors return to old tricks, like isolating value-add functionality on array controllers where it can be used to lock in consumers and lock out competitors. In general, this design approach has limited merit because it inhibits cross-platform manageability, increases data routing complexity, drives up the cost of commodity array components, adds what are usually obscene software licensing costs to the kit, and introduces greater risk of failure into the data center environment. Embedded value-add software, vendor engineers have told me, is usually poorly validated because the interoperability test matrix is simply too complex and time-consuming to undertake and complete. Thus, it should come as little surprise that the 3,000 companies polled by CA Technologies earlier this year reported they had accrued more than 127 million hours of downtime last year, partly due to storage-related outages ("The Avoidable Cost of Downtime," May 2011, CA Technologies).
And, in this gilded age, it goes without saying that a proper behemoth storage vendor must also offer a “cloud architecture.” Using proprietary hardware/software stacks, each one is seeking to advance its own mainframe “mini-me” that combines a server hypervisor with a set of network protocols supported only by the vendor’s own switches and its “signature” value-add storage gear to deliver the ultimate “one-stop-shop.” They have enlisted analysts to preach the gospel of “single source” again. One Forrester analyst recently wrote that buying all technology from one vendor is “the only real way” to drive cost out of storage. That couldn’t be further from the truth, but it’s the mantra vendors are humming into the receptive ears of non-technical decision-makers in the front office of every firm I visit today. Many pine for the orderliness of the IBM mainframe data center of the 1970s while forgetting how the loss of leverage over a vendor translated to extraordinary capital costs in hardware and software, and delays in obtaining needed fixes and changes to products. We’ve forgotten the lessons of the past and are poised to repeat them.
Or maybe not. I intend to occupy this corner of Storage, the magazine, for the foreseeable future, and I invite everyone who’s interested to bring a tent, sleeping bag and their favorite storage issues so we can start an entirely new discussion: one about fixing the broken storage model before it fixes us. This gilded age promises to benefit the 1% of companies that still have deep pockets and unlimited budgets for buying storage. The other 99% need to start exploring viable alternatives.
BIO: Jon William Toigo is a 30-year IT veteran. He is CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.