Don’t let yourself be dazzled by bright lights and other storage bling -- the hardware might be cool to look at,...
but it’s the software that’s going to make a difference.
It’s always geeky fun to dig down and explore the nuances among storage hardware products, but I’m finding those discussions increasingly irrelevant. To paraphrase an insight offered by fictional Cosmonaut Lev Andropov in the 1998 movie Armageddon, “Components. American components, Russian components, all made in Taiwan!”
Simply put, all disk drives in enterprise arrays come from one of three sources. The chassis or trays into which drives are mounted come from one of four or five providers. Cable harnesses, connectors, adapters and so on all come from the same suppliers. Even the cabinets into which these components are mounted come from a common supply chain, with the key difference in their cost to vendors, I’m told, being color. Black is the most expensive choice.
That makes the bezel plate on the front of the box, customized with a logo and sometimes tricked out with neon lights, the real hardware differentiator.
Pardon me while I yawn.
Look, storage just isn’t sexy. That seems to be by design, dating back to the late ’50s/early ’60s when the IBM RAMAC guys discovered how to make disk platters rust (using the same paint that produced the pukey reddish color on the Golden Gate Bridge) so you could write data on them.
Storage just isn’t supposed to be to information technology what haute couture is in fashion. This devil doesn’t wear Prada.
The best arrays don’t require a lot of special care or hand washing. They’re designed to be hit hard, to deliver performance consistently and to demonstrate durability over their service life. Storage is supposed to be as reliable as a beat-up, all-weather work truck, not a slick luxury car that requires a thousand-dollar tune-up every week it’s driven in traffic.
Having exceeded my metaphor limit, I can now get to the reason for this piece. As is typical of this season in the storage industry’s marketing calendar, I’m being inundated with email and phone requests from vendor PR flacks seeking to brief me on their product lineups for the new year.
One told me they had improved on EMC’s VMAX architecture with their forthcoming rig, something that piqued my interest ever so slightly. So, I listened to the rest of the pitch: The vendor’s new rig will be tricked out with LED lights that make the unit “appear to breathe” as it processes I/O -- a much more dynamic effect than EMC’s static blue neon glow tubes.
I was floored. Killjoy that I am, my mind moved immediately to a consideration of all of the coal-fired power plants generating the electricity used by these breathing lights, and their airborne emissions that are linked to asthma attacks in children at recess in nearby school yards. Does anyone else think that making the array appear to breathe may be ultimately less important than whether my kids can breathe?
The PR flack took my laughter as encouragement and stressed that this feature was really, really important, since embellishing arrays with lights seems to get a positive response from otherwise jaded storage buyers. If true, I’m forced to conclude that IQs among storage buyers have dropped sharply since I last checked. That thought threw me into a funk over the holidays.
Ultimately, I’ve consoled myself with the idea that the real storage “infrastruggle” this year will have little to do with “breathing lights.” As I read the tea leaves, the battle over storage is shaping up on two distinct fronts. The first one is implied in the notion that hardware doesn’t count as much as software anymore. Storage decisions will increasingly focus on software functionality and compatibility.
As hardware becomes increasingly commoditized, software functionality is being added, either to array controllers in an effort to differentiate the common products of one vendor from those of another, or into an abstraction layer somewhere above and outside of the hardware itself to facilitate sharing of functionality across all rigs. Choosing between these strategies is one battle I expect to see many firms wage in 2012. Much ink will be spilled over the relative value of smarter storage arrays vs. smarter storage infrastructure, probably with VMware bringing the issue to a head with the introduction of its own variant of a storage virtualization hypervisor.
Another closely related battle will have to do with compatibility. Both of the aforementioned storage software approaches carry with them heavily nuanced issues of compatibility where an incorrect choice could well result in an expensive balkanization of storage. Storage array x may not work with storage virtualization software y, or even with application software z. This debate will cause a lot of skirmishing and bog down the prosecution of that front of the infrastruggle.
That, in turn, may well drive the other front in the storage war, which will take the form of a classic battle of homegrown storage solutions vs. outsourcing. Public storage clouds will be appealing to companies that (1) have laid off most of their storage-savvy techies through austerity budgeting; (2) have continued to grow their volume of data in unmanaged ways; and (3) are willing to believe the “marketecture” around “cheap, no maintenance, silver bullet disk drives in the sky” services. Hopefully, networking folks will be able to re-educate decision makers regarding the vicissitudes of latency and jitter in wide-area networking, correcting for the disinformation manifest in the product pitches of the cloudies and their backers among the deduplication software peddlers and telco service providers. The fact that smaller payloads and greater bandwidth don’t alter the physics of pushing electrons over distance on a shared pipe really needs more “illumination,” certainly more than the cabinets in which the disks are stored.
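The latency point deserves a number or two. A back-of-the-envelope sketch (my own illustration, not anything from a vendor pitch): for a synchronous, request/response storage protocol, each I/O must wait out a full round trip before the next one starts, so effective throughput is capped by round-trip time no matter how fat the pipe. The payload size (64 KB) and RTT figures below are illustrative assumptions, not measurements.

```python
def effective_throughput_mbps(payload_bytes: int, rtt_ms: float,
                              link_mbps: float) -> float:
    """Throughput of one synchronous I/O stream: each request pays the
    serialization time on the wire plus a full round trip for the ack."""
    serialization_s = payload_bytes * 8 / (link_mbps * 1_000_000)
    per_io_s = rtt_ms / 1000 + serialization_s  # wait for the ack each time
    return payload_bytes * 8 / per_io_s / 1_000_000

# The same 64 KB write on the same 1 Gbps link: a LAN round trip
# (~0.5 ms) vs. a WAN round trip (~40 ms). Bandwidth is identical;
# only distance changes, and throughput collapses anyway.
lan = effective_throughput_mbps(64 * 1024, 0.5, 1000)   # roughly 500 Mbps
wan = effective_throughput_mbps(64 * 1024, 40.0, 1000)  # roughly 13 Mbps
```

That is the whole argument in two lines of arithmetic: buying a bigger pipe shrinks the serialization term, but the round-trip term is set by distance and the speed of light, and no deduplication appliance repeals it.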
Assuming that anyone wants to consider fact-based analyses anymore, advancing a physics-based argument, supplemented by a hard-headed cost and risk assessment of cloud storage, might also stabilize the lines of the second front of the storage infrastruggle. That would give us a bit more time to consider strategically what needs to be done to win the real war of storage efficiency.
BIO: Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.