What criteria are you basing your data storage equipment purchases on: a prestigious label or real-world requirements? Jon Toigo explains why we need to start testing gear again.
As I write this column, the furor is beginning to die down around a scandal involving a popular (and very expensive) brand of yoga pants. The microfiber material used to make these pants apparently becomes transparent when the yoga enthusiast bends over or otherwise imitates the choreography of a pretzel. In a remarkable bit of chutzpah, the pants vendor's CEO has promised to make things right and to provide refunds or exchanges to anyone who returns their flawed merchandise. She also recommends testing the new product before finishing the exchange by donning the pants, bending over and asking the attendant if there are any visibility issues. Funny as it sounds, customers regard the CEO's offer as reasonable and fair.
The Yoga Pants Test, as I have come to call it, should also apply to data storage equipment purchases. As with the malfunctioning clothing, we're talking about similarly overpriced gear in the case of storage -- all made of the same components and mostly in the same factories, but differentiated by brand name, just like yoga pants. With the clothing, the branding is probably a tag attached to the waistband. (I wouldn't know, as I wear sweats whenever I'm not in a suit.) While hidden, the tag still provides the wearer with bragging rights during post-Pilates socials over steaming soy lattes.
With storage gear, the brand is on the bezel plate, which no one outside the data center ever sees. Presumably, the logo gives an IT pro bragging rights whenever he or she is at a conference or seminar, or lurking on Twitter and speaking with other storage-interested folks. Even a casual reference establishes the owner as a serious player, if the gear is considered "enterprise class," that is.
What makes a rig "enterprise class"? As near as I can tell, it's the price tag, both for the rig itself and for its warranty and maintenance agreement. It used to be that enterprise class referred to the type of drive used in the kit, like a 15K rpm Fibre Channel Seagate. But ever since Nexsan promoted itself to the front of the bus with its arrays of consumer SATA drives, and everyone else began embracing SAS drives, the drives themselves no longer provide a basis for caste.
If not the spinning rust, then what is it that makes a storage rig a member of the 1% rather than an outcast? Perhaps it was just that question, with its many nuances and complexities, that caused Gartner to re-establish (after a much-deserved hiatus) its "Magic Quadrant" for storage arrays. Surely, Gartner can divine the truly enterprise class from the steerage class, even if its criteria might have more to do with how many Gartner services a vendor purchases than with any empirical test data.
You can't blame Gartner entirely. Purchasers are also to blame for not establishing hardware selection criteria linked to any real requirements. When was the last time you characterized your workload to determine what kind of storage it actually required? According to IBM, there's more "tier one" storage deployed today than any other kind of disk array, mainly because every IT maven on the planet thinks that his or her latest application deserves (not necessarily requires) enterprise-class hardware. That's seriously misguided thinking that plays directly into the whole game of fashion branding.
It's also the kind of thinking that's likely to break the budgets of many companies in the near future. Our appetite for storage is only growing, and vendors are singing the praises of flash storage -- whether server side, in disk form factors or in the shape of flash-only storage arrays. "Anybody who's anybody," we're told, is deploying flash storage, regardless of workload requirements.
Heck, we're deploying flash kit to make VMware process guest machine I/O faster, even though it's VMware's code creating the I/O chokepoint in systems; faster storage isn't likely to make guest machines run any faster.
We're mistaking spoofing for storage performance: spoofing being the practice NetApp (and other storage vendors) use to make their piggish Write Anywhere File Layout (WAFL) and RAID scheme appear to perform acceptably well. In point of fact, NetApp front-ends its network-attached disk arrays with expensive, memory-laden Flash Cache cards that acknowledge writes and let applications go about their business while the data sits in memory queues waiting to be recorded to disk.
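The acknowledge-first trick described above can be sketched in a few lines. This is a minimal, hypothetical toy model of write-back caching in general -- all names are invented for illustration, and it is not NetApp's actual Flash Cache implementation -- but it shows why measured write latency reflects memory speed, not disk speed:

```python
from collections import deque

class WriteBackCache:
    """Toy write-back cache: acknowledge writes before they hit disk."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # dict standing in for the slow disk
        self.pending = deque()              # in-memory queue of unflushed writes

    def write(self, block, data):
        # The write is acknowledged immediately -- the application moves on
        # even though nothing has reached the disk yet. This is the "spoof":
        # the benchmark clock stops here, at memory speed.
        self.pending.append((block, data))
        return "ack"

    def flush(self):
        # Later, queued writes are destaged to the slow backing store.
        while self.pending:
            block, data = self.pending.popleft()
            self.backing_store[block] = data

disk = {}
cache = WriteBackCache(disk)
cache.write(0, b"hello")
assert disk == {}              # acknowledged, but nothing on "disk" yet
cache.flush()
assert disk == {0: b"hello"}   # data lands only after destaging
```

The gap between the acknowledgment and the flush is exactly where the performance illusion (and the data-loss risk on power failure) lives.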
How many times do folks deploy "faster" or "tier one" storage to deal with database speed issues caused by the improper allocation of indexes on the same spindles as block rows and columns? How many times is data laid out on improperly organized scale-out clusters, slowing access speeds and feeds? One vendor recently told me his users are increasingly storage illiterate, which makes them perfect targets for vendors who want to sell relationships rather than technological excellence.
In the final analysis, if we wanted to rightsize our storage infrastructure, align it with actual application requirements and drive cost to the bottom line, we would need to start testing data storage equipment again. The good news is that there are some testing rigs appearing in the market that can help even those lacking deep technical skills to make a pretty good go of it. I'm referring to a test rig from a little-known company called SwiftTest that I was introduced to last week.
Based on a couple of hours of research and conversations with CEO Philippe Vincent, I'm impressed with the work this firm is doing in the area of storage validation, and I encourage readers to visit the company website. Instead of collecting baseline IOmeter data, which basically black boxes the kit, SwiftTest is paying closer attention to workload generation and end-to-end testing to provide users with a better approximation of what kind of performance a specific rig with a specific interconnect to a server will deliver under a specific workload.
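The difference between black-box baselining and workload-driven testing can be illustrated with a tiny sketch. This is not SwiftTest's product -- it's a hypothetical toy generator whose parameters (70% reads, 4 KB blocks) stand in for a profile you would derive from characterizing your own application -- but it shows the idea of replaying a realistic mix against real storage and recording per-operation latency:

```python
import os
import random
import tempfile
import time

def run_workload(path, ops=200, block=4096, read_ratio=0.7, file_blocks=64):
    """Replay a mixed read/write profile against a file; report latency percentiles."""
    # Preallocate the test file so reads have real data to hit.
    with open(path, "wb") as f:
        f.write(b"\0" * block * file_blocks)
    latencies = []
    with open(path, "r+b") as f:
        for _ in range(ops):
            offset = random.randrange(file_blocks) * block
            start = time.perf_counter()
            f.seek(offset)
            if random.random() < read_ratio:
                f.read(block)                 # read path
            else:
                f.write(os.urandom(block))    # write path
                f.flush()
            latencies.append(time.perf_counter() - start)
    latencies.sort()
    return {
        "ops": ops,
        "p50_ms": latencies[ops // 2] * 1000,
        "p99_ms": latencies[int(ops * 0.99)] * 1000,
    }

if __name__ == "__main__":
    tf = tempfile.NamedTemporaryFile(delete=False)
    tf.close()
    print(run_workload(tf.name))
    os.unlink(tf.name)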
Now we're talking a real Yoga Pants Test for storage. It's worth a look.
About the author:
Jon William Toigo is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.