Many storage-networking vendors tend to over-hype or exaggerate the value proposition of their products. Hype is nothing new; it has been around since humans first attempted to sell products. The terms "snake-oil salesman" and "carnival barker" are 19th century descriptions of people who exaggerated or over-hyped products. "Best thing since sliced bread" and "it will even make your coffee in the morning" are 20th century examples of over-hype.
Typical storage/storage networking over-hyped 21st century terms include "intelligent," implying very smart, adaptable, expert or artificially intelligent (as in intelligent switches, intelligent directors, intelligent adapters, intelligent controllers, etc.); "automated," implying no human intervention or expertise is required (as in automated policies, automated
Unfortunately, there are no silver bullets or proven ways for IT managers to cut through the hype easily. Products and solutions do not come with a user-guide disclaimer proclaiming: "Forget what you heard from the sales team; this is the real truth and what they should have told you." IT administrators have consistently expressed to me their continuing frustration with over-hype and the misuse of language: the same words seem to have different meanings. It becomes far more confusing when attempting to compare different vendors with competing over-hyped claims. And when an established vendor uses fear, uncertainty and doubt (FUD) about competing startups, it's time to get out the antacid.
There is a methodology you can use to deal with hype and get to the value that matters. It requires an objective tool, and you have to borrow Missouri's "Show Me" motto.
The objective tool starts with a clear and concise understanding of what you and your organization want to accomplish with the technology. Don't assume you already know; even if you do, it does not matter. Be sure to articulate it clearly in writing. This is not a technology issue; it is a business or end-results issue, and writing it down puts all of the parties on the same page. From there, work backwards to define everything required to provide those end results -- these become your requirements. Make sure to include organizational and political requirements, such as financial stability, number of production installations, number of like references, etc. Rank each criterion and weight it by importance. Determine which are critical (can't live without) and which are merely nice-to-haves. Put all of them in a spreadsheet or matrix.
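The spreadsheet or matrix described above can be sketched in a few lines of code. This is a minimal illustration, not a recommended tool: the criteria names, weights and vendor scores below are hypothetical, invented purely to show the mechanics of weighting criteria and disqualifying vendors that fail a can't-live-without requirement.

```python
# Hypothetical weighted requirements matrix for scoring vendors.
# All criteria, weights and scores here are illustrative examples.
criteria = {
    # name: (weight, is_critical)
    "IOPS":                (5, True),
    "Data protection":     (5, True),
    "Scalability":         (3, False),
    "Production installs": (4, True),
    "Like references":     (2, False),
}

# Each vendor scored per criterion on a 0-5 scale (0 = fails to meet).
vendors = {
    "Vendor A": {"IOPS": 4, "Data protection": 5, "Scalability": 3,
                 "Production installs": 5, "Like references": 4},
    "Vendor B": {"IOPS": 5, "Data protection": 0, "Scalability": 5,
                 "Production installs": 4, "Like references": 5},
}

def evaluate(scores):
    """Disqualify on any failed critical criterion; else return weighted total."""
    for name, (weight, critical) in criteria.items():
        if critical and scores[name] == 0:
            return None  # a can't-live-without requirement was not met
    return sum(weight * scores[name]
               for name, (weight, _) in criteria.items())

for vendor, scores in vendors.items():
    result = evaluate(scores)
    print(vendor, "disqualified" if result is None else result)
```

Note the design choice: a vendor that fails any critical criterion is disqualified outright rather than merely losing points, which matches the article's rule that vendors who miss required specifications do not win the business.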
This objective tool takes much of the emotion (including frustration) out of evaluations and levels the playing field for the competing vendors. It cuts to the chase. Vendors can espouse all they want about how much more intelligent their product is than anyone else's. That's fine. But does it provide the specified IOPS, data protection, throughput, scalability, reliability, MTBF, MTTR, price/performance, service offerings and references? If they do not meet the required specifications, they do not win the business.

Step two sets the groundwork for quantifiable verification of vendor claims. If a vendor claims a billion IOPS, exabytes of capacity, or terabytes per second of throughput, require them to put those claims in writing. More often than not, there will be the usual caveat: "Your mileage may vary." This is when it becomes a bit dicey. Demand that they provide guaranteed, quantifiable and verifiable numbers in writing for your specific environment and applications. The end result is most likely to be a pretty wide range. Whatever it is, only use the lowest numbers.
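The "only use the lowest numbers" rule amounts to comparing each requirement against the low end of the vendor's guaranteed range. A minimal sketch, using hypothetical metric names, ranges and requirements that are not from any real vendor:

```python
# Hypothetical guaranteed ranges from a vendor's written claims.
# Per the rule above, plan only on the lowest guaranteed number.
guaranteed = {
    "IOPS":            (200_000, 1_000_000),  # (guaranteed low, claimed high)
    "Throughput GB/s": (2, 10),
    "Capacity TB":     (500, 2000),
}

# Hypothetical minimum requirements for this environment.
required = {"IOPS": 250_000, "Throughput GB/s": 1, "Capacity TB": 400}

def meets_requirements(ranges, needs):
    """Check each requirement against the LOW end of the written range only."""
    return {metric: ranges[metric][0] >= needs[metric] for metric in needs}

print(meets_requirements(guaranteed, required))
# In this example, IOPS fails: the guaranteed low of 200,000
# is below the required 250,000, even though the claimed high is not.
```

The point of the sketch is that the claimed high end never enters the comparison; a vendor whose best case dazzles but whose guaranteed floor misses the requirement still fails.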
The third step is the most important. A customer's greatest leverage comes before signing the contract and issuing the purchase order for storage/storage networking hardware, software or services. This is the time to run a trial in a controlled environment that is as close to the production one as possible. Make the vendors prove all of their claims in the trial. Odds are that if it does not work in the trial, it will not work in production. Even if it works in the trial, it still may not work as advertised in production; the more closely the trial environment mirrors production, the more likely the results will be repeated.
Granted, all of this is a lot of work. In the end, it will be well worth the effort.
About the author: Marc Staimer is president and founder of Dragon Slayer Consulting in Beaverton, Oregon. He is widely known as one of the leading storage market analysts in the network storage and storage management industries. His consulting practice of more than six years serves both the end-user and vendor communities.