Keeping up with solid-state storage requires some technical know-how, but sometimes flash vendors make the technology harder for users to understand.
You know those great Gartner graphs that show the hype cycle for IT technologies? The tech-tracking curve on the graph swings way up to indicate users’ inflated expectations, then plummets to the famed “trough of disillusionment” before inching up and settling into enlightenment and, eventually, productivity. These things always seem a little quirky, but they’re actually pretty accurate, especially for technologies that ultimately do catch on and get deployed in the real world. If you think about it, most techs go through a similar cycle.
For solid-state storage, I think we’re probably well beyond inflated expectations and hopefully no longer wallowing in the trough of disillusionment, but the arc that should deliver us to Gartner’s “slope of enlightenment” doesn’t seem to be tracking per the model. In the case of flash, there’s another valley to cross before we get to the “plateau of productivity,” and it’s shaping up to be a treacherous place.
We’ve been saying all along that the solid-state market is still a pretty techy place and you’re going to have to roll up your sleeves and dig into the technology to make the right decisions for your storage shop and company. That’s mainly because enterprise flash is evolving at a brisk pace. So the difference between a system that uses SLC flash and one that uses MLC can be hugely important, affecting cost, performance, longevity, capacity and just about everything that’s important about storage in the first place.
But like it or not, you’ll have to keep your propeller-head beanies on for a little while longer -- at least until flash’s tech development slows a bit and solid-state storage begins to look more like a physical commodity, like spinning disks.
But don’t plan on getting much help from the vendors purveying solid-state products. Their jockeying for position in a still-crowded flash field is putting a crimp in the curve and, in turn, delaying enlightenment. Right now, solid-state storage is at the point where confusion is threatening to overtake the technology itself, and we who follow this market find ourselves teetering on the brink of the valley of vagueness.
It might’ve taken us a little while, but we finally got how the dollars-per-gigabyte thing didn’t make much sense when comparing solid-state with spinning disk. That kind of comparison was an apples-and-oranges thing, like comparing a Ford Fiesta and a Ferrari Berlinetta based on sticker price per cubic foot of trunk space.
We now know that the real bang-for-your-buck comparison is all about performance -- IOPS. Now that makes sense, and it brings the price of solid-state down from the stratosphere.
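To see how the two metrics flip the picture, here's a back-of-envelope sketch. Every price, capacity and IOPS figure below is a made-up illustration, not a real product spec:

```python
# Back-of-envelope cost comparison for two hypothetical systems.
# All prices, capacities and IOPS figures are illustrative assumptions.

def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Dollars per gigabyte of raw capacity."""
    return price_usd / capacity_gb

def cost_per_iops(price_usd: float, iops: float) -> float:
    """Dollars per I/O operation per second."""
    return price_usd / iops

# Hypothetical flash array: pricier, far less capacity, far more IOPS.
flash_price, flash_gb, flash_iops = 100_000, 10_000, 500_000
# Hypothetical HDD array: cheap per GB, modest IOPS.
hdd_price, hdd_gb, hdd_iops = 50_000, 100_000, 10_000

print(f"Flash: ${cost_per_gb(flash_price, flash_gb):.2f}/GB, "
      f"${cost_per_iops(flash_price, flash_iops):.2f}/IOPS")
print(f"HDD:   ${cost_per_gb(hdd_price, hdd_gb):.2f}/GB, "
      f"${cost_per_iops(hdd_price, hdd_iops):.2f}/IOPS")
```

With these assumed numbers, flash looks 20 times worse on dollars per gigabyte but 25 times better on dollars per IOPS -- same two boxes, opposite conclusions, depending on which metric you pick.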
Recently, a handful of enterprising startups have taken a different tack and are pitching their products not as high-performance alternatives to hard drive systems but as 21st century high-capacity storage systems that can compete with spinning disk arrays. Boasting about their low dollars-per-gigabyte ratios, they gleefully shun the performance aspects. Confused? You should be, because there’s something wrong with that picture, right?
Part of what’s wrong is that everybody knows solid-state simply isn’t at the point where it can compete with traditional magnetic media. The technology itself is still more costly, and there just aren’t enough flash fabricators in the world to make enough of the stuff to seriously threaten the hard disk supply.
The other thing that’s questionable about those low, low price-per-gigabyte quotes we’re hearing more frequently is how the vendors arrive at those numbers. Straight up, solid-state can’t compete on price with hard disks, but these vendors claim to have squeezed every bit of available capacity out of flash products using compression, dedupe, and smoke and mirrors. By scrunching the data down, they claim that on a dollar-for-dollar basis they can stretch solid-state capacity to the equivalent of much-higher-capacity HDDs. I have two issues with that. First, it’s not a fair or accurate comparison if the hard disk system isn’t receiving the same data reduction treatment as the flash. Second, who knows how accurate those claims are? What kind of data was compressed and deduped in their tests? What’s the chance that a business will see those same reduction ratios?
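The arithmetic behind that first objection is easy to sketch. Here's a quick illustration with hypothetical prices, capacities and a hypothetical 10:1 reduction ratio (none of these numbers come from any real product):

```python
# Effective cost per *logical* GB after data reduction.
# All prices, capacities and ratios are hypothetical illustrations.

def effective_cost_per_gb(price_usd: float, raw_gb: float,
                          reduction_ratio: float) -> float:
    """Dollars per logical GB, given a logical:physical reduction ratio."""
    return price_usd / (raw_gb * reduction_ratio)

# Flash array with an assumed 10:1 dedupe/compression ratio.
flash = effective_cost_per_gb(50_000, 10_000, 10.0)      # $0.50/GB
# HDD array quoted raw, with no reduction applied -- the vendor's framing.
hdd_raw = effective_cost_per_gb(50_000, 100_000, 1.0)    # $0.50/GB
# The same HDD array with the *same* 10:1 reduction applied.
hdd_fair = effective_cost_per_gb(50_000, 100_000, 10.0)  # $0.05/GB

print(flash, hdd_raw, hdd_fair)
```

Under the vendor's framing, the two systems look like a wash. Apply the same reduction to both sides and the hard disk array is still an order of magnitude cheaper per gigabyte -- and that's before asking whether real-world data will ever reduce at 10:1.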
I feel myself descending into the valley of vagueness … or maybe I’ve been there for a while and just didn’t realize it.
Endurance has also been a concern with flash storage, and vendors have focused their efforts on making flash last longer and behave more reliably. And they’ve done a great job. But when they found it difficult to tell their endurance story using traditional storage terms (MTBF and so on), they tried a different vocabulary. It’s common to see claims like “You can fully write our 600 GB drive 30 times a day for five years -- more than 33 PB of data in all.” Maybe that’s a little clearer than the higher math needed to figure out MTBF, but does that mean if I only write 1 PB of data to that SSD, it will last 165 years? I wonder what the ROI on that is?
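The arithmetic behind that kind of claim is easy enough to check. A sketch, assuming decimal gigabytes and petabytes (with binary units the total comes out a bit higher, which may be where the "more than" comes from):

```python
# Sanity-checking a drive-writes-per-day endurance claim.
# Decimal units assumed; the quoted figures are the vendor-style example
# from the text, not a real product spec.
GB = 10**9
PB = 10**15

def total_bytes_written(capacity_gb: int, writes_per_day: int,
                        days: int) -> int:
    """Total bytes written if the full drive is rewritten that often."""
    return capacity_gb * GB * writes_per_day * days

five_years = 365 * 5
total = total_bytes_written(600, 30, five_years)
print(total / PB)  # ~32.85 -- roughly the quoted 33 PB

def lifetime_years(endurance_bytes: float, bytes_per_year: float) -> float:
    """Naive lifetime if the write budget is consumed linearly."""
    return endurance_bytes / bytes_per_year

# Write only 1 PB over five years (0.2 PB/year) against a 33 PB budget:
print(lifetime_years(33 * PB, 1 * PB / 5))  # 165.0 -- the column's quip
```

Of course flash doesn't actually fail on a linear write-budget schedule -- retention, wear leveling and plain old component aging all get a vote -- which is exactly why the 165-year extrapolation is a punchline, not a spec.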
BIO: Rich Castagna is editorial director of the Storage Media Group.