At Taneja Group, we're often told by hybrid and all-flash array vendors that their particular total cost of ownership (TCO) is effectively lower than the other guy's. We've even heard vendors claim that, by taking certain particulars into account, the per-gigabyte price of their flash solution is lower than that of spinning disk. Individually, the arguments sound compelling, but stack them side by side and you quickly run into apples-and-oranges issues.
Storage has many factors that should be profiled and evaluated, such as IOPS, latency, bandwidth, protection, reliability and consistency, and these must be matched against client workloads with their unique read/write mixes, burstiness, data sizes, metadata overhead and quality of service/service-level agreement requirements. Standard benchmarks may be interesting, but the best way to evaluate storage is to test it under your particular production workloads; a sophisticated load generation and modeling tool like that from Load DynamiX can help with that process.
But as analysts, when we try to make industry-level evaluations hoping to compare apples to apples, we run into a host of half-hidden factors we'd like to see made explicitly transparent if not standardized across the industry. Let's take a closer look.
Flash is faster
Byte for byte, flash provides better native performance than traditional storage. If extra spinning disks are currently being used for short-stroking, it's fair to start a flash TCO comparison against that inflated hard disk drive (HDD) Capex, and all the wasted HDD capacity should be discounted in the equation too. In performance-challenged scenarios, flash today clearly improves on density, footprint, power and other attributes.
But the price argument we're hearing is about capacity -- $/GB. Apparently, there aren't enough performance-hungry, cost-is-no-object applications out there to keep all the flash vendors occupied. With the cost of flash itself dropping, and the bulk of enterprise data doing just fine being served at traditional storage speeds (for the moment at least), the argument turns to cost/capacity comparisons. Could flash now even start replacing hard drives in more than the performance storage tier?
Flash cost/capacity factors
When flash vendors compare their wares to each other's, they often start with flaming about eMLC vs. cMLC vs. any other kind of xLC. If a certain type of flash is more expensive, it's usually because it offers greater endurance and reliability. Violin Memory (and soon Avalanche Technology) builds its own flash components up from chips to ensure top performance and reliability all the way down the stack. However, you'll find vendors like Pure Storage arguing that their overall design allows for the use of cheaper consumer-grade solid-state drives, leading to a lower effective capacity cost. Be sure to consider the long-term costs: What's the expected lifetime of the array? Is there an upgrade path? Are there guarantees and, if so, for how long and how much?
Next, we're often presented with some low-level features relating to capacity optimization. For example, it's possible to avoid reading and writing certain common I/O patterns, such as a page full of zeros, by having the storage operating system simply point to a virtual zero page. For any storage system, not actually writing zero pages can save a ton of capacity footprint. And avoiding those zero writes in flash means a longer lifetime for the media.
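To make the idea concrete, here's a minimal sketch of zero-page elimination. The class and names are hypothetical, not any vendor's implementation: a write of an all-zero page is never sent to media; the volume just maps that logical page to a single shared "virtual zero page."

```python
# Hypothetical sketch of zero-page elimination: an all-zero write becomes
# a pointer to one shared zero page, saving capacity and flash wear.

PAGE_SIZE = 4096
ZERO_PAGE = b"\x00" * PAGE_SIZE  # the single page all zero-writes map to

class ThinVolume:
    def __init__(self):
        self.pages = {}           # logical page number -> page contents
        self.physical_writes = 0  # pages actually written to media

    def write(self, lpn, data):
        if data == ZERO_PAGE:
            # Detected a zero page: record the mapping, skip the media write.
            self.pages[lpn] = ZERO_PAGE
        else:
            self.pages[lpn] = data
            self.physical_writes += 1

    def read(self, lpn):
        # Unwritten (thin) pages also read back as zeros.
        return self.pages.get(lpn, ZERO_PAGE)

vol = ThinVolume()
vol.write(0, ZERO_PAGE)            # no physical write occurs
vol.write(1, b"\x01" * PAGE_SIZE)  # real data hits the media
print(vol.physical_writes)         # -> 1
```

The same mapping trick is why thin-provisioned volumes cost nothing for unwritten space: a read of an unmapped page simply returns the shared zero page.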
A NAND-based flash cell can wear out if written to repeatedly. There are many schemes for wear leveling, including moving writes around the available free space and reserving space ahead of time to replace worn-out cells. Obviously, wear leveling effectiveness plays into the expected lifetime calculation. There's certainly room for competition; for example, Hewlett-Packard (HP) touts its Adaptive Sparing feature, which can net back 20% of system-reserved flash capacity for active use.
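A rough lifetime estimate shows why wear leveling and write amplification matter to TCO. The figures below are illustrative placeholders, not vendor specs: with ideal wear leveling, writes spread over all usable cells, so lifetime scales with usable capacity and program/erase (P/E) endurance and shrinks with write amplification.

```python
# Back-of-the-envelope flash lifetime estimate (illustrative numbers only).
# Assumes ideal wear leveling spreads writes evenly across usable capacity.

def lifetime_years(usable_tb, pe_cycles, daily_writes_tb, write_amp):
    total_write_budget_tb = usable_tb * pe_cycles     # TB writable before wear-out
    effective_daily_tb = daily_writes_tb * write_amp  # host writes inflated by WAF
    return total_write_budget_tb / effective_daily_tb / 365

# 10 TB usable, 3,000 P/E-cycle eMLC, 5 TB/day of host writes, WAF of 2:
print(round(lifetime_years(10, 3000, 5, 2), 1))  # -> 8.2 (years)
```

Under this model, a feature like Adaptive Sparing that returns reserved capacity to active use raises `usable_tb`, which lengthens the estimated lifetime proportionally.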
We also need to consider the impact on capacity due to the chosen data protection scheme. Across flash-based devices we see variations on replication, erasure coding or RAID, each with varying factors of data redundancy that eat into available capacity, just as with HDD storage. There's nothing surprising here, except it's easier to distribute data across flash nodes without worrying about rotational disk delays.
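The capacity tax of each protection scheme is easy to quantify. The redundancy factors below are generic assumptions for illustration, not any vendor's actual figures: raw capacity is simply divided by the scheme's storage overhead.

```python
# Rough usable-capacity math for common protection schemes.
# Overhead factors are assumed, generic values for illustration.

def usable_tb(raw_tb, scheme):
    overhead = {
        "2x-replication": 2.0,    # two full copies of every byte
        "raid6-8+2": 10 / 8,      # 8 data + 2 parity drives
        "erasure-10+4": 14 / 10,  # 10 data + 4 coding fragments
    }
    return raw_tb / overhead[scheme]

for s in ("2x-replication", "raid6-8+2", "erasure-10+4"):
    print(s, round(usable_tb(100, s), 1))  # 100 TB raw under each scheme
```

For 100 TB raw, that works out to 50, 80 and roughly 71 TB usable, respectively, which is exactly the kind of spread that makes naive raw-$/GB comparisons misleading.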
We need to look at thin provisioning features, specifically how deeply and effectively those capabilities are implemented. Do they include thin snaps, thin clones (or active snaps), thin replicates and/or thin copies? Keeping a volume thin throughout its lifecycle, and enabling conversion from thick to thin, saves a lot of space. For highly clone-able workloads like virtual desktop infrastructure, thin clones can multiply the effective capacity of an array by orders of magnitude.
Consider the big reducers: deduplication and compression. While not every workload can be deduplicated effectively, most non-database workloads will dedupe at decent ratios anywhere from 2:1 to 10:1 or more. Of course, your actual mileage will vary greatly. And there are varying levels of dedupe efficiency and consistency, depending on how well the vendor makes judicious use of excess CPU or leverages custom firmware. EMC's XtremIO globally dedupes all I/O inline, while HP's 3PAR StoreServ leverages its embedded ASIC to dedupe workload by workload. IBM's FlashSystem comes with real-time compression. And upstart Kaminario offers both global inline "selectable" dedupe and compression.
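The effect of data reduction on the price argument is straightforward arithmetic. The prices and ratios here are hypothetical: the raw $/GB is divided by the combined dedupe-times-compression ratio the workload actually achieves.

```python
# Effective $/GB after data reduction (hypothetical price and ratios).

def effective_price_per_gb(raw_price_per_gb, dedupe_ratio, compression_ratio):
    # Combined reduction multiplies: 4:1 dedupe x 2:1 compression = 8:1.
    return raw_price_per_gb / (dedupe_ratio * compression_ratio)

# $5/GB raw flash with 4:1 dedupe and 2:1 compression:
print(effective_price_per_gb(5.0, 4.0, 2.0))  # -> 0.625, i.e. ~$0.63/GB
```

This is why vendors insist on quoting "effective" capacity; the catch, as noted above, is that the reduction ratio is a property of your workload, not of the array.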
You might question whether flash is even needed to implement some of these technologies, as these features are generally implemented in the controller, not on the media. In general, newer generation flash-ready storage architectures are intentionally built around these features to apply them at flash speeds. Because of the speed-up from flash underneath, optimizations like dedupe and compression can be done inline, and thus can be applied to all workload I/O that can make use of them.
Is flash the answer to everything?
Thorough $/GB comparisons should include Opex, with flash showing big reductions in footprint, power/cooling, and even justifications for lower management and admin costs.
There are other cost factors as well, like being able to start small and scale up as needed, additional storage software costs, investment protection/future-proofing and management overhead reductions.
Even so, we think an ideal pricing comparison should be based on $/application not $/GB, multiplied by factors accounting for the business benefits recognized from increased performance and service consistency. But bean counters being what they are, we're probably stuck with making $/GB justifications.
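Folding Opex into the $/GB comparison, as argued above, can flip the result. All figures in this sketch are made-up placeholders for illustration; the point is the shape of the calculation, with the all-flash array's usable capacity inflated by an assumed 4:1 data reduction ratio.

```python
# Simple 3-year TCO-per-GB comparison (all figures are illustrative).

def tco_per_gb(capex, annual_opex, usable_gb, years=3):
    return (capex + annual_opex * years) / usable_gb

# HDD array: cheap up front, higher power/cooling/admin Opex.
hdd = tco_per_gb(capex=100_000, annual_opex=30_000, usable_gb=200_000)

# All-flash array: pricier Capex, lower Opex, 150 TB raw at 4:1 reduction.
afa = tco_per_gb(capex=250_000, annual_opex=8_000, usable_gb=150_000 * 4)

print(round(hdd, 2), round(afa, 2))  # -> 0.95 0.46
```

Swap in your own Capex, Opex and reduction numbers; the ranking is sensitive to all three, which is exactly why vendor-supplied comparisons deserve scrutiny.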
Depending on your traditional storage baseline, many of today's flash solutions are looking quite cost/capacity favorable. Some all-flash vendors are already claiming they can slide in well under $2/GB, with hybrids like Nimble and Tegile even lower. And if flash at that price works under your workloads, you might be fully migrating to flash faster than you thought possible.
About the author:
Mike Matchett is a senior analyst and consultant at Taneja Group.