All-flash arrays are now the same price as hard disk arrays, or at least that is what AFA vendors would like you to believe. For the most part, those parity claims can be substantiated. However, there are a lot of variables in determining the exact price per gigabyte.
To achieve their price parity claims, all-flash array vendors take advantage of the continuing drop in NAND flash pricing and the increasing availability of data efficiency technologies like deduplication and compression.
Lower flash pricing, but there is a catch
The raw price per gigabyte of flash is declining rapidly due to an increase in production and density per flash cell. When flash was first delivered to the enterprise, single-level cell flash -- the ability to write one bit of data per cell -- was the standard. Then multi-level cell flash -- two bits per cell -- became the norm, and today, almost all flash arrays leverage MLC NAND. In the last six months, we've seen the adoption of triple-level cell -- three bits per cell -- in enterprise-class AFAs. With each increase in density, all-flash array pricing came closer to reaching HDD price parity.
The downside to the density increase is a decrease in flash durability: the more bits a drive packs into each cell, the faster the flash modules wear out. However, an AFA largely mitigates the impact of faster wear. Most AFAs have redundant components, so if a NAND module wears out, others are available to take its place. The majority of all-flash array vendors also overprovision the drives, presenting only a portion of each drive's capacity -- perhaps 75% -- to the storage system. Overprovisioning spreads writes across more NAND cells; while it sacrifices some capacity, it extends the apparent life of the drive.
When dealing with raw capacity, the math calculations for flash pricing are relatively easy. The only variable is the amount of flash allocation for redundancy. The problem, at least at this point, is that AFAs cannot reach price parity with HDD arrays when compared solely on a raw price-per-gigabyte calculation. As a result, almost every AFA vendor leverages data efficiency techniques to fulfill the price parity claim.
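As a rough sketch, the raw-capacity math looks like the following. The drive price, overprovisioning percentage and redundancy reserve used here are hypothetical illustrations, not vendor figures:

```python
def usable_cost_per_gb(price_per_raw_gb, overprovision_pct, redundancy_pct):
    """Raw $/GB adjusted for the capacity held back for overprovisioning
    and redundancy; only the remaining fraction is presented to hosts."""
    usable_fraction = (1 - overprovision_pct) * (1 - redundancy_pct)
    return price_per_raw_gb / usable_fraction

# Hypothetical figures: $0.40/GB raw flash, 25% overprovisioned,
# 10% of capacity reserved for redundancy.
print(round(usable_cost_per_gb(0.40, 0.25, 0.10), 3))  # -> 0.593
```

Even in this simple model, holding back roughly a third of the raw capacity pushes the effective price per usable gigabyte well above the raw flash price, which is why raw-capacity comparisons alone do not get AFAs to HDD parity.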
The fuzzy math of data efficiency
Data efficiency techniques like thin provisioning, data compression and deduplication are now standard in most AFAs. Vendors consider them safe for organizations to use and, in most cases, the extra performance of an all-flash array can deliver the data efficiency benefit without a noticeable impact to performance.
Thin provisioning eliminates the need to allocate storage capacity until it is actually needed. Without thin provisioning, an organization would have to allocate fixed amounts of capacity up front based on the anticipated demands of an application. Thin provisioning saves capacity by not leaving it imprisoned on a particular server or virtual machine.
Compression removes redundant data within a file, while deduplication removes redundant data across files. The effectiveness of these techniques is largely dependent on the similarity of the data an organization has in its primary data set.
Most vendors claim a 5:1 data efficiency rate when using a combination of deduplication and compression. This means an organization with 50 TB of data may only need 10 TB of capacity to store that data. Again, each organization's data set is different -- some will see a much higher rate of efficiency where others may see a much lower one.
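The arithmetic behind such a claim is easy to sketch. The 5:1 ratio below is the vendor's claim, not a guaranteed outcome, and the raw price per gigabyte is a hypothetical figure:

```python
def required_capacity_tb(logical_tb, efficiency_ratio):
    """Physical capacity needed to store a logical data set
    at a given data efficiency ratio (e.g. 5 for 5:1)."""
    return logical_tb / efficiency_ratio

def effective_price_per_gb(raw_price_per_gb, efficiency_ratio):
    """Effective $/GB after deduplication and compression."""
    return raw_price_per_gb / efficiency_ratio

print(required_capacity_tb(50, 5))        # -> 10.0 (the 50 TB example above)
print(effective_price_per_gb(0.40, 5))    # hypothetical $0.40/GB raw -> 0.08
```

The same functions make the risk obvious: if the realized ratio comes in at 3:1 instead of 5:1, the required capacity for that 50 TB data set jumps from 10 TB to roughly 16.7 TB.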
Generally speaking, virtualized desktop and server environments will see more benefit from deduplication, while database environments will see a greater benefit from compression. But calculating an exact number is very difficult, if not impossible. In addition, some vendors' deduplication algorithms are marginally more efficient than others.
How much capacity to buy
The problem with making an AFA purchase based on the fuzzy math of data efficiency is the vagueness in flash pricing -- there is no way to know the actual price per gigabyte. There are three schools of thought on how to purchase.
- Take a very conservative approach and buy with the assumption that there will be no gains in capacity from data efficiency. For almost all data centers, this approach will result in the purchase of too much capacity. This is particularly problematic in the world of flash, which has dramatic price declines and technology advancements.
- Assume a 2.5 times data efficiency gain. While this will still likely provide the enterprise with excess capacity, it should not be as severe as the above approach.
- Take the vendor at its word. This is typically not a good idea, but in this instance it may be a worthwhile strategy. Several vendors now provide data efficiency guarantees with their systems. If for some reason the system does not reach the agreed-upon efficiency ratio, the vendor will provide extra capacity.
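To see how far the three approaches diverge, here is a sketch comparing how much physical capacity each would have you buy for the same data set. The 60 TB data set and the 5:1 vendor claim are hypothetical illustrations:

```python
def capacity_to_buy_tb(logical_tb, assumed_ratio):
    """Capacity to purchase when assuming a given efficiency ratio."""
    return logical_tb / assumed_ratio

logical_tb = 60  # hypothetical primary data set

for label, ratio in [("conservative (no gain, 1:1)", 1.0),
                     ("middle ground (2.5:1)", 2.5),
                     ("vendor claim (5:1)", 5.0)]:
    print(f"{label}: buy {capacity_to_buy_tb(logical_tb, ratio):.1f} TB")
```

The conservative approach buys five times as much flash as the vendor-claim approach, which is why an efficiency guarantee that backstops the middle numbers is so attractive.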
Trying to predetermine the actual price per gigabyte of an all-flash array is almost impossible if the vendor leverages data deduplication and compression with its system. IT professionals should either compare raw flash pricing between vendors to get an apples-to-apples price comparison or apply an across-the-board efficiency ratio to all the systems under consideration. They should also look for vendors that provide a data efficiency guarantee with their systems to remove the pressure of getting the capacity calculation perfectly correct.