There's no denying that some applications with a relentless need for IOPS will require all-flash arrays. But most of your workloads will do just fine with well-designed hybrids.
Over the past three years the choices for flash-based storage products have skyrocketed. We now have PCI Express card and disk form-factor products for the server, caching devices that can be installed in the network, standard disk form-factor solid-state drives for traditional array architectures, caching in front of a traditional array and, of course, 100% flash-based arrays.
No storage array vendor wants to be left out of the "flash revolution." That's hardly a surprise. Flash technology is a godsend and its timing couldn't be better given the magnitude of the data to be stored, accessed, moved and analyzed in this era of Web 2.0 and log-data-spewing machines.
But with variety comes the difficult part of choosing the right flash implementation for the job. The choice is further complicated by every vendor jockeying for leadership in this lucrative area; performance, endurance and other claims are often exaggerated, to say the least. So what's an overworked storage manager to do? It's a big topic that would require extensive discussion, so here we'll focus on all-flash arrays and the kinds of application requirements they're appropriate for. We'll look at the other product alternatives and their implementation considerations in future columns.
The QoS issue with all-flash arrays
Recently, I've seen literature from some all-flash vendors claiming that the only way you'll be able to guarantee quality of service (QoS) for an application is if you use an all-flash array. The thesis is that performance can be controlled independently of capacity level, and no adjacent application can ever cause a QoS issue for the application in question. When viewed strictly at the computer science level, the statement is hard to argue with. Since access to any part of flash is independent of whatever is happening elsewhere, the IOPS (or throughput or latency) delivered to an application will be a constant, assuming correct sizing of the array for the application. Granted, there are some assumptions that latency and performance remain constant over time -- that was indeed an issue in early flash systems -- but most products have solved this problem. So the claim that QoS with no ifs, ands or buts can only be delivered by a 100% flash array is correct.
However, in most application environments, excellent QoS can be delivered by hybrid systems that use both flash and hard disk drives (HDDs), as long as the design considered the QoS requirements from the outset. Mind you, this isn't true for all currently available hybrid systems. The best QoS control is realized with systems designed from scratch with flash and QoS in mind. Contrast this with systems where a flash drive is added simply as a replacement for a hard disk drive. Well-designed hybrids can handle 80%-plus of the applications we normally deal with on a day-to-day basis, so they can be excellent and lower-cost alternatives to all-flash arrays.
Of course, other system design principles still apply. In other words, you can't run 10 I/O-intensive applications as virtual machines and expect acceptable QoS for each one if the total IOPS delivered under the best circumstances fall short of that requirement. But that's also true for an all-flash array. Good design requires balancing the needs of the applications with the capabilities of the array, regardless of the physical configuration and media used. And if these principles are applied thoughtfully, you may not need an all-flash array.
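The sizing principle above can be sketched in a few lines of code: the combined IOPS demand of consolidated workloads must fit within what the array can actually deliver, whatever the media. All figures below are illustrative assumptions, not measurements from any particular product.

```python
def iops_budget_ok(vm_demands_iops, array_deliverable_iops, headroom=0.8):
    """Return True if the summed per-VM IOPS demand fits within the
    array's deliverable IOPS, keeping some headroom for bursts."""
    total_demand = sum(vm_demands_iops)
    return total_demand <= array_deliverable_iops * headroom

# Ten I/O-intensive VMs, each needing ~8,000 IOPS (80,000 total),
# against two hypothetical array sizings:
demands = [8000] * 10
print(iops_budget_ok(demands, 100_000))  # True: 80,000 fits within 80,000 budget
print(iops_budget_ok(demands, 90_000))   # False: 80,000 exceeds the 72,000 budget
```

The same check applies whether the 100,000 IOPS come from an all-flash array or a well-designed hybrid; if the budget fails, no media choice will rescue the QoS.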
Does dedupe level the $/GB playing field?
Some all-flash array vendors claim their systems are so well designed -- with built-in dedupe, for example -- that their effective price per gigabyte of capacity is the same as or lower than that of HDD-based systems. Their argument: Why bother with hybrids when you can get an all-flash system at the same price? Based on what I've seen so far, some all-flash systems do come close, but all-HDD and hybrid arrays are still less expensive than all-flash arrays, even with dedupe and compression assumptions figured in. And be mindful of all-flash arrays whose performance is compromised when dedupe and/or compression is turned on; remember, you're considering an all-flash array for its performance, and anything that gets in the way of that is heresy. The other factor to keep in mind is the type and sophistication of the storage services available with the array; after all, it's those features that make an array useful in the first place.
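The arithmetic behind the vendors' claim is simple: effective $/GB is raw $/GB divided by the data-reduction ratio. The prices and ratios below are hypothetical assumptions chosen only to illustrate the calculation, not quotes for any real product.

```python
def effective_cost_per_gb(raw_cost_per_gb, reduction_ratio):
    """Effective $/GB after dedupe/compression (e.g. ratio 5.0 means 5:1)."""
    return raw_cost_per_gb / reduction_ratio

# Hypothetical list prices:
all_flash = effective_cost_per_gb(10.00, 5.0)  # $10/GB raw, assuming 5:1 reduction
hybrid = effective_cost_per_gb(1.50, 1.0)      # $1.50/GB raw, no reduction assumed

print(f"all-flash effective: ${all_flash:.2f}/GB")  # $2.00/GB
print(f"hybrid effective:    ${hybrid:.2f}/GB")     # $1.50/GB
```

Under these assumed numbers, dedupe narrows the gap considerably but doesn't close it -- and the calculation only holds if turning on dedupe doesn't sap the performance you bought the array for.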
The key point is to understand that most of your workloads will do just fine with well-designed hybrids. But there's no denying that when the application needs an all-flash array it needs it, so reserve them for those special purposes, at least for now. Typically, applications that are candidates for all-flash arrays are relentless in their need for IOPS without any periods of inactivity or reduction in system resource requirements.
It's easy to see a future where flash becomes pervasive. Some think HDDs will disappear completely. Even if that were to come to pass, it won't happen anytime soon. Pragmatically, we need to use flash intelligently alongside HDD-based systems for the foreseeable future.
About the author:
Arun Taneja is founder and president at Taneja Group, an analyst and consulting group focused on storage and storage-centric server technologies.