
All-flash storage arrays: Performance vs. function

George Crump offers criteria to help IT pros decide whether performance or function is most important when choosing all-flash storage arrays.

Nearly every storage vendor now offers all-flash storage arrays, and IT professionals are beginning to recognize the need for these high-performance storage systems. But how does an IT pro decide which of the many all-flash arrays are best suited for their organization and performance demands?

Performance vs. function

As the all-flash storage array market begins to mature, there are two categories of arrays emerging.

The first category consists of arrays designed from the ground up as all-flash systems. They typically have optimized hardware designs that focus on extracting the maximum possible performance from the flash within the array. The vendors in this space are almost all emerging technology companies or startups. In most cases, their focus on hardware and performance comes at the expense of storage software services: features such as snapshots, replication and cloning that many storage administrators now count on to do their jobs.

These arrays are known for generating millions of IOPS per system. However, there really is no established method for how those high IOPS numbers are obtained. They can be generated from a single workload or multiple workloads accessing the system at the same time.

The other category is made up of all-flash arrays that are more feature-oriented. These are typically systems from established vendors, as well as a few startups, that choose to focus on software functionality (providing a feature-rich experience), often at the expense of maximum performance. Typically, these systems either retrofit the established vendor's legacy arrays with solid-state drives (SSDs) or, in the case of a startup, use off-the-shelf hardware to keep costs down.

These systems can often generate 200,000 to 400,000 IOPS per system. Some scale-out, software-rich systems will claim an aggregate performance of millions of IOPS as well but, as mentioned above, the devil is in the details. They typically have a performance limit per volume or per node within the scale-out cluster. This means they can scale to millions of IOPS like the performance-focused systems described above, but it takes many nodes to get there, and seeing that extreme performance requires multiple workloads all running concurrently. A scale-out system cannot deliver millions of IOPS to a single workload or thread.
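The distinction between aggregate and per-workload IOPS can be sketched in a few lines. This is an illustrative model only; the node count and per-node limit below are hypothetical numbers, not figures from any specific product.

```python
# Hypothetical figures: an 8-node scale-out cluster where each node
# (or volume) tops out at 250,000 IOPS.
PER_NODE_IOPS = 250_000
NODE_COUNT = 8

def aggregate_iops(nodes: int, per_node: int) -> int:
    """Peak cluster-wide IOPS, achievable only with workloads on every node."""
    return nodes * per_node

def single_workload_iops(per_node: int) -> int:
    """One workload or thread is bounded by a single node's (or volume's) limit."""
    return per_node

print(aggregate_iops(NODE_COUNT, PER_NODE_IOPS))   # -> 2000000
print(single_workload_iops(PER_NODE_IOPS))         # -> 250000
```

The cluster can honestly claim 2 million IOPS in aggregate, yet any one workload still sees only the 250,000 IOPS ceiling of the node serving it.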

Which is best?

We are often asked which approach is best. The answer, as usual, depends on the needs of the data center and the specific applications that are running. Most data centers, while performance-constrained, will rarely exceed the baseline performance of a feature-rich all-flash array. Also, most organizations will take great comfort in the availability of the feature sets they have become accustomed to from legacy hard disk arrays.

There are environments with a need for more than a half million IOPS, but it's how those IOPS are needed that will help determine the best system for a particular data center. If the need for performance is distributed across more than a few workloads, the all-flash systems that can provide scale-out linear performance growth are ideal.

If the environment has a single workload that needs more than half a million IOPS, then the performance-focused systems are needed. As stated above, these systems can provide millions of IOPS to a single workload.
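The selection criteria above can be captured as a small decision helper. This is a sketch of the article's reasoning, not a formal sizing tool; the function name and the half-million IOPS threshold are taken from the text, while the workload-count cutoff is an assumption for illustration.

```python
def recommend_array(total_iops_needed: int,
                    workload_count: int,
                    max_single_workload_iops: int) -> str:
    """Hypothetical helper encoding the article's selection criteria."""
    BASELINE_IOPS = 500_000  # roughly the ceiling of a feature-rich array

    if max_single_workload_iops > BASELINE_IOPS:
        # One workload needs raw speed: performance-focused design required.
        return "performance-focused"
    if total_iops_needed > BASELINE_IOPS and workload_count > 2:
        # Demand is high but spread across workloads: scale-out works.
        return "scale-out feature-rich"
    # Baseline performance covers most data centers.
    return "feature-rich"

print(recommend_array(600_000, 1, 600_000))   # -> performance-focused
print(recommend_array(900_000, 12, 90_000))   # -> scale-out feature-rich
print(recommend_array(300_000, 5, 80_000))    # -> feature-rich
```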

Middle ground?

Is there room in the middle? Does a storage system exist that can meet the needs of a performance-demanding workload, yet still provide the feature-rich environment that more traditional applications require? There are several vendors that provide this class of solution. This type of system must be designed first as a performance-focused system, then have software added to it. While the addition of that software will add some latency, it will not impact most applications. These systems typically have performance to spare.

This software can be added in several ways. Some vendors provide an appliance that the performance-focused system can be connected to, allowing it to take advantage of all the features the appliance provides. This storage virtualization approach also allows the all-flash array to be somewhat integrated from a software services perspective.

Other vendors have the ability to load storage software onto a co-processor within the flash array itself. This provides a tighter integration experience and saves the cost of an external appliance.

Finally, all of these hardware-focused systems could work with any of the software-defined storage solutions on the market today, including converged solutions that run within the hypervisor architecture. The key, though, is to make sure the software-defined solution can support external, shared storage (not all do).

When combining a hardware-focused solution with either an appliance or a hypervisor that delivers the storage services, one big challenge remains: the hardware-focused flash solution must be delivered at a price point (including software) in the same range as the feature-rich solutions described above. In most cases, the feature-rich solutions are still the most cost-effective, and again, 400,000 IOPS is more than enough for most data centers.

All-flash arrays are becoming mainstream. Many vendors in the space claim price parity with "performance-focused" hard drive arrays, meaning arrays from name-brand vendors that use 15K RPM drives. This claim is generally true, so any data center looking to buy a performance-focused disk array should seriously consider an all-flash array.

The choice within the all-flash segment depends largely on the needs of the data center. For most data centers, the feature-rich solutions will be all they need. But it may be worth the investigative step to confirm that, and then to determine whether a scale-up or scale-out system is needed.

This was last published in April 2014

Join the conversation

1 comment


I think the performance vs functionality trade-off is a short-term issue. All of the start-up vendors are aggressively working on replication, snapshots, dedupe, HA, cloning, etc. In fact, most of them will claim they already support nearly all of the critical features that a storage manager would need. The more interesting question is what is the performance impact of using some of these features? For example, turning on compression and inline deduplication can cut overall performance by 50% or more in some products, which can dramatically change the ROI of all-flash arrays. Storage architects looking to evaluate flash and hybrid arrays should be using products like Load Dynamix to make vendor purchasing and deployment decisions. They offer an I/O workload modeling and storage performance validation solution that accurately reflects your real-world application workloads. This will allow you to see the performance characteristics of any system that you choose to evaluate under your specific workloads. You can easily see the impact of compression and dedupe. Use Load Dynamix to keep your vendors honest. Trust, but verify!