
Commodity storage has its place, but an all-flash architecture thrills

Some IT folks are trying to leverage commodity servers and disks with software-implemented storage services. But others want an all-flash architecture.

Every day we hear of budget-savvy IT folks attempting to leverage commodity servers and disks by layering on software-implemented storage services. At the same time, and in some of the same data centers, highly optimized, flash-fueled acceleration technologies are racing in with competitive performance and compelling price comparisons. Architecting IT infrastructure to balance cost against capability has never been easy, but the differences and tradeoffs between these two storage approaches are becoming extreme. It's easy to wonder: Is storage going commodity or custom?

One of the drivers for these trends has been with us since the beginning of computing: Moore's famous law is still delivering ever-increasing CPU power. Today, we see that glut of CPU muscle being recovered and applied to power increasingly virtualized and software-implemented capabilities. Last year, for example, the venerable EMC VNX line touted a multi-year effort to make its controllers truly "multi-core," which is to say they can now take advantage of plentiful CPU power to deliver new software-based features. The trend also shows up in the current vendor race to roll out deduplication: although software-based dedupe requires significant processing, cheap extra compute is enabling wider adoption.
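To see why dedupe leans so heavily on spare CPU cycles, consider a minimal, illustrative sketch (not any vendor's implementation): every incoming block must be fingerprinted with a cryptographic hash before the system can decide whether it has already been stored. The block size, hash choice and function names below are assumptions made for illustration only.

```python
import hashlib

def dedupe_blocks(data: bytes, block_size: int = 4096):
    """Split data into fixed-size blocks and store each unique block once,
    keyed by its SHA-256 fingerprint. Returns the block store plus the
    ordered list of fingerprints needed to reconstruct the original data."""
    store = {}    # fingerprint -> block bytes (unique blocks only)
    recipe = []   # ordered fingerprints describing the original stream
    for offset in range(0, len(data), block_size):
        block = data[offset:offset + block_size]
        fp = hashlib.sha256(block).hexdigest()  # the CPU cost paid per block
        store.setdefault(fp, block)
        recipe.append(fp)
    return store, recipe

# Example: a stream with heavy repetition dedupes to a handful of blocks.
stream = (b"A" * 4096 + b"B" * 4096) * 100
store, recipe = dedupe_blocks(stream)
print(len(recipe), "logical blocks,", len(store), "unique blocks stored")
```

Hashing every block is cheap work per call but enormous in aggregate at array scale, which is exactly the kind of load that surplus controller cores can now absorb.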

In cloud and object storage, economics trump absolute performance, and capacity-oriented, software-implemented architectures are popping up everywhere. Still, competitive latency matters for many workloads. When performance is a top requirement, optimized solutions that leverage specialized firmware and hardware have an engineered advantage.

The race to all-flash first place

For maximum performance, storage architects are shifting rapidly toward enterprise-featured solid-state solutions. Among vendors, the race is on to build and offer the best all-flash solution.

Simply adding flash to a traditional storage array, or even adding flash cache to the controllers and networking layers, certainly speeds things up. Clever approaches to tiering and caching in hybrid solutions can bring flash-level performance to a wide set of workloads, stretching the flash investment broadly. But to truly maximize IOPS and minimize latency across all I/O, vendors need to build solutions that are fully integrated and designed specifically for the solid-state world -- in other words, an all-flash, "flash-first" solution.
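As a rough illustration of the hybrid idea (a toy model, not any vendor's caching algorithm), the sketch below promotes frequently read blocks from a slow "disk" tier into a small "flash" tier. The capacity limit, promotion threshold and class names are assumptions invented for this example.

```python
from collections import Counter

class HybridTier:
    """Toy hot-block promotion: blocks read at least `threshold` times are
    promoted into a size-limited 'flash' tier; everything else stays on 'disk'."""
    def __init__(self, flash_capacity: int = 4, threshold: int = 3):
        self.flash = {}        # block_id -> data, fast tier
        self.disk = {}         # block_id -> data, slow tier
        self.hits = Counter()  # access counts per block
        self.flash_capacity = flash_capacity
        self.threshold = threshold

    def write(self, block_id, data):
        self.disk[block_id] = data

    def read(self, block_id):
        self.hits[block_id] += 1
        if block_id in self.flash:
            return self.flash[block_id]          # served at flash latency
        data = self.disk[block_id]
        if (self.hits[block_id] >= self.threshold
                and len(self.flash) < self.flash_capacity):
            self.flash[block_id] = data          # promote a hot block
        return data

tier = HybridTier()
for i in range(10):
    tier.write(i, f"block-{i}".encode())
for _ in range(5):            # block 0 becomes hot and gets promoted
    tier.read(0)
print(sorted(tier.flash))     # -> [0]
```

The point of the sketch is the limitation it exposes: only the blocks the algorithm happens to promote see flash latency, which is why fully flash-first designs appeal when every I/O has to be fast.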

Solutions purpose-built for flash can still specialize in different ways. From a recent multivendor discussion hosted at Taneja Group, we learned that EMC's XtremIO was built for performance and scale, as well as consistency of performance over time, which is key for real-world production workloads. Pure Storage was designed for performance and cost efficiency through its effective leverage of consumer-grade flash. And Kaminario -- posting impressive Storage Performance Council benchmarks -- aimed at checking off every flash storage requirement with a scale-up-and-out, variable-block-size architecture.

As the exception to the rule of having to build something new just for flash, Hewlett-Packard introduced an all-flash 3PAR version it claims competes in the top-end, flash-first category, based on a custom ASIC presciently designed for any speed of media, flash included. In contrast, Violin Memory started by building its own flash modules, eschewing the constraining hard-drive-emulating solid-state drive formats and interfaces, and is now working back toward layering on enterprise features.

Common to these all-flash architectures is an end-to-end, optimized and integrated solution in which each component is purposefully composed for flash I/O. Specific designs include firmware and hardware in the form of chips, cards, modules, caches, controllers and chassis and, of course, layers of software. Still, many designs rely on some commodity sourcing of components, which leaves plenty of room to push performance further for anyone willing to take the time (and investment) to build every part of the stack, top to bottom, for speed.

One potential game-changing solution might soon come from Avalanche Technology, which is readying a complete top-to-bottom array purpose-built for solid-state storage. Like Violin, the company is working up from the chip level, and it also promises full enterprise features built in from the ground up through tightly integrated software. It claims the impending NAND flash version of its array has the potential to drop latency by 40% and halve the storage footprint compared to other all-flash solutions.

When flash will seem slow

All-flash, flash-optimized solutions are winning the performance race today, but on the horizon we see the inevitable next generation of competition. For starters, new solid-state technologies in development (spin-transfer torque MRAM, HP's Memristor and so on) will exceed NAND flash in several dimensions, not the least of which is performance. These technologies are so much faster than NAND flash that many of today's all-flash array vendors may need to go back to the drawing board to compete.

Second, if these speedy memory solutions are persistent and meld with storage, we expect game-changing transformations in computing itself. This evolution won't happen overnight, but we already see small Internet-of-Things systems being designed with persistent memory instead of memory plus storage. Storage may just come to converge completely with servers … or is it the other way around?

Given the current moves toward scale-out big data and cloud platforms, and consolidated data center core performance, I don't know whether it's time in the broader IT cycle for compute to move back to the edges, whether the network will again become the computer, or whether we'll see a "black hole" convergence onto a new style of mainframe. I do know that when it comes to performance, optimized architecture and design always matter.

About the author:
Mike Matchett is a senior analyst and consultant at Taneja Group.

This was last published in September 2014
