
Storage magazine

NVMe flash storage is shaking things up



The challenges of flash enterprise storage and how to beat them

Blazing fast all-flash data centers raise important issues. It's expensive to equip DR sites and then there's all that stale data. Find out how to counteract these issues.

Since all-flash arrays first came to market, IT professionals have considered the inherent possibilities in the concept of a flash-only data center. Such a data center would instantly respond not only to production databases but to unstructured data requests, archive recalls and even backup and restore tasks. The idea is appealing, but is the all-flash data center realistic?

An all-flash data center seems reasonable for several good reasons. First, flash enterprise storage has dropped in price over the last five years. Second, data reduction technologies, such as deduplication and compression, work well in a flash environment because the excess performance of flash allows these processes to run with almost no noticeable impact on users and applications. The result is the typical all-flash array can benefit from a three- to five-times increase in data efficiency.
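The data-efficiency math can be sketched with illustrative (not quoted) prices, assuming a 4:1 reduction ratio in the middle of the three- to five-times range:

```python
def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per logically stored GB after deduplication and compression."""
    return raw_cost_per_gb / reduction_ratio

# Hypothetical prices for illustration only: flash at $0.50/GB raw with
# 4:1 data reduction versus high-capacity HDD at $0.03/GB with none.
flash_eff = effective_cost_per_gb(0.50, 4.0)  # $0.125 per effective GB
hdd_eff = effective_cost_per_gb(0.03, 1.0)    # $0.030 per effective GB
print(f"flash ${flash_eff:.3f}/GB vs. hdd ${hdd_eff:.3f}/GB")
```

Note that even a 4:1 reduction doesn't necessarily close the gap with raw hard-disk pricing, which is the cost problem the rest of the article works around.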

The third reason for an all-flash data center is that flash drives and modules can deliver unprecedented densities -- 16 TB drives are common, and 50 TB and higher drives are on the near horizon. For data centers facing floor space issues, flash density can save millions of dollars in additional data center construction costs.

Probably the most important justification for flash enterprise storage is its high performance. In many data centers, all-flash eliminates the need for additional performance tuning. Most all-flash arrays deliver more performance than the typical data center needs.

Why all-flash doesn't make sense

For all of its strengths, all-flash design has flaws. Despite the decreases in flash cost and the addition of data efficiency techniques, flash enterprise storage pricing is still high compared with high-capacity HDDs. In addition, while most all-flash array vendors have the ability to replicate to a secondary site, they must replicate to another similar system running the same hardware. If the all-flash array vendor only provides all-flash systems, that means the disaster recovery (DR) site is also all-flash, and it mostly sits idle waiting for a disaster.

Potentially the biggest challenge to an all-flash data center is that as much as 80% of the data in production storage sits idle with no users accessing it for years. Having cold and stale data sit on a pricey storage system is difficult to justify.

Overcoming all-flash challenges

Given that flash enterprise storage is still priced at a premium, the two core problems mentioned above must be solved for an all-flash data center to make sense. First, an organization needs to come up with a viable DR strategy that doesn't require a flash system at the DR site. Second, it has to address the reality that most data on primary storage is old and no longer accessed.


There are several ways to deal with the DR challenge. First, work with a storage system vendor that can replicate to a hard-disk-based DR system. In most cases, these vendors -- Dell EMC, Hitachi Vantara, IBM and Western Digital's Tegile -- first came to market with hybrid flash arrays and later transitioned to all-flash. They typically use the same storage software for hybrid and all-flash and can replicate from an all-flash primary to a hybrid secondary storage system. This configuration reduces the cost of the storage system in the DR site while delivering near-flash performance in the event of a disaster.

Another option is to use third-party software to perform the replication of data to the DR site. Most software-based DR products -- Zerto Virtual Replication and Carbonite's DoubleTake, for instance -- can replicate from any storage system to any storage system, so the DR site storage system could even be hard-disk-based.

Some replication software can also run in the cloud or at a managed service provider, creating a third option, cloud-based DR. These applications could replicate data to the cloud, eliminating not only the need for a storage system in the DR site but also the DR site itself, because some of these replication products can start virtual instances of critical applications in the cloud.

The cloud option

One approach that addresses both the disaster recovery and the stale data issues of the all-flash data center is to go with a hybrid cloud architecture. With these designs, an all-flash appliance is put on site, and all data replicates to the cloud.

The on-premises flash appliance is essentially a cache. It caches new and updated data writes and caches the most recently accessed data reads. It then replicates all data to the cloud as quickly as possible. Once data is in the cloud, it replicates to another data center owned by the provider. Some of these offerings also provide the ability to instantiate applications as virtual machines in the cloud. The challenge with the hybrid cloud option is that the latency between the on-premises copy of data and the cloud is relatively high, and any cache miss will be noticeable.
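The caching behavior described above can be sketched as a write-back cache with an asynchronous replication queue. This is a minimal illustration, not any vendor's implementation; the class and method names are invented, and the cloud fetch is left as a stub:

```python
from collections import OrderedDict
from queue import Queue

class HybridCloudCache:
    """Sketch of an on-premises flash appliance fronting cloud storage:
    writes are cached and queued for replication, reads are served from
    flash on a hit and fetched over the WAN on a miss."""

    def __init__(self, capacity_blocks: int):
        self.cache = OrderedDict()        # block_id -> data, in LRU order
        self.capacity = capacity_blocks
        self.replication_queue = Queue()  # drained asynchronously to the cloud

    def write(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        self.replication_queue.put((block_id, data))  # replicate ASAP
        self._evict()

    def read(self, block_id):
        if block_id in self.cache:        # cache hit: flash latency
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = self._fetch_from_cloud(block_id)  # cache miss: WAN latency
        self.cache[block_id] = data
        self._evict()
        return data

    def _evict(self):
        while len(self.cache) > self.capacity:
            # Drop the least recently used block; the cloud holds a full copy.
            self.cache.popitem(last=False)

    def _fetch_from_cloud(self, block_id):
        raise KeyError(block_id)  # placeholder for the cloud GET
```

Because eviction only discards blocks the cloud already holds, the flash tier can stay small while the cloud copy remains complete -- which is exactly why a miss pays the full WAN round trip.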

The latency problem is essentially a speed-of-light issue. No matter how fast the connection, it takes a certain amount of time for bits of data to reach the cloud data center, which may be thousands of miles away. A way to reduce latency is to cache data at an intermediary data center a few hundred miles away -- essentially a multi-tier cloud service.
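A back-of-the-envelope calculation shows why distance dominates: light in optical fiber covers roughly 200 km per millisecond, so propagation delay alone puts a floor under round-trip time, and queuing and routing hops only add to it. The distances below are illustrative:

```python
FIBER_KM_PER_MS = 200.0  # light travels ~200 km/ms in fiber (~2/3 of c)

def min_round_trip_ms(distance_km: float) -> float:
    """Best-case round-trip time from propagation delay alone."""
    return 2.0 * distance_km / FIBER_KM_PER_MS

print(min_round_trip_ms(3000))  # distant cloud region: >= 30 ms round trip
print(min_round_trip_ms(300))   # regional intermediary: >= 3 ms round trip
```

An order-of-magnitude latency reduction from the intermediary tier is what makes a cache miss tolerable rather than noticeable.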

In this design, the data center would have an all-flash appliance that stored newly written and modified data as well as most frequently accessed data. The regional cloud data center would store a copy of that data plus all inactive data. The regional cloud data center would then replicate the data to a public cloud provider for DR purposes.

Stale data and hybrid systems

Stale data is data users no longer access, but that the organization wants to or must retain for a period of time. Most organizations can't easily identify and manage this data, and it just sits on primary storage. If that primary storage is an all-flash array, then the enterprise is wasting IT dollars.

Organizations can solve the stale data problem several ways. They can use a hybrid array that automatically moves data from flash enterprise storage to hard-disk storage as the flash storage fills up. This is similar to the hybrid system that can overcome the DR bottleneck.

The concern with hybrid systems is the performance delta when there's a flash miss, and it's necessary to access data from the hard-disk tier. There's a performance drop between the two tiers, true, but the question is will the users or applications feel the impact? In most cases, the answer is no. But the concern is legitimate enough that many data centers have decided to go to all-flash systems for primary storage.

It's essential to understand that hybrid system performance when retrieving data from the flash tier is nearly the same as, if not identical to, that of an all-flash system. The key is to take advantage of the lower cost of flash storage to create a larger flash tier. When hybrid systems first came to market, flash was expensive, and the flash tier was relatively small, typically 3% to 5% of total capacity. As a result, the chance of a flash tier miss was relatively high. Now, thanks to lower flash storage prices, organizations can decide to make the flash tier 25% to 50% of capacity. Flash tiers of this size should be able to store all data accessed within the last two years. They also greatly reduce the possibility of a flash miss while still allowing an organization to benefit from more economical hard-disk-based pricing.
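The sizing argument follows directly from the 80% stale-data figure cited earlier: if roughly 80% of production data is idle, the flash tier only needs to hold the remaining 20% to cover every active byte. This is a simplification that ignores access skew within the active set, but it makes the tier-size comparison concrete:

```python
def min_flash_fraction(stale_fraction: float = 0.80) -> float:
    """Smallest flash tier (as a fraction of total capacity) that can
    hold all active data, given the fraction of data that sits idle."""
    return 1.0 - stale_fraction

active = min_flash_fraction()  # ~0.2: about 20% of capacity is active
print(0.05 >= active)  # False: an early 3%-5% flash tier misses often
print(0.25 >= active)  # True: a 25%-50% tier covers the active set
```

The early hybrids' 3% to 5% tiers fell well short of the active set, which is why flash misses were common; today's 25% to 50% tiers clear it with room to spare.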

The second option is to combine a storage system dedicated to all-flash with an object storage system. Move data not actively accessed to the object storage system based on user-defined policies. Enterprises can set different policies for different data sets. For example, all office productivity documents might be moved to the object store 90 days after access, but images can have a policy to leave them on primary storage for a year. Using an object storage system requires data management software that can identify and move old data to the object storage system.

Selecting an application to do this identification and movement is an extra step, but it creates an excellent foundation for data management. This is the least expensive way to implement flash. Essentially, the data center is all-flash for primary storage and hard-disk-based object storage for all secondary. The data management software slows the growth of primary storage and reduces the investment in protection storage.

It's all about data management

The all-flash data center is a possibility, but it's hard to ignore the two fundamental problems it creates: the cost of equipping a DR site and all that stale data. Organizations could leave the stale data on all-flash if they're willing to absorb the cost. However, the DR issue can't be ignored. Having an all-flash array sitting mostly idle at a remote data center is a waste of money. A more realistic approach is to create a strategy where all active and near-active data is on all-flash. When users don't access data for a year, the system automatically archives it to hard disk drives.

A hybrid system that includes flash enterprise storage solves both problems, automatically moving data to hard-disk storage as it ages. A data management strategy solves the stale data problem and simplifies the DR problem. A hybrid cloud strategy is also an interesting alternative and is the only way the data center can legitimately achieve all-flash status.

Which choice makes the most sense for the data center depends on the organization. Most resist a data management strategy, finding it hard to maintain. A hybrid approach automates data management to some degree and may be more realistic than all-flash for many organizations.

Article 3 of 6
This was last published in August 2018


Join the conversation


What sorts of challenges do you foresee for your organization from implementing all-flash storage in its data center?
But my view is that, since data is of value and AFA and tiered storage are required, most stale data can probably even go to tape. The reason is that streaming data is inundating us from all directions at higher speed and volume than we can imagine -- tweets, WhatsApp and so on.
Hi RajaKT! Here's a response from George Crump:

"But clearly not every company has a Twitter or WhatsApp use case. In fact, I would say that the overwhelming majority do not. And even in those use cases, not all data needs to be streamed."
Is there a disadvantage to flash apart from cost?
A few disadvantages I see are:

- Wear leveling, whether static or dynamic, matters to drive lifespan: the flash controller uses a sophisticated algorithm to distribute write/erase cycles evenly across all blocks in the SSD.
- SSD overprovisioning consumes space; it's designed to minimize the impact of garbage-collection write amplification.
- Write amplification happens because data is read and written at the page level but erased at the block level. Data cannot be updated in place; it must first be erased and then rewritten.
- The shift from floating-gate to charge-trap technology raises its own questions.
- Compression algorithms look appealing but can complicate matters.
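The write-amplification point in that comment can be made concrete: the factor is simply physical NAND writes divided by host writes. The figures below are illustrative, not measured:

```python
def write_amplification_factor(host_writes_gb, nand_writes_gb):
    """NAND writes / host writes. Garbage collection must copy still-valid
    pages out of a block before erasing it, so NAND writes meet or exceed
    what the host actually wrote."""
    return nand_writes_gb / host_writes_gb

# Illustrative: the host wrote 100 GB, but garbage collection relocated
# enough valid pages that the controller committed 250 GB to NAND.
print(write_amplification_factor(100, 250))  # 2.5
```

Overprovisioned capacity exists largely to keep this factor down: spare blocks give the garbage collector room to consolidate valid pages with fewer relocations.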
