Tiered data storage: State of the art

Tiered data storage has quickly become a storage best practice, accelerated by the use of solid-state storage. We survey how the major vendors leverage solid-state to implement effective storage tiering.

Technology developments follow a predictable evolution. New functionality starts as an "exclusive" competitive feature offered by one or just a few companies, followed by vigorous industry competition among highly differentiated offerings, and finally inclusion in the "baseline" feature set of most products. Storage tiering, and more specifically automated storage tiering, has become a baseline element. Even so, significant differentiation remains, giving storage managers a mouth-watering choice when evaluating competing solutions. That differentiation matters most to organizations that seek best-of-breed products and for which tiered data storage is a significant requirement.

All tiering offerings have certain things in common. First, and at a minimum, the array hosts multiple physical media types, usually including solid-state drives, high-performance disk (either Fibre Channel or SAS) and high-capacity disk, with plenty of permutations of those basic components. Second, systems include software that embodies rules and methods for moving data from one physical tier or media type to another. Even though these features are common at a base functional level, there's enormous variation in the way they're implemented.

Solid-state storage drives tiering

A key technical driver for tiering adoption has been solid-state storage or solid-state drives (SSDs). Early tiering efforts around Tier 1 (Fibre Channel), Tier 2 (SAS) and Tier 3 (SATA) failed because organizations couldn't accurately provision for hot versus cold data. Thus, many tiered arrays remained 80% Tier 1 to ensure adequate performance. The marginal cost savings of the remaining 20% didn't justify the added complexity and effort. SSD has been a game-changer in that it delivers huge IOPS performance gains in a very small footprint (albeit an expensive one). At this point, nearly all storage vendors agree that best-practice architectures include a small percentage of solid-state storage accompanied by high-capacity hard disk drives (HDDs), resulting in far fewer spindles. The aggregate throughput is often higher with a lower acquisition cost.

For the purposes of this discussion, we'll draw a distinction between SSD and flash cache, though the technology is essentially the same. SSD can be thought of as a distinct Tier 0, available for application provisioning like any other storage media. Flash cache is general purpose in nature, enhancing the entire array. Most vendors support both types, and a majority also support a "hybrid pool" in which LUNs may consist of both SSD and various types of HDDs.
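
To make the distinction concrete, here's a minimal sketch (hypothetical names in Python, not any vendor's API) of the difference between a dedicated Tier-0 SSD pool and a hybrid pool whose LUNs can span media types:

```python
# Minimal sketch (hypothetical names, not any vendor's API) of the
# difference between a dedicated Tier-0 SSD pool and a hybrid pool
# whose LUNs can span media types.
from dataclasses import dataclass

@dataclass
class MediaGroup:
    media: str          # "ssd", "fc", "sas" or "sata"
    capacity_gb: int

@dataclass
class Pool:
    groups: list        # media groups the pool is built from

    def media_types(self):
        return {g.media for g in self.groups}

# Tier-0 style: an all-SSD pool provisioned to a specific application.
tier0 = Pool(groups=[MediaGroup("ssd", 800)])

# Hybrid pool: LUNs carved from it can hold data on SSD and HDD alike;
# the array's tiering software decides which pieces live where.
hybrid = Pool(groups=[MediaGroup("ssd", 400),
                      MediaGroup("sas", 4000),
                      MediaGroup("sata", 20000)])

print(sorted(tier0.media_types()))   # ['ssd']
print(sorted(hybrid.media_types()))  # ['sas', 'sata', 'ssd']
```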

How vendors leverage flash for tiering

EMC Corp. recommends a "flash first" approach when introducing solid-state storage into an environment. On the company's VNX line of arrays, the product is called Fully Automated Storage Tiering (FAST) Cache. It's distinct from DRAM cache and sits between DRAM and the HDDs in the data path. The company has found that as little as 5% of total storage capacity in the form of FAST Cache can yield a 300% to 600% overall performance improvement. Moreover, EMC has found that a 5% slice of flash permits a two-thirds reduction in spindle count when substituting SATA for Fibre Channel. The result is better performance, lower acquisition cost and lower operational costs -- what EMC calls the "triple play of storage."

NetApp Inc. prescribes three locations for flash. The first is at the host level, using its Flash Accel product. The second is NetApp's Flash Cache, which resides in the storage controller. The third consists of flash pools, or hybrid aggregates; this last is a Tier-0 implementation that can be directed to specific applications. NetApp's approach differs from EMC's in that the company recommends working bottom-up -- start with cache at the storage layer and work your way up toward the host as additional performance is required. Nevertheless, NetApp doesn't recommend replacing flash pools with flash cache. When it comes to tuning flash, NetApp considers a 90% cache hit rate optimal; a relatively low hit rate, say 50%, can indicate an insufficient amount of cache. When multiple layers of cache are implemented, the highest layer (the one closest to the server) serves the necessary I/Os first. The fundamental philosophy is to store data on the lowest-cost devices and allow the system to elevate it to the appropriate tier to meet performance requirements.
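
To see why the hit rate matters so much, consider a rough back-of-the-envelope calculation. The latency figures below are illustrative assumptions, not NetApp specifications:

```python
# Rough illustration of why the cache hit rate matters; the latency
# figures are illustrative assumptions, not NetApp specifications.
FLASH_LATENCY_MS = 0.2   # assumed read latency when served from flash
DISK_LATENCY_MS = 8.0    # assumed read latency when served from HDD

def effective_latency_ms(hit_rate):
    """Average read latency for a given flash cache hit rate."""
    return hit_rate * FLASH_LATENCY_MS + (1 - hit_rate) * DISK_LATENCY_MS

for rate in (0.5, 0.9):
    print(f"{rate:.0%} hit rate -> {effective_latency_ms(rate):.2f} ms average read")
# 50% hit rate -> 4.10 ms average read
# 90% hit rate -> 0.98 ms average read
```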

Hewlett-Packard (HP) Co., with its Ibrix series of scale-out NAS systems, takes a more traditional approach to tiering. In those arrays, SSD functions as cache, and storage managers can implement physical tiers consisting of Fibre Channel, SAS and SATA HDDs. HP's enterprise 3PAR arrays use "sub-LUN" tiering in the company's Adaptive Optimization solution. Sub-LUN tiering essentially creates hybrid LUNs that exploit the performance of SSDs; these hybrid LUNs can include up to three physical tiers.

Both Hitachi Data Systems and EMC take physical tiering one step further in their Virtual Storage Platform (VSP) and VMAX systems, respectively. Both arrays are capable of including third-party arrays as tiers in their system architectures. (NetApp can also virtualize third-party systems behind its V-Series controllers.) EMC refers to this as "tier 4" in its Federated Tiered Storage offering, which expands the FAST capabilities that encompass both SSD and numerous HDD options. The EMC VMAX group recommends what it describes as the "80/20 I/O skew rule" to size cache. This rule assumes that at any given time, only 20% of volumes are "hot," and within that 20% of hot volumes, only 20% of the data is hot. That equates to 4% of total capacity, which is the recommended place to start when sizing array SSDs. Interestingly, the various sizing guidelines vendors use all arrive close to the 5%-of-capacity mark; with so many converging on the same number, there's likely some validity to it.
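
As a quick worked example of the 80/20 skew rule described above, the sizing arithmetic simply multiplies the two fractions together:

```python
# Back-of-the-envelope SSD sizing per the "80/20 I/O skew rule": 20% of
# volumes are hot, and within those, 20% of the data is hot.
def ssd_starting_point_tb(total_capacity_tb,
                          hot_volume_fraction=0.20,
                          hot_data_fraction=0.20):
    """Suggested initial SSD capacity under the rule of thumb above."""
    return total_capacity_tb * hot_volume_fraction * hot_data_fraction

print(ssd_starting_point_tb(100))  # 4.0 -> 4 TB of SSD for a 100 TB array (4%)
```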

Hitachi, while offering flash, SSD and third-party tiers, suggests a more top-down approach to using flash to improve performance. Hitachi's Dynamic Tiering strategy is based on the assumption that new data is usually the hot data. Therefore, it moves new data into flash initially and only moves it to a lower tier when hotter data displaces it. It's worth noting that both Hitachi and EMC extend their tiered data storage offerings to mainframe environments as well.

Under the covers of storage tiering

Although vendor hardware architectures differ as described, the underlying drives and boards are often quite similar. Hitachi, however, has specialized ASICs and processors that it refers to as hybrid control units. The ASICs are used as data movers, while the quad-core Intel processors track metadata. The philosophy is to move as much of the workload as possible into the hardware layer for maximum performance.

What is state-of-the-art storage tiering?

  • Tiered storage strategies must encompass at least three drive types, including solid-state storage.
  • Flash memory is an integral part of the offering.
  • Sophisticated algorithms identify "hot" data and move it automatically to the appropriate tier.
  • Storage arrays can simultaneously be optimized for cost and performance.
  • Optimization decisions are largely automated to minimize administrative intervention.

The automated data movement software that has made tiering a practical solution is the most significant point of differentiation, and it's where the "art" comes into "state of the art." For example, Hitachi combines its hardware architecture with an object-based file system to track metadata, which it finds to be the most efficient approach. Data movement is based on policies and usage characteristics. Data is moved in 42 MB pages, which fit neatly into cache sizes. Hitachi uses a "set and forget" philosophy to minimize manual effort, but data can be manually promoted to higher tiers in cases where usage can be predicted, such as month-end processing of certain specific data sets.
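
As a simplified illustration of what such policy-driven movement looks like (a sketch of the general idea, not Hitachi's actual Dynamic Tiering algorithm), a tiering engine might rank fixed-size pages by recent access counts and fill the fastest tier first:

```python
# Simplified sketch of policy-driven page placement; not Hitachi's actual
# Dynamic Tiering algorithm. Pages are ranked by recent access counts and
# the hottest pages land on the fastest tier until its capacity is used.
PAGE_MB = 42  # Hitachi-style page granularity

def place_pages(page_access_counts, tier_capacities_mb):
    """Assign pages (id -> recent access count) to tiers ordered fastest first."""
    placement = {}
    ranked = sorted(page_access_counts, key=page_access_counts.get, reverse=True)
    tier = 0
    free_mb = tier_capacities_mb[0]
    for page in ranked:
        # Spill to the next (slower) tier once the current one is full.
        while free_mb < PAGE_MB and tier < len(tier_capacities_mb) - 1:
            tier += 1
            free_mb = tier_capacities_mb[tier]
        placement[page] = tier
        free_mb -= PAGE_MB
    return placement

# Three pages; the SSD tier has room for one page, the SAS tier holds the rest.
print(place_pages({"p1": 500, "p2": 20, "p3": 300}, [42, 10_000]))
# {'p1': 0, 'p3': 1, 'p2': 1}
```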

When and why data gets moved

Storage managers might assume that hot data is unpredictable and can emerge at any time, and that data movement should therefore be frequent. In practice, most data movement schemes operate over a matter of hours and can take up to a day, meaning that data movement between tiers is more trend-based than an immediate reaction to conditions. For that reason, HP suggests cache as the best technology for reacting to immediate, unpredictable I/O bursts: if unpredictability is high, IT managers may want to beef up cache rather than hybrid pools.

When to move data is an important aspect of tuning these systems appropriately. EMC's VNX default data movement cycle is once a day, though users can set policies for more frequent moves. HP's Ibrix systems also move data on a daily basis, but can move it as often as hourly. Data is moved based on scans of data segments for metadata that has become hot; although scans of segments can run in parallel, the company advises that too many scan jobs can unproductively consume back-end IOPS. HP's 3PAR arrays are capable of "non-disruptive" data movement (in reality, self-throttling that's transparent to the host and application), and data "heat" sampling can occur as often as every 30 minutes. Even so, HP recommends limiting data movement to only the frequency necessary.

At the other end of the spectrum, both the EMC VMAX and NetApp systems are designed for frequent data movement. VMAX moves 768 KB data extents, while NetApp moves 4 KB blocks. Because the number of I/Os needed to relocate such small amounts of data is low, the disruption in the grand scheme of things is minimal. In addition, EMC permits data to be "pinned" to cache, moved manually or scheduled for specific windows, e.g., between midnight and 2 a.m.
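
As a rough illustration of why small relocation granules keep movement cheap (the figures below are assumptions for illustration, not vendor measurements), relocating an extent costs roughly one back-end read plus one write of that extent:

```python
# Illustrative arithmetic (assumed figures, not vendor specifications):
# relocating an extent costs roughly a read plus a write of that extent.
def relocation_traffic_mb(extent_kb, extents_moved):
    """Approximate back-end data read + written to relocate the given extents."""
    return 2 * extent_kb * extents_moved / 1024

# Promoting 1,000 VMAX-style 768 KB extents vs. 1,000 NetApp-style 4 KB blocks.
print(relocation_traffic_mb(768, 1000))  # 1500.0 MB of back-end traffic
print(relocation_traffic_mb(4, 1000))    # ~7.8 MB of back-end traffic
```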

Data types that tier best

When matching storage tiering with use cases, nearly all vendors point to virtual desktop infrastructure (VDI) and virtual server environments. In virtual environments with shared storage, NetApp recommends doubling the amount of cache that one would otherwise allocate. The EMC VNX group describes the best use cases as "skewed data sets" where a subset of data is hot at any given time. In addition to VDI, this might include online transaction processing (OLTP) applications. Web-based file serving is another good target, as certain pages may be hit repeatedly compared to others.

Tiered data storage strategies that include SSD or other forms of flash for optimum performance at the lowest aggregate price will only become more robust. Though it's now a baseline function in most storage arrays, tiering remains one of the key technology considerations for storage managers. And because solid-state technology is fundamentally the same as server memory, it follows a Moore's Law price/performance curve; cost per IOPS will fall significantly in the coming years.

About the author:
Phil Goodwin is a storage consultant and freelance writer.

This was first published in December 2012
