Caching vs. tiering: Comparing storage optimization techniques

There are many factors to consider when choosing between caching and tiering storage techniques to optimize storage. Knowing how and when to use either can make a big difference.

The difference between the fastest and slowest server storage is huge. Response time for an Ethernet-connected iSCSI SAN system can be six or seven orders of magnitude greater than that of RAM or 3D XPoint. What takes nanoseconds for RAM can take less than a microsecond for 3D XPoint or PCI Express nonvolatile memory express SSDs, milliseconds for SAN storage and as much as 100 milliseconds for disk-based SAN or NAS systems. From the fastest to the slowest, that's a factor of millions.
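
To put that span in perspective, a quick back-of-the-envelope calculation in Python -- using illustrative, order-of-magnitude latency figures, not vendor benchmarks -- shows the ratios involved:

    # Illustrative, order-of-magnitude latencies in seconds -- not benchmarks.
    latencies = {
        "DRAM": 100e-9,                  # ~100 ns
        "3D XPoint / NVMe SSD": 1e-6,    # ~1 microsecond
        "iSCSI SAN (flash)": 1e-3,       # ~1 ms
        "Disk-based SAN/NAS": 100e-3,    # ~100 ms
    }
    fastest = min(latencies.values())
    for name, t in sorted(latencies.items(), key=lambda kv: kv[1]):
        print(f"{name:<22} {t / fastest:>12,.0f}x DRAM latency")

The disk-based line works out to a factor of a million, which is where the "millions of times" figure above comes from.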

Given that disparity, tiering or caching storage optimization techniques can make an enormous difference in an enterprise's active data performance. Deciding which to use is more complicated than simply picking one or the other, however.

For instance, with tiering, new storage systems and data management methods have made the decision more complicated than just dividing data into hot, warm and cold tiers. And with caching, the right approach -- write-around, write-through or write-back, server-side or appliance -- depends on several factors, including the level of reliability and performance you're trying to achieve. Understanding the difference between tiering and caching, and knowing how and where to use one versus the other, is critical to maintaining enterprise-level performance.

Caching options

There are two main types of cache: server-side caches and caching appliances that sit between the server and the permanent data store.

Server-side caching: You can install caching at several points in a server. Some caching software uses a portion of server RAM as cache, providing the fastest speed possible because data moves over the memory bus rather than the PCI Express (PCIe) bus. RAM is thousands of times faster than even the fastest PCIe nonvolatile memory express (NVMe) storage.
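
Conceptually, a RAM-based read cache behaves like the least-recently-used (LRU) map sketched below. This is a minimal illustration, not any vendor's actual caching software; the backing_store callable is a hypothetical stand-in for a slow block-device read:

    from collections import OrderedDict

    class RAMReadCache:
        """Minimal LRU read-cache sketch; backing_store stands in for slow storage."""
        def __init__(self, capacity, backing_store):
            self.capacity = capacity
            self.backing_store = backing_store   # callable: block_id -> bytes
            self.cache = OrderedDict()

        def read(self, block_id):
            if block_id in self.cache:           # cache hit: serve from RAM
                self.cache.move_to_end(block_id)
                return self.cache[block_id]
            data = self.backing_store(block_id)  # cache miss: go to slow storage
            self.cache[block_id] = data
            if len(self.cache) > self.capacity:  # evict the least recently used block
                self.cache.popitem(last=False)
            return data

    # Example: the first read misses and hits the backing store;
    # repeat reads of the same block are served from RAM.
    cache = RAMReadCache(capacity=1024, backing_store=lambda b: f"block-{b}".encode())
    data = cache.read(42)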

Next are DIMM-based products such as 3D XPoint, developed jointly by Intel and Micron, and Diablo Technologies' Memory1. They connect through the memory bus rather than the PCIe bus, yielding performance somewhat less than dynamic RAM (DRAM) but better than anything connecting through the PCIe bus, with capacities 10 or more times that of similarly priced DRAM. After that comes caching based on PCIe NVMe SSDs, as well as host bus adapters (HBAs) and network adapters with built-in cache.

Another consideration with caching is the software required; some approaches are more transparent than others. HBAs and network adapters cache data in internal memory and work at the hardware level or through the adapter's driver software, making the process of setting up caching relatively transparent. But caching that uses system memory, flash-based DIMMs or PCIe NVMe SSDs requires installing caching software at the driver, OS or application layer, introducing a potential point of failure or performance degradation.

The DIMM-based flash caching advantage

The memory bus on server motherboards has much lower latency and higher throughput than the PCI Express bus used for host bus adapters, network adapters and SCSI adapters, or other storage connections such as the SATA bus built into most systems. SSD and other flash memory speeds have reached the point where as few as two PCIe NVMe drives can saturate a PCIe channel.

The next logical step is either a faster PCIe specification, which is at least a year away -- and even then will only double performance -- or using the memory bus to enable faster flash. The latter approach could lead to performance dozens or hundreds of times faster than PCIe. Because many server motherboards have 16 or more DIMM slots, it's generally not difficult to find room for flash DIMMs.

Available DIMM-based flash technologies include Diablo Technologies' Memory1, which was the first to ship, and the more visible 3D XPoint. Both offer flash on DIMM that's cheaper and higher capacity than DRAM, and faster than PCIe-attached flash, with server-side caching as one of the target applications. Other uses include creating larger pools of memory that keep applications completely in memory, countering the huge cost of high-capacity DRAM.

The drivers used for flash-based DIMMs must be able to distinguish between DRAM and flash DIMMs, and ensure that they're used appropriately. Since these technologies are relatively new, you'll want to test your applications and drivers before undertaking a full-scale deployment.

Caching appliances: Caching appliances sit between the server and permanent storage, and can be completely transparent to the application and OS on the server. They don't require opening the server or taking it offline to install new hardware. Performance improvements can be substantial, in the microsecond range, but will not reach the RAM- or DIMM-based flash cache levels.

Tiering options

Levels of tiering storage optimization techniques originally related to the types of hard disks available at the turn of the century, with 15,000 rpm hard disk drives typically serving as tier one, 10,000 rpm drives as tier two and 7,200 rpm drives as tier three. Each tier was slower and less expensive than the one above it. The advent of SSDs resulted in the addition of a tier zero, so existing tiers didn't have to be renamed. Since then, things have gotten muddier: faster tiers using newer, faster SSDs and even DRAM have been created, as well as slower tiers that write to tape or cloud storage.

Storage management or storage virtualization software ensures data is sent to multiple tiers as required. This is done transparently to the server, so files in a single directory can be stored in several different tiers but presented to the operating system as a single contiguous folder.
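
The mapping is conceptually simple: the virtualization layer keeps a per-file (or per-block) location table and resolves every access behind a single namespace. Here's a minimal sketch of the idea; the tier names, mount paths and resolve function are hypothetical illustrations, not any product's actual layout:

    # Minimal sketch of a tier-transparent namespace; tier numbers and
    # mount paths are hypothetical, not any product's actual layout.
    TIER_PATHS = {0: "/mnt/nvme", 1: "/mnt/sas", 2: "/mnt/nearline"}

    location_table = {          # file -> tier, maintained by the tiering engine
        "reports/q1.db": 0,
        "reports/q1_archive.db": 2,
    }

    def resolve(logical_path):
        """Map one logical folder onto whichever tier holds each file."""
        tier = location_table[logical_path]
        return f"{TIER_PATHS[tier]}/{logical_path}"

    # Both files appear in the same directory to the OS,
    # but physically live on different tiers.
    print(resolve("reports/q1.db"))          # /mnt/nvme/reports/q1.db
    print(resolve("reports/q1_archive.db"))  # /mnt/nearline/reports/q1_archive.db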

[Figure: Performance of various storage devices compared]

With tiering, the differentiator is the storage management software that allocates data to the different tiers. One approach writes to the fastest tier first and then moves data that isn't read for a while to slower tiers. An alternative approach prioritizes data based on application type, designated directories or any number of other criteria. Many SAN systems offer some form of automated tiering, which has obvious advantages over manually designating tiers based on volumes or LUNs.

The first automated tiering systems often used a simple aging algorithm where any data that was read by the server was automatically moved to the fastest tier and older files that hadn't been accessed in a while were moved to slower, cheaper storage. Then predictive algorithms grouped files and moved associated files to a faster tier when one file in the group was read. Further refinements have since been added, all intended to yield the best possible performance for the most-accessed files.
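
A simple aging policy of the kind described above can be sketched in a few lines. The threshold, block identifiers and the two movement callables are illustrative assumptions, not any array's actual algorithm:

    import time

    DEMOTE_AFTER = 7 * 24 * 3600   # illustrative threshold: one week of inactivity
    last_access = {}               # block_id -> timestamp of most recent read

    def on_read(block_id, move_to_fastest_tier):
        """Promote on access: any block the server reads goes to the fast tier."""
        last_access[block_id] = time.time()
        move_to_fastest_tier(block_id)

    def age_sweep(move_to_slower_tier):
        """Periodic sweep: demote blocks that haven't been touched in a while."""
        now = time.time()
        for block_id, ts in last_access.items():
            if now - ts > DEMOTE_AFTER:
                move_to_slower_tier(block_id)

    # Example wiring with print stubs standing in for real data movement.
    # (Nothing is old enough to demote immediately after being read.)
    on_read("blk-7", lambda b: print(f"promote {b} to tier zero"))
    age_sweep(lambda b: print(f"demote {b}"))

The predictive refinements mentioned above amount to replacing the per-block bookkeeping with grouped bookkeeping, so one read promotes the whole associated group.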

Tiering systems typically operate at the SAN-system level, meaning different storage systems generally have their own tiering. Data centers with multiple SAN systems from different manufacturers require either an additional storage management application or separate silos of storage for different applications.

Automated tiering generally provides the best performance and is less likely to require the storage administrator to regularly tune the system. There are occasions, however, where manual tiering may be appropriate. For instance, database administrators might want all the files for a database on the same tier to ensure the highest level of performance, even if some files are accessed infrequently. The same may be true of other applications or systems that load certain files only in emergencies or periodically for specific tasks such as backups or maintenance.

The DR difference

One primary use case where the choice between tiering and caching storage optimization techniques makes a difference is continuity and disaster recovery. When data is first written to storage, the program writing the data waits for confirmation that it has been written. If the system is interrupted and confirmation fails to arrive, the application can recover back to the state before the interruption. If, however, data is written to volatile storage, such as a cache, and the program receives confirmation but power is lost before the data reaches its ultimate destination, the data can be lost. How much exposure there is depends on the caching strategy used, as the following list and the sketch after it illustrate:

  • Write-through cache doesn't confirm a write until data has gone through cache and on to its final destination on permanent storage. This ensures data is safely written, but means that write latency is relatively high. Reads are accelerated, however, as data can be reread from cache rather than disk.
  • Write-around cache bypasses cache for writes, writing straight to the final destination, waiting until data is permanently written before confirmation. This approach improves write performance, but since data isn't in cache, read performance suffers.
  • Write-back cache confirms a write as soon as the data is written to cache, offering the best read and write performance because data is written to and read from the faster cache memory. But it has the potential for data loss if the system goes offline or loses power before writes are sent to permanent storage. You can offset this through battery-backed cache systems or duplication of writes to both cache and permanent storage.
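
The difference among the three comes down to when the application sees its acknowledgment relative to when the data becomes durable. In this minimal sketch, cache and storage are hypothetical stand-in objects with a put() method, and the returned "ack" marks the moment the application sees the write confirmed:

    # Hedged sketch: cache and storage are hypothetical stand-ins with put();
    # the returned "ack" marks when the application sees its write confirmed.
    import threading

    def write_through(cache, storage, key, data):
        cache.put(key, data)     # populate cache so later reads are fast
        storage.put(key, data)   # wait for permanent storage before confirming
        return "ack"             # safe, but write latency includes slow storage

    def write_around(cache, storage, key, data):
        storage.put(key, data)   # bypass cache entirely on the write path
        return "ack"             # safe; the first read back will miss the cache

    def write_back(cache, storage, key, data):
        cache.put(key, data)     # write lands only in fast, volatile cache
        # Destaging to permanent storage happens later and asynchronously;
        # a power loss before the timer fires loses this write.
        threading.Timer(0.1, storage.put, args=(key, data)).start()
        return "ack"             # fastest confirmation, highest risk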

With tiered storage, data writes aren't confirmed until they reach permanent storage, so the potential for loss is minimized. However, performance gains from tiered storage optimization techniques won't be as high as with the volatile caching offerings, since the system must wait for the write to permanent storage rather than just the initial write to the higher-speed volatile storage.

How many levels of cache?

It's possible to do different types of caching or tiering at the same time. For instance, a single server might have several different types of cache deployed in multiple locations, each doing its best to speed up transactions among systems. There could be L1, L2 and L3 cache on the CPU; small dynamic RAM cache on host bus adapters, network interface cards and RAID controllers; and large flash cache on the same controllers. Most caches operate transparently to the OS and even to the storage administrator.

Unlike tiering, most caching operates independently of other levels of storage -- the cache on HBAs is completely separate from the cache on a RAID controller or on the storage array. You can enable tiering at multiple levels, from software running on an individual server, to storage management software such as DataCore's SANsymphony, to software integrated into the controller of a SAN storage array.

The main concern for storage admins is caches that might lose data if power is cut off. Many RAM and flash caches have battery or supercapacitor backup power to ensure writes can be completed in the event of a power failure.

Additional use cases

You can speed up virtually any application through tiering or server-side caching. The trick is to optimize the type of tiering or caching you choose based on the application's needs. Optimum caching for database applications won't be the same as for media servers, for example, or for real-time analysis of data lakes. Each has different requirements for ensuring data integrity and for the type of performance that suits the application -- read, write or both; random or sequential access; heavy IOPS or throughput. No two applications are exactly alike, so there's no one-size-fits-all product, especially when you consider cost: pricing ranges from more than $100 per gigabyte for the most expensive of these storage optimization techniques to under 10 cents per gigabyte for the least costly.

In many instances, caching or tiering can speed up storage performance enormously for a relatively small investment. A server-side cache or an SSD-based tier zero in a storage array can improve performance by a factor of 10 or more while needing to be only 10% to 20% the size of the next tier down. There's no single product that's best for everyone; you determine what's best for your organization by weighing the requirements of your applications against the size of your budget.
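
As a worked example of that sizing rule, with purely illustrative capacity numbers:

    # Worked example of the 10%-to-20% sizing rule (illustrative numbers only).
    next_tier_tb = 100   # hypothetical capacity of the tier being accelerated
    for fraction in (0.10, 0.20):
        print(f"{fraction:.0%} of {next_tier_tb} TB -> "
              f"{next_tier_tb * fraction:.0f} TB of tier zero")

So a 100 TB capacity tier would call for roughly 10 TB to 20 TB of tier-zero flash or cache.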
