
Server-side caching has new data center role

Some server-side caching products have evolved into complete software-defined storage platforms that can aggregate flash across multiple servers.

The first use case for flash in the enterprise was as a large cache that automatically accelerated the most active segments of data. This approach was ideal because flash, at the time, was significantly more expensive than disk, and manually identifying which workloads to place on flash storage was time-consuming and required constant monitoring by busy IT administrators. Caching provided that automation, particularly server-side caching. And because many caching products are server-based, they also work around storage network limitations.

Server-side caching software caches I/O to a local flash device so that, at a minimum, read I/O may be serviced locally, reducing network traffic and latency. These software caches typically support any flash media installed on the server, allowing an IT administrator to pick the flash storage option that best suits their needs.

Server caching software can interact with I/O at three levels within the server:

  • File level. Some caching software products install within the guest operating system (OS) and cache data at the file level to enhance the performance of a particular application. Hot data must be identified and isolated manually, but this more targeted assignment of flash resources allows a file-level cache to minimize the amount of flash consumed.
  • Operating system level. OS-level caching products install within the OS and cache blocks of data based on how frequently they are accessed. They automatically identify hot blocks of data on the server and accelerate them, though they do not use flash as efficiently as file-level caching does (a minimal sketch of the hot-block approach follows this list). In a virtual environment, each virtual machine (VM) to be accelerated needs the software installed inside it.
  • Hypervisor level. A hypervisor-based cache installs as a component of the hypervisor and caches I/O across all the VMs within that host. Hypervisor caches simplify implementation, but are the least efficient at using flash resources.
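To make the block-level approaches concrete, here is a minimal sketch, not any vendor's implementation, of the hot-block idea behind an OS-level read cache: a small LRU structure keeps the most recently touched blocks on local flash and falls through to the shared array on a miss. The flash and shared_storage objects are hypothetical stand-ins for the local device and the networked array.

```python
from collections import OrderedDict

class HotBlockReadCache:
    """Minimal LRU read cache: hot blocks live on local flash;
    misses fall through to the shared storage array."""

    def __init__(self, flash, shared_storage, max_blocks):
        self.flash = flash            # local flash device (hypothetical API)
        self.shared = shared_storage  # networked array (hypothetical API)
        self.max_blocks = max_blocks  # cache capacity in blocks
        self.lru = OrderedDict()      # block number -> slot on local flash

    def read(self, block_num):
        if block_num in self.lru:                       # cache hit
            self.lru.move_to_end(block_num)             # re-mark block as hot
            return self.flash.read(self.lru[block_num])
        data = self.shared.read(block_num)              # miss: cross the network
        self._admit(block_num, data)
        return data

    def _admit(self, block_num, data):
        if len(self.lru) >= self.max_blocks:
            _, slot = self.lru.popitem(last=False)      # evict the coldest block
        else:
            slot = len(self.lru)                        # next free flash slot
        self.flash.write(slot, data)
        self.lru[block_num] = slot
```

A hypervisor-level cache applies the same logic one layer down, sharing a single structure like this across all the VMs on the host.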

Shared flash catches up

As all-flash and hybrid arrays matured, the attention paid to server-side caching subsided. Both of these shared flash technologies benefited not only from the decreasing cost of flash but also from data efficiency technologies like deduplication and compression. The combination of falling prices and data efficiency made all-flash arrays, or hybrid arrays with vast cache tiers, affordable. Storage networks, both Fibre Channel and IP (NFS, iSCSI), caught up with shared flash performance and minimized the need for server-side caching resources. In other words, the network -- combined with rapid flash response times -- could deliver I/O faster than the application needed it, so the locality of flash resources became less of an advantage.

As the data center becomes more flash-centric and less dependent on hard disk drives, server flash still has a role to play even as the storage network gets faster. Applications continue to improve and can now push shared flash to its limits, and users accustomed to flash performance expect more. Server-based flash products are keeping pace by expanding their capabilities and exploiting their locality advantage, making them complementary to a flash-heavy, shared storage tier.

Server-side flash gets faster

While shared flash and storage network speeds have increased, they still do not match the performance of accessing I/O directly from a local device inside the server. And server-based flash products are becoming even faster. The NVM Express (NVMe) standard for PCI Express (PCIe) flash will increase the number and compatibility of PCIe-based SSDs on the market, and more PCIe options should mean more innovation at lower cost. Memory bus-based flash products are also more widely supported by server manufacturers. These products communicate with the processor over the memory channel, essentially a private path dedicated to memory, rather than over PCIe, which must share its bandwidth with other I/O cards. The result is higher performance and lower latency.

Both NVMe and memory bus-based products have latency so low that even a fast network adds a meaningful share of the total response time. Server-side caching remains one of the most practical ways to tap the full potential of these flash memory innovations.
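To put rough, illustrative numbers on that claim (ballpark figures, not benchmarks): if a local NVMe read completes in about 20 microseconds, a storage network that adds 100 microseconds of round-trip time turns the same read into a 120-microsecond operation, six times slower. The faster the media gets, the larger the network's share of total latency becomes.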

Write caching becomes safer

Most server-based flash products can now safely cache write I/O in addition to read I/O. Caching write I/O can be risky: if the flash device or the server fails before that data is written to the shared storage tier, data loss may result. To protect against this, server-side caching technologies either mirror cached writes to another server or to a shared flash area on the storage network.
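As a rough sketch of that protection scheme (assuming hypothetical local_cache, mirror and array APIs, not any particular product), a write is acknowledged only after it is persisted in two places, and dirty blocks are destaged to the array in the background:

```python
class MirroredWriteCache:
    """Sketch of safe write-back caching: a write is acknowledged only
    after it is persisted in two places, then destaged to the array later."""

    def __init__(self, local_cache, mirror, array):
        self.local = local_cache   # flash cache in this server (hypothetical API)
        self.mirror = mirror       # peer server or shared flash area (hypothetical API)
        self.array = array         # backing shared storage (hypothetical API)
        self.dirty = set()         # blocks not yet written to the array

    def write(self, block_num, data):
        self.local.write(block_num, data)
        self.mirror.write(block_num, data)   # synchronous copy elsewhere
        self.dirty.add(block_num)
        return True                          # safe to ack: two copies exist

    def destage(self):
        """Flush dirty blocks to shared storage; run periodically in the background."""
        for block_num in list(self.dirty):
            self.array.write(block_num, self.local.read(block_num))
            self.dirty.discard(block_num)
            self.mirror.invalidate(block_num)  # mirror copy no longer needed
```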

Caching software meets software-defined storage

Some server-side caching products are evolving into software-defined storage platforms that aggregate flash, or even RAM, across multiple servers in the environment. They then present that storage as a virtual volume, creating a large cache that all VMs can use for both read and write I/O. These cache volumes are safer than a single SSD in a server: the caching software can provide RAID-like protection, so the failure of a single flash SSD or server node won't result in data loss. That protection makes caching write I/O safe. The ability to use RAM is particularly interesting, since most servers have spare RAM available and RAM is ideal for write I/O operations.
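A minimal sketch of the aggregation idea, assuming a hypothetical per-server node API rather than any vendor's product, might place each cached block on two different servers so a single failure loses nothing:

```python
class AggregatedCacheVolume:
    """Sketch of a software-defined cache volume that aggregates flash or RAM
    across servers, writing each block to two distinct nodes so the loss of
    one SSD or node does not lose cached data. Requires at least two nodes."""

    REPLICAS = 2

    def __init__(self, nodes):
        self.nodes = nodes  # list of per-server cache endpoints (hypothetical API)

    def _placement(self, block_num):
        # Deterministic placement: pick REPLICAS distinct nodes per block.
        first = hash(block_num) % len(self.nodes)
        return [self.nodes[(first + i) % len(self.nodes)]
                for i in range(self.REPLICAS)]

    def write(self, block_num, data):
        for node in self._placement(block_num):
            node.write(block_num, data)   # two copies on two servers

    def read(self, block_num):
        for node in self._placement(block_num):
            data = node.read(block_num)
            if data is not None:          # skip a node that has failed
                return data
        return None                       # not cached; caller goes to the array
```

Shipping platforms typically build on the same placement idea with configurable replica counts or parity schemes rather than a fixed mirror.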

Server-based caching remains a viable bridge for data centers still working their way toward all-flash or hybrid arrays. For data centers that have already made the shared flash transition, a server-based cache can bring further optimization by using low-latency NVMe PCIe flash or RAM to accelerate read/write response times. The two technologies should be considered complementary rather than competitive.

