
How server-side caching can save cash

Server virtualization has pushed spinning disk storage to the brink, leading storage planners to opt for server-side caching products.

It is well-documented that server virtualization has pushed conventional spinning disk storage to its breaking point. Dense virtual machine (VM) environments send multiple, randomized read and write IO requests to disk arrays, overwhelming the disk controller and resulting in increased storage latency and diminished application performance. To combat this issue, many storage planners are opting for server-side caching solutions.

Server-side caching is a good way to address the disk IO performance issue as it moves high-speed SSD resources directly where the problem lies -- within the server itself. When paired with intelligent software caching, organizations can greatly improve VM application performance, extend the life of conventional disk assets and forestall the purchase of a brand new storage array. In short, server-side caching can help save a lot of money.

The challenge, as always, is that there are multiple product offerings on the market. While some similarities exist in the features and functionality of these offerings, there are also many differences. Consequently, it’s important to understand what your specific needs are before you go shopping.

Separating performance from capacity

In general, all server-side caching products are designed to separate storage performance from storage capacity. In other words, high-speed SSD can act as the performance tier while slower, conventional hard disk drives function as the capacity tier. This has the dual benefit of accelerating read IO while freeing the backend storage system to focus on protecting data and performing mostly write operations. Consequently, read-intensive environments will see both read and write IO improve simply by servicing read requests from cache.
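The tiering described above can be sketched as a small LRU read cache in front of a slower backing store. This is a minimal illustration only, assuming a dict stands in for the disk array and an in-memory OrderedDict stands in for the server-side SSD; it is not any vendor's implementation.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache: hot reads are served from the fast
    (SSD-like) tier so the backend array mostly handles writes.
    Illustrative sketch only."""

    def __init__(self, capacity, backend):
        self.capacity = capacity
        self.backend = backend      # dict standing in for the disk array
        self.cache = OrderedDict()  # stands in for the server-side SSD

    def read(self, block):
        if block in self.cache:
            self.cache.move_to_end(block)  # cache hit: no trip to the array
            return self.cache[block]
        data = self.backend[block]         # cache miss: fetch from the array
        self._insert(block, data)
        return data

    def write(self, block, data):
        self.backend[block] = data         # write-through: array stays authoritative
        if block in self.cache:
            self.cache[block] = data       # keep the cached copy coherent

    def _insert(self, block, data):
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
```

Note that writes here still go straight to the array (write-through); the write path is where the next approach differs.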

Write-back caching

Certain write-intensive environments, however, cannot afford the added latency of traversing the network and storage protocol stacks to complete each write operation. To address this, some caching software technologies, like SanDisk's FlashSoft product, provide what is called "write-back" caching. This means that when an application makes a write request, data is written to cache and an acknowledgement is immediately sent to the application, sidestepping the aforementioned latency issues. Furthermore, since only a small subset of data is active at any given point in time, FlashSoft will actually queue up multiple write requests to make longer, more sequential writes to SSD and commit only the most recent version of each block, resulting in better SSD endurance.
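The write-back pattern can be illustrated in a few lines: acknowledge the write as soon as it lands in the cache, coalesce repeated writes to the same block, and flush sequentially later. This is a generic sketch of the technique, not FlashSoft's actual design; the class and method names are assumptions.

```python
class WriteBackCache:
    """Sketch of write-back caching: writes are acknowledged once they
    land in the (fast) cache tier, dirty blocks coalesce in place, and
    only the newest version of each block is flushed to the backend."""

    def __init__(self, backend):
        self.backend = backend
        self.dirty = {}  # block -> latest data; rewrites coalesce here

    def write(self, block, data):
        # Overwriting in place means only the most recent version of a
        # hot block will ever be flushed, sparing the SSD redundant writes.
        self.dirty[block] = data
        return "ack"  # returned immediately -- no protocol-stack round trip

    def flush(self):
        # One ordered pass writes each dirty block once, turning many
        # small random writes into a longer, more sequential operation.
        for block, data in sorted(self.dirty.items()):
            self.backend[block] = data
        self.dirty.clear()
```

The trade-off, of course, is that acknowledged data lives only in cache until the flush, which is why real write-back products pair this with persistence and protection mechanisms.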

Hypervisor or guest OS caching?

Another consideration is hypervisor versus guest OS caching. Hypervisor-based caching is the simplest way to enhance VM performance since, with a single installation, all the VMs on the host have access to cache resources. The downside is a greater potential for waste, as not all VMs may require high-speed caching. Guest-level caching, on the other hand, is a more selective way to assign limited SSD resources. However, it generally requires more integration time, as the caching software has to be installed in each individual guest OS. Caching technologies like Intel's Cache Acceleration Software (CAS) actually support both types of caching -- hypervisor- or guest-based -- giving administrators the flexibility to deploy caching software based on the unique needs of their multi-tenant infrastructures.

Supporting hot virtual machine migration

Perhaps one of the most important considerations when choosing a cache acceleration technology is how well the solution supports advanced server virtualization capabilities like vMotion. Many offerings enable virtual administrators to create a shared cache that sits outside the host, for example, on a shared storage system that has non-volatile memory (NVM). In this manner, if a VM needs to be migrated to another host, the contents of the cache remain persistent in the shared storage NVM and can be re-attached to the VM on its new host, resulting in no disruption to application performance.
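The persistence mechanism described above amounts to keeping the cache keyed by VM in a location no single host owns, so migration is a detach on one host and a re-attach on another. The sketch below assumes a shared dict standing in for the NVM pool; all class and method names are hypothetical.

```python
class SharedCachePool:
    """Sketch: cached blocks live in shared non-volatile memory (NVM)
    outside any one host, so a VM's warm cache survives migration."""

    def __init__(self):
        self.nvm = {}  # vm_id -> cached blocks, persistent across hosts

    def attach(self, vm_id):
        # A new or migrated VM picks up its existing warm cache, if any.
        return self.nvm.setdefault(vm_id, {})

class Host:
    def __init__(self, name, pool):
        self.name, self.pool = name, pool
        self.vms = {}

    def start_vm(self, vm_id):
        self.vms[vm_id] = self.pool.attach(vm_id)  # warm cache re-attached

    def migrate_vm(self, vm_id, dest):
        del self.vms[vm_id]    # local reference released...
        dest.start_vm(vm_id)   # ...but the cached blocks persist in NVM
```

Because the blocks never leave the shared NVM, the destination host starts with a warm cache instead of rebuilding it from scratch after every migration.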

Global cache pooling

Still other solutions, like Infinio and PernixData, allow sharing of SSD resources across multiple hosts, enabling server-side high performance without requiring an eviction of the local cache when VMs need to be migrated. In some environments, this may be the best of both worlds.


In order to maintain the ROI on virtual server infrastructure investments, organizations need a way to extend the life of their existing shared storage assets while enhancing VM performance. Server-side caching solutions offer a multitude of ways of achieving both ends simply and affordably.
