
Four SSD best practices for efficient virtual machine storage

Learn how server-side implementations of SSD differ from storage-side, and how to get the ultimate performance boost in virtual environments.

Traditional application performance tuning considers two primary factors: CPU performance and input/output throughput.

CPU-bound applications benefit from bigger processors, while input/output-bound applications need faster aggregate storage. Flash storage -- which may be delivered as either PCI Express devices or solid-state drives -- delivers huge IOPS gains and has revolutionized the best-practice guidelines for implementing storage systems. Latency has become a third element of application performance tuning, but IOPS is still where the biggest gains are to be made.

Virtual machines (VMs) have increased the pressure on input/output (I/O) throughput requirements at the physical level. The deployment of 10, 20 or even more VMs on a single physical server can solve the problem of underutilized computers. However, putting that number of applications on one device may increase IOPS requirements proportionally without any additional throughput capabilities. The result can be severely I/O-bound applications. This may lead IT groups to reduce the number of VMs per physical machine, but this solution only addresses the symptoms and not the problem.
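The consolidation math behind that pressure can be sketched with back-of-the-envelope figures. The per-application IOPS demand and per-drive IOPS numbers below are illustrative assumptions, not figures from this article:

```python
# Illustrative (assumed) figures: aggregate IOPS demand when many
# applications are consolidated as VMs onto one physical server.
iops_per_app = 500        # assumed demand of a single application
vm_count = 20             # VMs consolidated on one host
hdd_iops = 180            # assumed IOPS of one 15K RPM hard drive

# Demand scales proportionally with VM count...
demand = iops_per_app * vm_count            # 10,000 IOPS

# ...so meeting it with spinning disks alone takes many spindles
# (ceiling division), which is where flash changes the equation.
spindles_needed = -(-demand // hdd_iops)    # 56 drives

print(demand, spindles_needed)
```

Twenty modest applications on one host can demand tens of thousands of IOPS, far beyond what a handful of hard drives behind that host can deliver.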


When solving the IOPS problem, IT managers have the choice of either server-side flash (essentially a cache) or storage-side solid-state drives (SSDs). Either choice can deliver thousands or tens of thousands of IOPS (depending upon quantity), but they are not interchangeable; this is where the issue of latency comes in. Server-side flash has no more latency than other system cache, assuming the data is flash-resident -- meaning the application accessing the flash must run on the same server that houses the flash. If the system has to issue a read command to the hard disk drive (HDD), then the flash provides no benefit.

Otherwise, the I/O request is sent across the network to the storage system. SSD in the storage array can deliver the data at cache speeds, again if it is resident. Generally, individual applications on VMs will benefit from server flash whereas widely distributed applications such as VDI or clustered file systems will benefit from storage SSD.
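The residency point can be made concrete with an expected-latency calculation. The latency figures below are assumed round numbers for illustration; real devices vary widely:

```python
# Expected read latency as a function of flash hit ratio.
# Both latency figures are assumptions for illustration only.
flash_latency_us = 100      # assumed flash read latency (microseconds)
hdd_latency_us = 8000       # assumed HDD read latency (seek + rotation)

def expected_latency(hit_ratio):
    """Weighted average: hits served from flash, misses from HDD."""
    return hit_ratio * flash_latency_us + (1 - hit_ratio) * hdd_latency_us

# Even a 90% hit ratio leaves average latency dominated by HDD misses.
print(expected_latency(0.9))   # roughly 890 microseconds
print(expected_latency(0.5))   # roughly 4,050 microseconds
```

The weighting shows why flash only pays off when the working set is actually resident: a falling hit ratio drags average latency back toward HDD speeds.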

Given the dynamic nature of VMs, it may not be possible to make an either/or decision regarding which type of flash to deploy. Increasingly, IT managers will find it necessary to deploy both -- server flash and storage SSD -- when premium performance is necessary. As noted earlier, however, the benefits only manifest when the data is flash-resident.

Automated storage tiering (AST) products are widely available from major storage vendors and emerging vendors alike. AST software constantly monitors data access patterns to identify "hot data." As data elements are determined to be hot, the system automatically promotes them to a higher (faster) tier of storage devices. As data cools, it is replaced by hotter data. This ensures applications get the best possible storage service.
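The monitor-and-promote loop can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's algorithm; the block IDs and tier size are hypothetical:

```python
from collections import Counter

# Minimal sketch of AST logic: count accesses per data block, then keep
# the N hottest blocks on the fast (flash) tier. Real AST products work
# on extents and use decay/thresholds; this only shows the principle.
access_counts = Counter()

def record_access(block_id):
    access_counts[block_id] += 1

def rebalance(fast_tier_slots):
    # The most-accessed blocks earn a place on the fast tier;
    # cooler blocks stay on (or demote to) the HDD tier.
    return {b for b, _ in access_counts.most_common(fast_tier_slots)}

for blk in ["a", "a", "a", "b", "c", "c"]:
    record_access(blk)

print(rebalance(2))  # blocks "a" and "c" are hottest
```

Rebalancing periodically, rather than on every access, is what keeps the data movement overhead from swamping the benefit.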

As IT organizations deploy dual flash architectures (both server and storage flash), they will learn that storage-only AST is insufficient. If storage AST software is unaware of the server flash, it may misidentify hot data because it is unable to accurately track I/O operations. It might also result in dueling algorithms between the flash systems, leading to inefficient or conflicting data movement.

To solve this problem, storage vendors are developing tiering software applications that can control data movement across all tiers, including server flash. Examples include EMC XtremeCache, NetApp Flash Accel and IBM FlashCache Storage Accelerator. These applications, and others like them, are designed to put IOPS where and when they are needed. Here are a few SSD best practices:

  1. Deploy both server and storage flash for I/O-bound applications or dense VM environments.
  2. Put as much flash in the server as practical, limited mainly by budget.
  3. About 3% to 5% of an array's total capacity should be in SSD, at least until experience dictates otherwise.
  4. Use AST software capable of managing I/O from processor to back-end HDD.
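Best practice No. 3 translates into a simple sizing calculation. The array capacity below is an assumed example figure:

```python
# Sizing sketch for the 3% to 5% SSD guideline.
# The 100 TB array capacity is an assumed example.
array_capacity_tb = 100

ssd_low_tb = array_capacity_tb * 0.03    # about 3 TB of SSD
ssd_high_tb = array_capacity_tb * 0.05   # about 5 TB of SSD

print(f"Provision {ssd_low_tb:.1f} to {ssd_high_tb:.1f} TB of SSD")
```

As the article notes, treat this as a starting point and adjust once real access patterns show how large the hot working set actually is.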

Not all vendors support a mix-and-match of different flash types and software, so be sure all the components are qualified together.
