
Solid-state drive performance metrics go beyond latency, IOPS

In determining the performance and lifespan of an SSD, companies must consider the drive's architecture, the storage controller and write amplification, in addition to IOPS and latency.

Enterprise-class SSD vendors commonly market drives based on throughput, latency and IOPS to sell buyers on solid-state drive performance, but these specifications only tell part of the story. Other factors -- the drive's component architecture and how it handles write amplification -- can be equally important in determining how well a drive will perform over its lifetime.

Most SSDs implemented in data centers today are based on flash technologies. The components that make up a flash drive include the NAND cells that store the data, as well as a storage controller, interface and cache buffer, all of which play a pivotal role in solid-state drive performance.

NAND cell technology has evolved to support greater capacities and drive down prices. The original flash drives were based on a single-level cell (SLC) structure that stored 1 bit per cell. Next came the multi-level cell (MLC) drive, which stores 2 bits per cell, followed by the triple-level cell (TLC) drive, which stores 3 bits per cell.
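
To make the arithmetic concrete, here is a minimal sketch, using an assumed cell count, of how bits per cell translate into raw capacity:

```python
# Illustrative only: raw capacity of a hypothetical die with a fixed cell count.
cells = 2**33                 # assumed cell count, chosen so SLC yields 1 GiB

bits_per_cell = {"SLC": 1, "MLC": 2, "TLC": 3}

for name, bits in bits_per_cell.items():
    capacity_gib = cells * bits / 8 / 2**30   # bits -> bytes -> GiB
    print(f"{name}: {bits} bit(s)/cell -> {capacity_gib:.0f} GiB raw")
```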

With TLC, flash drives can support higher capacities than ever, exceeding many of their hard disk drive counterparts. Unfortunately, TLC drives cannot always deliver the same levels of performance as the original SLC drives. Newer 3D NAND technologies promise to deliver both capacity and performance -- once manufacturing costs are brought in line with those of other NAND technologies.

Storage controller

Another important consideration when it comes to solid-state drive performance is the storage controller, a drive-specific processor that executes the firmware and handles such operations as wear leveling, garbage collection, encryption, bad-block mapping and error correction. Because the controller plays such an important role, it must be able to carry out all storage-related operations, regardless of the drive's I/O workloads, even when the drive is running at full capacity. Anything less and SSD performance could be negatively affected.

The interface between the server and the drive is also a critical component in the SSD architecture. Two commonly used interfaces are serial-attached SCSI (SAS) and Serial Advanced Technology Attachment (SATA). SAS tends to offer more enterprise-class features and can usually deliver better solid-state drive performance.

That said, both interfaces can create storage bottlenecks. For this reason, vendors offer the nonvolatile memory express (NVMe) interface, which works in conjunction with PCI Express (PCIe) to deliver better performance than either SAS or SATA.
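
Some back-of-the-envelope math shows why NVMe pulls ahead. The line rates and encoding overheads below are the published figures for SATA III, SAS-3 and PCIe 3.0; real drives deliver somewhat less than these theoretical ceilings:

```python
# Theoretical interface throughput, illustrating why NVMe over PCIe
# outruns SAS and SATA. Values: (line rate in bit/s, encoding efficiency, lanes).
links = {
    "SATA III":       (6e9,  8 / 10,    1),  # 6 Gbit/s, 8b/10b encoding
    "SAS-3":          (12e9, 8 / 10,    1),  # 12 Gbit/s, 8b/10b encoding
    "NVMe PCIe 3 x4": (8e9,  128 / 130, 4),  # 8 GT/s per lane, 128b/130b
}

for name, (rate, efficiency, lanes) in links.items():
    mb_per_s = rate * efficiency * lanes / 8 / 1e6
    print(f"{name}: ~{mb_per_s:.0f} MB/s theoretical")
```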

An enterprise-level SSD should also include a dynamic RAM (DRAM) inline buffer between the storage media and the interface. The buffer should serve as a high-speed caching mechanism that provides a temporary staging and collection area for the data. To carry out this process effectively, the buffer must be big enough to streamline data access and modifications and to minimize the effects of write operations. The right buffer can be a critical component in a high-performing SSD.
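
The following is a minimal sketch of the write-coalescing idea behind such a buffer; the class and method names are hypothetical, and real controller firmware is far more sophisticated:

```python
# Sketch of a write-coalescing DRAM buffer. All names are hypothetical.
class WriteBuffer:
    def __init__(self, capacity_pages, flush_fn):
        self.capacity = capacity_pages
        self.flush_fn = flush_fn          # callback that writes pages to NAND
        self.pending = {}                 # logical page number -> latest data

    def write(self, lpn, data):
        # Overwrites to the same logical page coalesce in DRAM, so only the
        # newest version ever reaches the NAND -- fewer program/erase cycles.
        self.pending[lpn] = data
        if len(self.pending) >= self.capacity:
            self.flush()

    def flush(self):
        self.flush_fn(self.pending)
        self.pending = {}

buf = WriteBuffer(4, lambda pages: print("flush to NAND:", sorted(pages)))
for lpn in (7, 7, 7, 1, 2, 3):            # three overwrites of page 7 coalesce
    buf.write(lpn, b"data")
```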

Write amplification

Flash drives, like most SSDs, are susceptible to write amplification, a condition in which the number of physical writes the drive performs exceeds the number of writes the host requested. Write amplification occurs as a result of the way data is written to an SSD. Unlike an HDD, where data is simply added or overwritten, existing data on an SSD must be erased before new data can be written in its place. The extra write operations that result from write amplification can significantly diminish write performance.
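
Write amplification is often expressed as a write amplification factor (WAF): the data the drive physically programs divided by the data the host requested. A quick illustration with made-up numbers:

```python
# Write amplification factor (WAF). Numbers are illustrative only.
host_writes_gb = 100          # data the application asked to write
nand_writes_gb = 240          # data the drive physically programmed,
                              # including rewrites forced by erases and GC
waf = nand_writes_gb / host_writes_gb
print(f"WAF = {waf:.1f}x")    # 2.4x: the drive wrote 2.4 GB per 1 GB requested
```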

When data is stored on a flash drive, it is written to pages, which are grouped into blocks. To rewrite data in a page, the entire block containing it must be erased, unless the block is already empty. If it is not, the valid data must be copied out, erased from its original location and then rewritten to the drive along with the new data. This process can add a substantial number of writes, which not only affects SSD performance, but can also shorten the drive's lifespan.
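
A toy model of that erase-before-write cycle, with an assumed four-page block, shows how a single requested write can turn into several physical page programs:

```python
# Toy model of the erase-before-write cycle. Page/block sizes are assumed
# for illustration; real blocks hold hundreds of pages.
PAGES_PER_BLOCK = 4
block = ["A", "B", "C", "D"]      # a full block: no empty pages left
nand_writes = 0

def update_page(block, index, new_data):
    # NAND cannot overwrite in place: copy out the survivors, erase the
    # whole block, then program every page again -- including the old data.
    global nand_writes
    survivors = [d for i, d in enumerate(block) if i != index]
    block[:] = [None] * PAGES_PER_BLOCK          # block erase
    for i, data in enumerate(survivors + [new_data]):
        block[i] = data                           # program each page
        nand_writes += 1

update_page(block, 1, "B2")
print(block, f"-> {nand_writes} page programs for 1 requested write")
```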

To help improve write performance, an SSD usually implements some type of garbage collection process, which takes a proactive approach to freeing up previously written blocks. This process can help eliminate the need to erase entire blocks of data for every write operation. But if not handled properly, garbage collection can itself contribute to write amplification and affect the performance of primary write operations.
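
A common approach -- though implementations vary by vendor -- is a greedy policy that reclaims the block with the fewest valid pages, since that minimizes the data copied forward. A sketch with hypothetical structures:

```python
# Sketch of a greedy garbage collector: reclaim the block with the fewest
# valid pages. Block names and counts are hypothetical.
blocks = {
    "blk0": {"valid": 1, "invalid": 3},   # mostly stale -- cheap to reclaim
    "blk1": {"valid": 3, "invalid": 1},
    "blk2": {"valid": 2, "invalid": 2},
}

victim = min(blocks, key=lambda b: blocks[b]["valid"])
copied = blocks[victim]["valid"]          # valid pages moved to a fresh block
print(f"erase {victim}, copying {copied} valid page(s) forward")
```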

Most SSDs also implement wear-leveling processes to help prevent cells from wearing out prematurely. Wear leveling distributes writes evenly across the available blocks to prevent the same blocks from constantly being subjected to erase and write operations. As with garbage collection, wear leveling can increase write amplification and potentially affect solid-state drive performance, depending on how it is implemented.
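
A minimal sketch of the steering logic, assuming per-block erase counters, might look like the following; real firmware also weighs factors such as how frequently the data changes:

```python
# Sketch of wear leveling: steer each incoming write toward the block with
# the lowest erase count. Counts are illustrative.
import heapq

erase_counts = [("blk0", 120), ("blk1", 95), ("blk2", 300)]
heap = [(count, blk) for blk, count in erase_counts]
heapq.heapify(heap)

for _ in range(3):                       # place three incoming writes
    count, blk = heapq.heappop(heap)     # least-worn block wins
    print(f"write lands on {blk} (erased {count} times)")
    heapq.heappush(heap, (count + 1, blk))
```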

Other processes can also contribute to write amplification, such as bad-block management, in which the controller identifies blocks that contain one or more cells that might not be reliable for storing data. Defragmenting the drive, meanwhile, offers no benefit to an SSD but adds to the read/write overhead.

A common strategy to reduce write amplification and mitigate the effects of garbage collection, wear leveling and other processes is to overprovision the SSD -- that is, to limit the drive's usable storage to a certain percentage of its raw capacity. For example, some organizations limit usable storage to 75% or 80%, or sometimes even lower. With an ample amount of free space, the drive can support write operations more efficiently, maximizing SSD performance.
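
The arithmetic is straightforward. Note that overprovisioning is conventionally quoted relative to the usable capacity, so exposing 80% of a drive works out to 25% overprovisioning:

```python
# Overprovisioning arithmetic: reserved raw capacity gives the controller
# spare blocks for garbage collection and wear leveling. Figures are illustrative.
raw_gb = 512
usable_pct = 0.80                          # expose only 80% to the host
usable_gb = raw_gb * usable_pct
op_pct = (raw_gb - usable_gb) / usable_gb  # OP as conventionally quoted
print(f"usable: {usable_gb:.0f} GB, overprovisioning: {op_pct:.0%}")
```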

In addition, SSDs can sometimes take advantage of capabilities within the interface for mitigating write amplification. For example, SATA offers the TRIM command, and SAS offers the UNMAP command. Both commands let the host identify blocks of data no longer in use so the drive can erase them internally. This results in better solid-state drive performance because garbage collection is minimized and more space is made available on the drive.
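
The payoff is visible in a simple sketch: once pages are marked invalid, garbage collection no longer has to copy them forward. The mapping structure here is hypothetical:

```python
# Sketch of what TRIM/UNMAP buys the drive: the host reports which logical
# pages are dead, so GC need not copy them forward. Structures are hypothetical.
page_state = {lpn: "valid" for lpn in range(8)}

def trim(lpns):
    for lpn in lpns:
        page_state[lpn] = "invalid"       # no longer copied during GC

trim([2, 3, 5])                           # the file system deleted these pages
to_copy = [lpn for lpn, s in page_state.items() if s == "valid"]
print(f"GC must copy {len(to_copy)} of {len(page_state)} pages")
```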

There are other considerations that go into implementing SSDs in a data center, such as the available server and network resources as well as the operating systems running on those servers. But the drive's components and write amplification methodologies should be top concerns. Only by taking into account all these factors can organizations ensure that the drives they purchase will deliver the performance necessary to support their applications.
