
Server-side flash storage systems can reduce latency

Server-side flash options include SAS/SATA drives, PCI Express cards and dual inline memory modules that place flash even closer to the application I/O.

There are two main types of flash storage systems being widely used in servers: SSDs and PCI Express (PCIe) flash.

SSDs adhere to the same form factor as a traditional hard disk (usually 2.5 inches) and connect to standard SATA or SAS controllers, which makes them easy to install and service. And when it comes to random read/write operations, SSDs offer far greater performance than spinning media. However, SSDs do not perform as well as they theoretically could, because they were designed for compatibility with SATA and SAS controllers. Both interfaces can be thought of as legacy technologies: they were created before SSD storage became mainstream.

Before the widespread adoption of SSDs, the hard disk limited storage I/O performance. Today, many SSDs can handle more IOPS than the controller can accommodate. Hence, the disk controller has become the limiting factor.

One workaround to this limitation is SATA Express (sometimes referred to as SATAe), which is designed to work with both SATA and PCIe storage devices. SATAe uses a connector that is backward compatible with existing SATA drives. But, unlike legacy SATA, SATAe is tied into the PCIe bus, which allows it to achieve a higher transfer rate than SATA 3.0. The exact level of performance that can be achieved depends on the number of PCIe lanes used. For example, a SATAe drive using two PCIe 3.0 lanes could deliver close to 2 GBps.
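To see where the "close to 2 GBps" figure comes from, the sketch below works it out from the published line rates and encoding overheads (8b/10b for SATA and PCIe 2.0, 128b/130b for PCIe 3.0). This is illustrative spec arithmetic only; real-world throughput will be somewhat lower once protocol overhead is factored in.

```python
# Rough spec arithmetic only -- real throughput is lower after protocol overhead.

def effective_bandwidth_mbps(gigatransfers, encoded_bits, payload_bits, lanes=1):
    """Usable bandwidth in MB/s, per direction, after line-code overhead."""
    bits_per_second = gigatransfers * 1e9 * (payload_bits / encoded_bits)
    return bits_per_second / 8 / 1e6 * lanes

print(effective_bandwidth_mbps(6.0, 10, 8))              # SATA 3.0 (8b/10b): 600 MB/s
print(effective_bandwidth_mbps(5.0, 10, 8))              # PCIe 2.0 x1 (8b/10b): 500 MB/s
print(effective_bandwidth_mbps(8.0, 130, 128, lanes=2))  # PCIe 3.0 x2: ~1,970 MB/s
```

The last line reproduces the figure cited above: two PCIe 3.0 lanes yield roughly 1,970 MBps per direction, or close to 2 GBps.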

PCIe flash

PCIe flash technology has been rapidly gaining popularity because it is tied directly to the PCIe bus and bypasses the storage controller completely, which results in much faster performance.

To provide a more concrete example, consider that many servers are equipped with SATA III controllers. The SATA III specification has a data transfer rate of 6.0 Gbps, resulting in a maximum transfer rate of roughly 600 MBps. In comparison, many PCIe flash storage devices available today are based on the PCIe 2.0 specification and its maximum throughput of approximately 500 MBps per lane per direction.

Keep in mind that the 500 MBps estimated transfer rate for PCIe 2.0 is per lane, so the number of lanes determines the overall bus speed. For example, a 16-lane bus could achieve speeds of up to 16 GBps. If you are wondering why that figure is 16 GBps rather than 8 GBps, it is because PCIe lanes are bidirectional: 16 lanes deliver roughly 8 GBps in each direction.

The PCIe 3.0 specification is even faster, supporting speeds of approximately 32 GBps on a 16-lane bus. For now, though, most PCIe flash storage devices are based on the PCIe 2.0 standard.
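Summarizing the lane math, the illustrative arithmetic below shows how per-lane rates scale with lane count and why counting both directions yields 16 GBps, rather than 8 GBps, for a 16-lane PCIe 2.0 bus.

```python
# Per-lane, per-direction rates after encoding overhead (GB/s).
PCIE2_PER_LANE = 0.5    # PCIe 2.0: 5 GT/s with 8b/10b encoding
PCIE3_PER_LANE = 0.985  # PCIe 3.0: 8 GT/s with 128b/130b encoding

def bus_speed_gbps(per_lane, lanes, count_both_directions=True):
    """Bus bandwidth in GB/s; doubled when both directions are counted."""
    return per_lane * lanes * (2 if count_both_directions else 1)

print(bus_speed_gbps(PCIE2_PER_LANE, 16))                               # ~16 GB/s
print(bus_speed_gbps(PCIE2_PER_LANE, 16, count_both_directions=False))  # ~8 GB/s per direction
print(bus_speed_gbps(PCIE3_PER_LANE, 16))                               # ~32 GB/s
```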

It isn't just throughput that makes the PCIe bus faster; its proximity to the CPU matters as well. SATA controllers are usually connected to a chipset, which is in turn connected to the CPU. In contrast, PCIe links run directly to the CPU, which results in much lower latency.

Flash durability

All flash memory suffers from wear. This includes the memory used in SSDs, PCIe flash devices and any other type of flash-based storage systems. Each time voltage is applied to a flash cell as a result of a write or an erase operation, the charge is trapped inside the transistor's gate dielectric. This trapped charge remains in the transistor indefinitely unless it is removed from the transistor by a subsequent write or erase operation. This trapping and removing of electrical charges eventually degrades and destroys the cell. However, flash durability has improved dramatically over the past few years.

For example, late last year, Tech Report tested six SSDs to determine how much data could be written before each drive failed. Only four of the six drives failed during the test. The drive with the shortest lifespan handled 728 TB of data writes before it eventually failed, which was far more than that particular drive was rated for. Other drives in the test handled more than 1 petabyte (PB) of data writes, far exceeding the manufacturers' specifications.

The main reason SSDs and other forms of flash storage are more reliable than they once were is that manufacturers now include features designed to extend the life of the drive. One such feature is wear leveling. Wear leveling comes in various forms, but the general idea is that write operations are distributed across the drive to avoid writing to the same cells repeatedly, ensuring the media wears evenly and prolonging the life of the drive.
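As a rough illustration of the concept, here is a toy model that redirects each logical write to the least-worn free block. The class name, structure, and block-level granularity are all simplifying assumptions for this sketch; real SSD firmware implements wear leveling inside the flash translation layer with far more sophistication.

```python
class WearLevelingFTL:
    """Toy model of dynamic wear leveling in a flash translation layer."""

    def __init__(self, num_blocks):
        self.erase_counts = [0] * num_blocks  # cumulative wear per physical block
        self.mapping = {}                     # logical block -> physical block
        self.free_blocks = set(range(num_blocks))

    def write(self, logical_block):
        # Retire the previous physical block; it must be erased before
        # reuse, and each erase adds wear.
        old = self.mapping.get(logical_block)
        if old is not None:
            self.erase_counts[old] += 1
            self.free_blocks.add(old)
        # Steer the new write to the least-worn free block.
        target = min(self.free_blocks, key=lambda b: self.erase_counts[b])
        self.free_blocks.remove(target)
        self.mapping[logical_block] = target

ftl = WearLevelingFTL(num_blocks=8)
for _ in range(1000):
    ftl.write(0)                 # hammer a single logical block
print(ftl.erase_counts)          # wear is spread across all physical blocks
```

Even though this workload rewrites one logical block 1,000 times, no single physical block absorbs all of the erases.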

Caching vs. tiering in flash storage systems

Although flash storage systems can store data just like any other hard disk, capacity limitations tend to make it impractical for storing large amounts of data. Even if an organization is unable to use flash for general-purpose storage, it may still be useful for caching or tiering.

Caching and tiering serve similar purposes, and while they are different technologies, many systems (such as Windows Server 2012 R2) can perform both functions.

Flash caching is often associated with write operations. In fact, a flash cache is sometimes referred to as a write cache. This type of caching is useful on servers that receive a significant amount of storage I/O. In such environments, it is common for bursts of inbound data to be received at a rate that overwhelms the server. When this happens, the storage array becomes a performance bottleneck. A flash cache can help with this problem by acting as a storage buffer. When inbound data is received, it is written first to the flash media on the server and then to the storage array. Because a flash storage system is so fast, it can absorb the inbound data and then move it to higher-capacity storage when the level of activity decreases.
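To make the buffering behavior concrete, here is a minimal sketch, assuming a hypothetical FlashWriteCache class (not a real product API). Writes are acknowledged as soon as they land on the fast flash tier and are destaged to the backing array later.

```python
from collections import deque

class FlashWriteCache:
    """Minimal sketch of a server-side flash write buffer (hypothetical API)."""

    def __init__(self, backing_store):
        self.backing_store = backing_store  # e.g., the shared storage array
        self.pending = deque()              # absorbed by flash, not yet flushed

    def write(self, block_id, data):
        # Fast path: persist to flash and acknowledge the application.
        self.pending.append((block_id, data))

    def flush(self, max_items=None):
        # Background path: destage buffered writes to the slower array.
        count = len(self.pending) if max_items is None else min(max_items, len(self.pending))
        for _ in range(count):
            block_id, data = self.pending.popleft()
            self.backing_store[block_id] = data

array = {}
cache = FlashWriteCache(array)
for i in range(5):
    cache.write(i, f"burst-{i}")   # burst absorbed at flash speed
cache.flush()                      # destaged once the array can keep up
print(array)
```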

Storage tiering could be thought of as a type of caching, but unlike write caching, the technology is oriented toward improving read performance. Every organization has some data that is accessed more frequently than the rest. When storage tiers are used, the operating system (or storage hardware) monitors the frequency with which individual storage blocks are read. Commonly read blocks are flagged as "hot blocks" and moved to the high-speed, flash-based storage tier. That way, the blocks in highest demand reside on the best-performing storage. In most cases, this process is completely dynamic: as a storage block begins to "cool off," it is moved back to the slower storage tier and replaced on the high-speed tier by a block that is in greater demand.
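A minimal sketch of that monitoring logic might look like the following. The hot_threshold value, class name, and per-read counting are simplifying assumptions; production tiering engines typically rely on heat maps and scheduled optimization passes rather than reacting to individual reads.

```python
class TieringMonitor:
    """Minimal sketch of read-frequency tiering (hypothetical thresholds)."""

    def __init__(self, hot_threshold=100):
        self.read_counts = {}
        self.hot_threshold = hot_threshold
        self.flash_tier = set()   # block IDs currently on flash
        self.disk_tier = set()    # block IDs on spinning media

    def record_read(self, block_id):
        self.read_counts[block_id] = self.read_counts.get(block_id, 0) + 1

    def rebalance(self):
        # Promote hot blocks to flash; demote blocks that have cooled off.
        for block_id, count in self.read_counts.items():
            if count >= self.hot_threshold:
                self.disk_tier.discard(block_id)
                self.flash_tier.add(block_id)
            else:
                self.flash_tier.discard(block_id)
                self.disk_tier.add(block_id)
        self.read_counts.clear()  # start a fresh measurement window

monitor = TieringMonitor(hot_threshold=3)
for _ in range(5):
    monitor.record_read("block-A")   # frequently read
monitor.record_read("block-B")       # rarely read
monitor.rebalance()
print(monitor.flash_tier)            # {'block-A'} -- promoted to flash
```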
