No longer a luxury item for well-heeled data centers, SSD technologies are more affordable than ever and come in a variety of form factors with a choice of deployment options.
Solid-state storage technology has completely revolutionized consumer electronics products, replacing spinning disk drives in virtually all types of compact consumer devices such as mobile telephones and tablet PCs, and becoming the standard for ultra-thin laptop computers. The same benefits and enthusiasm for solid-state storage are now being felt in data centers.
While the number of units shipped for some types of hard disk drives (HDDs) is declining, solid-state drive (SSD) shipments continue to soar, and SSDs are quickly gaining acceptance for enterprise applications. Solid-state storage is a good fit for database applications, virtualized servers, virtual desktop environments and other workloads that need higher performance by any of the following criteria:
- Transactional performance, measured in I/Os per second (IOPS)
- Total throughput or bandwidth, measured in megabytes per second (MBps)
- Lower round-trip time for I/O operations, measured as latency (milliseconds or microseconds)
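These three metrics are related to one another. The sketch below shows the relationship using illustrative figures (the block size, latency and queue depth are assumptions for the sake of the arithmetic, not measurements):

```python
# Rough relationship between the three performance metrics for one device.
# All figures are illustrative assumptions, not benchmark results.

block_size_kb = 8    # typical database I/O size (assumption)
latency_ms = 0.2     # per-I/O round-trip time (assumption)
queue_depth = 16     # concurrent outstanding I/Os (assumption)

# Little's Law: IOPS = outstanding I/Os / average latency
iops = queue_depth / (latency_ms / 1000)           # ~80,000 IOPS
throughput_mbps = iops * block_size_kb / 1024      # ~625 MBps

print(f"{iops:,.0f} IOPS, {throughput_mbps:.0f} MBps")
```

The same device can therefore look very different depending on which metric the workload stresses: small random I/O is bound by IOPS and latency, while large sequential I/O is bound by throughput.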
Just as with HDDs, SSDs can be grouped into two basic categories: enterprise and client. Enterprise SSDs (and HDDs, for that matter) are rated for 24x7 operation, usually have better performance, typically have longer warranties and expected life, and cost more per gigabyte (GB). Client SSDs (and HDDs) are generally not rated for 24x7 operation, usually have higher capacities and cost less per GB.
NAND flash SSDs and writes
Due to the physics of the NAND flash used in SSD technologies, there are a finite number of writes that can be performed on each bit of NAND flash media.
Five biggest solid-state implementation mistakes
- Not understanding application workloads; for example, using SSD caching with a cache "unfriendly" workload.
- Not expecting that bottlenecks may move after deploying SSD technology.
- Not buying enough SSD capacity to meet anticipated demand.
- Not understanding the differences in use cases between server-side SSD and external SSD.
- Not understanding the difference between "fresh out of the box" performance and the "steady state" performance of SSDs.
Industry and SSD manufacturers use the concept of the number of full-capacity overwrites that can be performed in a day, often expressed as drive writes per day (DWPD). An enterprise SSD will be able to sustain writing several times its capacity every day for at least five years, while a client SSD will usually be able to sustain writing less than its full capacity every day for many months. The term terabytes written (TBW) is often used to indicate this endurance. Because of this, enterprise SSDs are suitable for applications that perform a large number of writes every day, such as database applications.
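The relationship between capacity, daily overwrites and rated endurance is simple arithmetic. A sketch with assumed figures (the capacity, DWPD and warranty period below are illustrative, not taken from any specific product):

```python
# TBW from capacity and full-drive writes per day (illustrative figures).
capacity_gb = 400          # enterprise SSD capacity (assumption)
drive_writes_per_day = 10  # full-capacity overwrites per day (assumption)
warranty_years = 5

tbw = capacity_gb * drive_writes_per_day * 365 * warranty_years / 1000
print(f"Rated endurance: {tbw:,.0f} TB written")  # 7,300 TB
```

Running the same numbers with a client-class rating (a fraction of a drive write per day over a one-year warranty) makes it clear why client SSDs are a poor fit for write-heavy enterprise workloads.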
Form factor overview
SSD technology is available in several form factors, either by itself or combined in a hybrid fashion with other technologies such as HDDs. By itself, SSD technology can be implemented as a cache in one of several locations in the IT infrastructure or as primary storage, either in discrete devices or in large, all-flash storage arrays. In a hybrid fashion, SSD technologies can be combined with HDD technology to form individual hybrid devices or large hybrid flash-optimized storage arrays.
Disk drive form factor. The disk drive form factor is common packaging for SSD technology. SSDs are available in 1.8-inch, 2.5-inch and 3.5-inch "disk drive" form factors that use the same connectors and interfaces available for today's HDD technology, with 2.5 inches the most common size for SSDs. Capacities for these SSDs range up to 2 TB in a single drive today. SATA and SAS are common interfaces for SSDs in these form factors, just as they are for HDDs. The newer enterprise SSDs are beginning to use 12 Gbps SAS as the interface to deliver higher performance than was previously available. Drive form-factor SSDs are also available with older interfaces, but those are on the decline. Because of the higher performance of SSDs, new interfaces that can accommodate the drive form factor are being developed, such as SATA Express, SCSI Express and Non-Volatile Memory Express (NVMe). Expect to see these new interfaces on shipping products in late 2013 or early 2014.
PCI Express (PCIe) card form factor. Another common form factor for SSD technology is the PCIe card form factor. These cards fit into a PCIe slot in a computer and provide excellent performance with very low latency because the storage is directly accessible via the fast PCIe bus. Enterprise versions of these PCIe SSDs tend to be more expensive than drive form-factor SSDs of the same capacity, but the performance is usually better. The largest capacity PCIe SSD today is 10 TB, but the cost of this particular SSD is several times that of a typical server.
Many of these PCIe SSDs support PCIe 2.0 and require either an x4 or x8 slot. Some newer cards also support PCIe 3.0, which became available in new servers sold beginning in 2012.
One important aspect of this SSD form factor is the card’s physical dimensions. Some are half-height and half-length, which means they’ll fit into most servers, even the smaller form-factor servers. The full-height and/or full-length PCIe SSD cards have additional capacity but may not fit into all servers.
In addition, some of the larger-capacity PCIe SSDs require more power than the standard 25 watts that can be drawn directly from the PCIe slot; some servers are equipped with auxiliary power connectors for these cards.
SSD-specific form factors. Because of their small size and low power requirements, SSDs are available in other form factors, many of which are designed for smaller applications. These include mini-SATA (mSATA), which is about the size of a business card (or smaller); it has capacities up to 256 GB, uses the SATA command interface and fits into a socket physically identical to a PCIe Mini Card slot. Another is the µSSD (micro SSD), a silicon chip mounted directly onto a motherboard that appears to the operating system as a SATA storage device and is designed for very low-power applications such as mobile devices.
An interesting form factor for SSD technologies that may have enterprise applications is the dual in-line memory module (DIMM) slot form factor. These products are NAND flash and non-volatile DRAM devices that fit into a standard DDR3 DIMM socket but provide non-volatile storage capacity for a server. These could provide an interesting way to add enterprise storage to a server with many DIMM slots.
Hybrid drives and arrays
A number of hybrid products incorporate both SSD and HDD technology in a single offering. Individual hybrid drives are generally intended for consumer and desktop applications, and typically use the SSD as a cache in front of the HDD. Larger hybrid flash-optimized storage arrays are built with a combination of discrete solid-state devices (SSDs or PCIe cards) and separate HDDs, and are designed for enterprise storage applications. The SSDs in these hybrid arrays can be used as a cache or as a tier in a primary storage design.
Use cases and workloads
Although we've seen improved performance using SSDs for almost every workload in the Demartek test lab, there are some workloads that perform especially well with SSD technology. Database workloads work well with SSDs not only because of the higher raw performance of SSDs, but because of the reduced latency. We've seen significant performance improvements when using SSDs as a cache and as primary storage. Placing the application data entirely on SSD media (primary storage) will yield enormous benefits immediately for that application. We've seen performance improvements of 8x to 16x or more by using SSDs as the primary storage location. Unfortunately, you may not be able to afford that kind of configuration for every application in your environment.
While an individual HDD can provide hundreds of IOPS, SSDs typically provide thousands or tens of thousands of IOPS with a single device. In some online transaction processing (OLTP) application environments, including database and Web server environments, short response times are often critical. A transaction may require many successive queries to a database, where each query depends on answers returned from the previous query. In those cases, user response times are entirely dependent on how quickly storage can return answers to the entire series of queries. In many cases, we've seen sub-millisecond latency using SSD technology, which may be more important than the raw IOPS or throughput performance.
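The effect of latency on a chain of dependent queries can be sketched with assumed per-I/O times (the 5 ms and 0.1 ms figures below are typical orders of magnitude for HDD and SSD reads, not measured values from these tests):

```python
# Response time for a transaction of serialized, dependent queries.
# Each query must wait for the previous one, so latencies add up.
queries_per_transaction = 50   # dependent queries per transaction (assumption)
hdd_latency_ms = 5.0           # typical HDD seek + rotation (assumption)
ssd_latency_ms = 0.1           # typical SSD read latency (assumption)

hdd_response = queries_per_transaction * hdd_latency_ms   # 250 ms
ssd_response = queries_per_transaction * ssd_latency_ms   # 5 ms
print(f"HDD: {hdd_response:.0f} ms, SSD: {ssd_response:.0f} ms per transaction")
```

Because the queries are serialized, no amount of parallelism or raw IOPS helps here; only lower per-I/O latency shortens the user's response time.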
SSD caches and cache-friendly workloads
Deploying SSDs as a cache has the advantage of potentially sharing the performance benefit among many applications because the cache will improve any I/O that is "hot," regardless of the application. SSD caches are also a good way to get started with a relatively small amount of solid-state storage. Caches need time to "warm up" and become populated with the hot data, which can take minutes or hours, depending on your environment. Workloads that have "hot spots" with repeated accesses to a subset of the entire data set are considered cache-friendly and will benefit from an SSD cache. We've seen OLTP workloads increase in performance 2.5x to 8x by using an SSD cache, depending on the cache offering, speed of the back-end storage and other factors.
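Why the hit rate matters so much can be seen from a simple average-access-time calculation (all latencies below are assumed figures for illustration):

```python
# Effective latency of an SSD cache in front of HDD storage (assumed figures).
ssd_latency_ms = 0.1   # cache hit served from SSD (assumption)
hdd_latency_ms = 5.0   # cache miss goes to back-end HDD (assumption)

for hit_rate in (0.50, 0.80, 0.95):
    effective = hit_rate * ssd_latency_ms + (1 - hit_rate) * hdd_latency_ms
    speedup = hdd_latency_ms / effective
    print(f"hit rate {hit_rate:.0%}: {effective:.2f} ms avg ({speedup:.1f}x faster)")
```

A cache-unfriendly workload (low hit rate) spends most of its time on the miss path, which is why the first implementation mistake in the sidebar, pairing SSD caching with the wrong workload, erases most of the expected gain.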
More solid-state storage information and performance reports
Demartek Labs has additional free information available on the SSD Zone of its website. There are a number of performance reports for specific solid-state drive technology; a comprehensive SSD Deployment Guide is also available.
SSD caches can be deployed in three general locations within an IT infrastructure: server-side, in the network or in a storage system. Each of these has its advantages and disadvantages, so your choice will depend on your needs. In some environments, the ability to share the cache across different applications will be important; in others, not having to change the servers or back-end storage might be important. Server-side SSD caching has the advantage of offering much lower latency to the application, but moving an application to a different server (such as in a virtual machine environment) may require the cache on the new server to be re-warmed before the maximum benefits can be achieved.
About the author:
Dennis Martin has been working in the IT industry since 1980. He is the founder and president of Demartek, a computer industry analyst organization and testing lab.