
Server-side flash storage technology basks in spotlight

Dennis Martin walks you through the flash technology advancements that are helping make server-side storage challenges a thing of the past.


Today, there are many ways to install solid-state storage into servers, and the number of use cases for this type of flash storage technology has grown over the last few years. Everything in a server, from operating-system code to business-critical data, can benefit from the higher performance and lower latency of flash, especially when it is located close to the host processor(s).

There are many decisions to be made before you buy, however, so you need to know about the various form factors, interfaces, protocols and capacities involved, as well as the advantages, limitations and potential disadvantages of server-side solid-state storage.

Advantages of server-side flash storage

Back in 2010, we started using solid-state drives (SSDs) as boot drives for our servers and immediately reaped the benefits. These included reduced boot times, faster application load times and more consistent performance for any OS component or application stored on the SSD. We also observed smoother overall system performance when OS paging files were located on SSDs.
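On Linux, a quick way to see which of a server's block devices are flash is to read the rotational flag the kernel exposes in sysfs; devices reporting 0 are solid-state and are the natural homes for boot volumes, paging files and latency-sensitive data. A minimal sketch, assuming a Linux host with the standard /sys/block layout:

```python
# Report whether each block device is rotational (HDD) or non-rotational (SSD/NVMe).
# Assumes Linux and the standard sysfs layout.
from pathlib import Path

def list_block_devices():
    for dev in sorted(Path("/sys/block").iterdir()):
        flag = dev / "queue" / "rotational"
        if not flag.exists():
            continue
        rotational = flag.read_text().strip() == "1"
        kind = "HDD (rotational)" if rotational else "SSD/flash (non-rotational)"
        print(f"{dev.name}: {kind}")

if __name__ == "__main__":
    list_block_devices()
```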

Comparison tests in our lab show that virtual machines (VMs) boot noticeably faster from flash storage technology than from mechanical hard disk drives, particularly when anywhere from several to dozens of VMs are deployed on each physical host server. And latency-sensitive applications such as transactional database workloads experience consistently lower (better) latency when the database files, log files and temporary files are located on solid-state storage.

While there is a growing movement toward all-flash storage technology, not all data needs to be stored permanently on flash to reap its benefits. We have performed several public tests showing how relatively small amounts of flash storage used in combination with traditional hard disk drive (HDD) technology can be used as a read-cache or as a read/write tier to accelerate application performance. This acceleration works well when flash storage is located in an external storage array, but the acceleration is even better when it's located in the server, closer to the processor.
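The read-cache idea behind that acceleration is easy to illustrate. The sketch below is a toy model, not any vendor's implementation: a small, fast cache (standing in for a flash tier) sits in front of a large, slow backing store (standing in for HDDs), and a workload with hot spots gets most of its reads served from the fast tier.

```python
from collections import OrderedDict

class ReadCache:
    """Toy LRU read-cache: a small 'flash' tier in front of a slow 'HDD' backing store."""

    def __init__(self, backing_store, capacity_blocks):
        self.backing = backing_store        # dict-like: block number -> data (the "HDDs")
        self.capacity = capacity_blocks     # how many blocks fit in the "flash" tier
        self.cache = OrderedDict()          # block number -> data, kept in LRU order
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # mark as most recently used
            return self.cache[block]
        self.misses += 1
        data = self.backing[block]          # slow path: fetch from the backing store
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

# A skewed workload (hot blocks re-read often) gets a high hit rate from a small cache.
store = {b: f"block-{b}" for b in range(10_000)}
cache = ReadCache(store, capacity_blocks=100)
for i in range(5_000):
    cache.read(i % 50 if i % 4 else i % 10_000)   # ~75% hot blocks, the rest spread out
print(f"hits={cache.hits} misses={cache.misses}")
```

Real caching and tiering products add write handling, persistence and failure recovery, but this hit-rate behavior is the core of the benefit.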

Form factors

Server-side flash storage technology is available in a variety of form factors, some well known and some fairly new:

  • Solid-state drive (SSD): This form factor is the same as the traditional HDD. The most common SSDs come in these sizes: 3.5 inch, 2.5 inch and 1.8 inch. SSDs in this form factor are available in a variety of thicknesses, ranging from the same as HDDs to as small as 0.2 inches or 5 mm.
  • Add-in card (AIC): This form factor is the typical add-in card that fits into a PCI Express (PCIe) slot. These are available in a variety of sizes, typically using some combination of the terms half height, full height, half length, full length or low profile.
  • M.2: This form factor is an internally mounted circuit board containing flash storage that is 22 mm in width and ranges from 30 mm to 110 mm in length. These can be mounted in special PCIe or SATA slots (see interface section below). M.2 devices can have flash chips mounted on one side or both sides, and tend to consume less power than SSD or AIC devices.
  • mSATA: This is similar to the M.2 form factor, but mSATA is designed for portable devices and not servers.
  • Disk on Module (DOM): This is a very small module directly mounted on a motherboard. It typically contains enough flash storage for a boot drive or embedded application.
  • NVDIMM (non-volatile DIMM): This form factor plugs into a DIMM connector on the memory bus and contains some amount of DRAM (volatile) and/or non-volatile memory technology, depending on the specific type of NVDIMM. They operate at memory bus speeds, which are typically faster than PCIe bus speeds, and require special support on the motherboard and BIOS/UEFI to recognize the different types of memory on these DIMMs.
[Diagram: The backplanes used for SATA, SAS and NVMe (SSD form factor) drives.]

NVDIMM explained

NVDIMM technology is a way to put non-volatile memory (NVM) on the memory channel via dual inline memory module (DIMM) slots. The memory channel operates at faster speeds than the PCIe bus, so any storage placed here provides an even faster tier of storage (higher performance and lower latency) than the other form factors described above. It includes DRAM, NVM (currently NAND flash) or both and can be grouped into three different types: NVDIMM-N, NVDIMM-F and NVDIMM-P. Most require some BIOS changes and may necessitate OS changes to fully support. NVDIMM technology also fits the description of "storage class memory."

NVDIMM-N is a DIMM that contains both DRAM and NVM, but only the DRAM is visible to the system -- appearing as a standard RDIMM with typical DRAM capacity and latency. The NVM is not addressable by the host server and simply acts as a backup for the DRAM. There is at least as much NVM as DRAM on the DIMM so that all the data in the DRAM can be protected. NVDIMM-N behaves the same as normal DRAM, except that in the event of a power loss, the DRAM contents are saved to the NVM, with supercapacitors providing the power needed to complete the copy. BIOS support for NVDIMM-N is required to perform the data protection functions. NVDIMM-N is also known as NVRAM.

NVDIMM-F has only NVM and no DRAM and is sometimes known as "memory channel flash." It uses block-oriented access to the storage and can map the NVM into the memory address space. NVM capacities are typical of SSDs, but the latencies are much lower than regular storage -- generally in the single or double-digit microseconds.

NVDIMM-P includes both DRAM and NVM and combines the functions of NVDIMM-N and NVDIMM-F onto the same module. The NVM is allocated into two areas, one to provide persistence for the DRAM and the other that is accessible as block storage.
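Operating systems that support NVDIMMs typically expose the persistent region as a device that applications can memory-map directly, which is what makes byte-addressable storage class memory interesting to software. The sketch below only illustrates that access pattern: it assumes a Linux host where an NVDIMM namespace has already been set up as a DAX-capable filesystem mounted at /mnt/pmem (a hypothetical path), and it skips the extra cache-flush discipline a production persistent-memory library would apply.

```python
import mmap
import os

PMEM_FILE = "/mnt/pmem/journal.bin"   # hypothetical file on a DAX-mounted NVDIMM namespace
SIZE = 4096

# Create (or reuse) a small fixed-size file backed by the persistent region.
fd = os.open(PMEM_FILE, os.O_CREAT | os.O_RDWR, 0o600)
os.ftruncate(fd, SIZE)

# Map it into the address space and update it with ordinary memory writes.
with mmap.mmap(fd, SIZE) as pmem:
    record = b"last-committed-txn=42"
    pmem[0:len(record)] = record
    pmem.flush()                      # ask the OS to make the update durable
os.close(fd)
```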

Interfaces

Interfaces are the physical connectors (and cables, in some cases) associated with a particular device type. The interfaces listed below are the primary types used for storage devices, including flash, inside of servers.

  • SATA (Serial ATA): SATA is typically used for lower-cost storage devices and can come in the following form factors: SSD, M.2, DOM and mSATA. SATA is also frequently used for optical devices such as CD-ROMs/DVD-ROMs. This is a point-to-point interface, meaning that it supports one device per connection. SATA achieves speeds up to 6 Gbps.
  • SAS (Serial Attached SCSI): SAS is used primarily for enterprise-class storage devices in the SSD and HDD form factors. A single SAS domain can support up to 65,535 devices (using expanders), and SAS is widely used for enterprise storage arrays, JBOD (just a bunch of disks) enclosures and drive backplanes in servers. SAS supports dual-ported and wide-port drives, meaning a single host can have a wider-bandwidth connection or an "A" and "B" connection for failover purposes, or two different hosts can access the drive at the same time. SAS currently delivers speeds up to 12 Gbps, with a roadmap to 24 Gbps expected in 2018 or 2019, shortly after PCIe 4.0 becomes available. SAS backplanes also support hot-plugging of SATA drives (see the backplane diagram above).
  • U.2 (formerly SFF-8639): This interface supports PCIe/NVMe SSDs. U.2 is a backplane interface that is backward-compatible with SAS devices, meaning that it can accept NVM Express (NVMe) devices, SAS devices or SATA devices. Typically, NVMe SSDs that fit in today's U.2 slots use four lanes of PCI Express 3.0 for speeds up to 4 GBps.
  • PCI Express: This interface supports the AIC and M.2 form factors for storage devices. While a PCIe bus can be up to 16 lanes wide, today's storage devices in the AIC form factor are typically four or eight lanes wide, yielding a speed up to 8 GBps. The M.2 form factor devices that leverage PCIe as the interface typically use two or four PCIe lanes.
  • DIMM: The DIMM slot can now be used for storage technology such as flash and future memory types. Currently, DDR4 memory modules support speeds up to 19.2 GBps with latencies measured in double-digit nanoseconds. (A quick back-of-the-envelope check of these interface speeds follows this list.)
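Under the usual encoding assumptions (8b/10b for SATA 3, 128b/130b for PCIe 3.0, and an 8-byte data bus on a DDR4-2400 module), a few lines of arithmetic reproduce the throughput figures quoted above:

```python
# Rough per-interface bandwidth, matching the figures quoted in the list above.

def sata3_gbytes_per_s():
    return 6e9 * 8 / 10 / 8 / 1e9              # 6 Gbps line rate, 8b/10b encoding -> ~0.6 GB/s

def pcie3_gbytes_per_s(lanes):
    return 8e9 * 128 / 130 * lanes / 8 / 1e9   # 8 GT/s per lane, 128b/130b encoding

def ddr4_gbytes_per_s(mt_per_s=2400, bus_bytes=8):
    return mt_per_s * 1e6 * bus_bytes / 1e9    # transfers/s times the 8-byte data bus

print(f"SATA 3:          {sata3_gbytes_per_s():.2f} GB/s")
print(f"PCIe 3.0 x4:     {pcie3_gbytes_per_s(4):.2f} GB/s")   # ~3.9 GB/s, the 'up to 4 GBps' figure
print(f"PCIe 3.0 x8:     {pcie3_gbytes_per_s(8):.2f} GB/s")   # ~7.9 GB/s, the 'up to 8 GBps' figure
print(f"DDR4-2400 DIMM:  {ddr4_gbytes_per_s():.1f} GB/s")     # the 19.2 GBps figure
```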

Protocols

A protocol is a set of commands that run over a particular interface. The SATA interface supports the SATA protocol and the SAS interface supports the SAS (or SCSI) protocol. A relatively new protocol known as NVMe is designed specifically for storage using current and future non-volatile memory technologies. It streamlines storage operations by reducing the number of processor instructions to complete an I/O request.

One of the goals of NVMe is to significantly reduce the latency in the host software stack so as to keep up with advances in solid-state storage hardware. NVMe currently supports any of the form factors described above that use the PCIe interface.

NVMe is built with a high degree of parallelism to take advantage of today's multicore processors; for example, an I/O request and its completion can be handled on the same processor core. In addition, NVMe supports 64K commands per queue and up to 64K queues, far exceeding other storage protocols. This means it can support a huge number of outstanding I/O operations, resulting in significantly better performance than other protocols.
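Applications only see those deep queues pay off if they keep many requests in flight at once. The sketch below is a rough, hypothetical illustration of that idea: a thread pool issues random 4 KB reads against a large test file at different queue depths. (A real benchmark would use O_DIRECT or a tool such as fio to bypass the page cache; the file path here is an assumption.)

```python
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

TEST_FILE = "/data/testfile.bin"   # hypothetical large file on the device under test
BLOCK = 4096

def random_read(fd, file_size):
    # Pick a block-aligned offset and read one 4 KB block.
    offset = random.randrange(0, file_size - BLOCK) // BLOCK * BLOCK
    return len(os.pread(fd, BLOCK, offset))

def run(queue_depth, reads=20_000):
    fd = os.open(TEST_FILE, os.O_RDONLY)
    size = os.fstat(fd).st_size
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=queue_depth) as pool:
        futures = [pool.submit(random_read, fd, size) for _ in range(reads)]
        total_bytes = sum(f.result() for f in futures)
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"QD {queue_depth:>3}: {reads / elapsed:,.0f} IOPS ({total_bytes / 1e6:.0f} MB read)")

for qd in (1, 4, 16, 64):
    run(qd)
```

On an NVMe SSD, throughput typically continues to scale to much higher queue depths than on SATA or SAS devices before it levels off.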

The roadmap for NVMe includes NVMe over Fabrics, which is a way to extend the benefits of NVMe over distance using either RDMA fabrics or Fibre Channel fabrics.

Capacities, limitations of server-side flash

Capacities of solid-state devices intended for use in servers have grown rapidly. A couple of years ago, the capacities of enterprise-class SSDs exceeded the capacities of enterprise-class HDDs (both 10K and 15K RPM) in the same physical size. Today, the rate of SSD storage capacity growth appears to be exceeding that of HDDs. For example, earlier this year, Samsung introduced a 2.5-inch 15 TB SSD, larger than any 2.5-inch HDD on the market.

There are limitations with server-side flash storage technology. The most obvious limitation is the price point when compared to other storage technologies, as the raw cost per gigabyte for server-side flash still exceeds the price for equivalent HDD technology. NVMe devices, in particular, are very expensive.

However, with NAND flash densities increasing dramatically, it now becomes important to consider the physical size of the flash storage device along with the price per gigabyte. The M.2 form factor, for example, allows a very fast boot drive to take much less physical space than a typical 2.5-inch HDD.

Another limitation is that storage "captive" inside of a server is difficult to share with other servers. This challenge is partially offset by certain server hardware configurations designed to work with some OS and hypervisor technologies that can share storage within a cluster. For example, VMware VSAN 6.2 provides an environment that makes sharing storage much simpler than in the past. For Microsoft Windows users, the upcoming Windows Server 2016 release (due later this year) will provide a feature known as Storage Spaces Direct that enables local storage to be used for high-availability environments with internal SSDs or HDDs or drives within JBODs -- supporting SATA, SAS or NVMe devices. Both of these technologies bring advanced features such as deduplication, compression, quality of service and more to local, server-side storage.

Hyper-converged infrastructure is another relatively new technology that helps ease the sharing of storage between servers. Since it combines compute, network and storage into a single unit that is designed to scale out, the limitation of captive storage becomes somewhat reduced for those applications that are designed to scale across server nodes in a distributed fashion.

Conclusion

Solid-state innovation allows us to rethink how we deploy storage and gives us the opportunity to break out of legacy architectures. With new form factors, physical densities, storage protocols and operating environments, server-side flash storage technology can provide some interesting ways to solve today's server-side storage challenges.

About the author:
Dennis Martin has been working in the IT industry since 1980, and is the founder and president of Demartek, a computer industry analyst organization and testing lab.

