
Choose the right type of server-side flash

Eric Slack of Storage Switzerland outlines the various types of server-side flash available today and the strengths and weaknesses of each.

Putting flash into an application server to improve performance is a fairly straightforward decision, but choosing which flash type to deploy can be more involved. There are currently three main form factors of flash storage devices, each of which has its advantages. In this tip, we'll look at drive form-factor SSDs, PCIe cards and flash implemented on the memory bus, and the use cases to which each is best suited.

Drive form-factor SSDs

These SAS and SATA drives were the original flash storage devices, after the USB memory stick. Because they share the hard drive's form factor, they were the first flash devices used as drop-in replacements for hard drives, both in servers and in storage arrays. Today, they're available in capacities ranging from a few hundred gigabytes to a terabyte or more, although 800 GB currently seems to be the typical capacity for a high-capacity enterprise SSD.

SATA SSDs are probably the easiest type of flash to implement, since they simply plug into the existing disk-drive bays. Their other primary advantage is their low cost per gigabyte. Use cases include boot drives for servers and computers, and an easy-to-implement single-server caching or tiering solution, as long as there are SATA drive slots available in the server.
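On Linux, a quick way to verify which installed drives are flash (for example, before pointing caching or tiering software at them) is the kernel's rotational flag, which reads 0 for SSDs. A minimal Python sketch, assuming a Linux host exposing `/sys/block`; the function names are illustrative, not from any particular tool:

```python
from pathlib import Path


def is_ssd(rotational_flag: str) -> bool:
    """Linux reports 0 in queue/rotational for non-rotational (flash) devices."""
    return rotational_flag.strip() == "0"


def list_flash_devices(sys_block: str = "/sys/block") -> list:
    """Return block device names whose rotational flag marks them as flash."""
    devices = []
    for dev in Path(sys_block).iterdir():
        flag = dev / "queue" / "rotational"
        if flag.exists() and is_ssd(flag.read_text()):
            devices.append(dev.name)
    return sorted(devices)
```

Running `list_flash_devices()` on a server with a mix of hard drives and SATA SSDs returns only the SSD device names, which can then be handed to a caching or tiering layer.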

PCIe flash cards

Putting flash chips onto a PCIe card has become a popular high-performance SSD solution because it leverages the speed advantages of the PCIe bus over the SAS-SATA interface. Card sizes are both half-length and full-length, with a wide range of capacities, from a few hundred gigabytes up to several TBs. With this real estate, manufacturers can put multiple processors on these cards and actually allow users to subdivide them for multiple workloads or team them for a parallel processing configuration.

PCIe cards can also support caching software, as well as more sophisticated flash controllers for improved performance, efficiency and better integration with specific software applications. All this adds up to an ideal platform for high performance or high capacity.

On the downside, PCIe flash typically requires a driver because it's not based on standard storage protocols like SAS and SATA. This makes implementation and drive replacement more complex compared with SATA drives, which can be hot-swapped in many use cases. (The NVMe standard addresses this by defining a standard driver interface for PCIe flash.) PCIe cards also need a PCIe slot, something that may not be available in a low-profile server, and they are more expensive than SATA SSDs.

Memory bus flash

The last of the three primary enterprise server-side solid-state storage types is a flash card that plugs into Dual Inline Memory Module (DIMM) slots. The first product, introduced by Viking Technologies several years ago, uses the DDR3 slot for power but doesn't connect via the memory bus. Instead, it uses a cable to connect to an available SATA header on the motherboard. The most common use case for this product is as a boot drive, but it's also being used in applications where more flash capacity is needed but PCIe or SATA drive slots are unavailable.

The newest DIMM-based flash technology does use the memory bus, offering the lowest-possible latency of all flash types. A host-level driver and an on-board ASIC enable the CPU to move data to and from the memory space and manage flash-specific tasks, such as garbage collection, write coalescing, etc.

The technology is referred to as Memory Channel Storage by its manufacturer SanDisk, which partnered with Diablo Technologies on its development. Memory Channel Storage allows flash to be used as system memory. Since flash is much less expensive than dynamic RAM (DRAM) and many servers have multiple empty DDR3 memory slots, this technology has the potential to expand server memory into the TBs, enabling much larger in-memory database applications.

A different driver is also available to make memory channel flash look like another storage area for use by caching or tiering software. This is the same use case as the SATA-connected flash DIMM mentioned above, and the same way PCIe, SAS and SATA SSDs are used. A big benefit of using DDR3 slots is to enable flash expansion in servers that have no additional SATA or PCIe slots available, or to provide a boot drive that doesn't consume a drive slot. Capacities for currently available DIMM-based flash products go as high as 480 GB per module.
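The capacity arithmetic behind the memory-expansion claim is simple. A sketch using the 480 GB per-module figure cited above; the eight-slot server is a hypothetical example, not from the article:

```python
MODULE_CAPACITY_GB = 480  # largest DIMM-based flash module cited above


def flash_as_memory_gb(empty_dimm_slots: int,
                       module_gb: int = MODULE_CAPACITY_GB) -> int:
    """Total flash capacity reachable over the memory bus, in gigabytes."""
    return empty_dimm_slots * module_gb


# A server with eight empty DDR3 slots (hypothetical configuration):
total_gb = flash_as_memory_gb(8)
print(total_gb)  # 3840 GB, i.e. nearly 4 TB of flash-backed "memory"
```

Even a modest number of free DIMM slots puts total capacity well into the terabytes, which is where the in-memory database argument comes from.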

Server-side flash products are now being used in a wide range of applications, from boot drives to system memory expansion. With three form factors and three connection types, users have multiple ways to make flash fit their individual system constraints and application demands. Whether they need low latency; low cost; simple implementation; or advanced functions like on-board caching, tiering and application awareness, there are flash products available to meet most performance requirements. To summarize, the following is a list of the advantages of each of the primary types of server-side flash:

  • SAS and SATA drive form-factor SSDs. Simplest to implement, lowest cost per gigabyte
  • PCIe flash cards. Higher-performing than SAS and SATA SSDs, highest potential flash capacity, able to support on-board CPUs and software for more sophisticated use cases
  • Memory bus flash. Highest performing flash type, has potential to augment server memory
  • SATA DIMM flash. Performance similar to that of SATA SSDs, but provides flash capacity or a boot drive when no drive slots are available
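One way to read the list above is as a rough decision order. The following toy helper encodes those trade-offs as a sketch; the function name, parameters and priority order are illustrative assumptions, not a vendor's sizing tool:

```python
def pick_server_flash(need_lowest_latency: bool = False,
                      need_highest_capacity: bool = False,
                      has_pcie_slot: bool = True,
                      has_drive_slot: bool = True) -> str:
    """Rough decision order following the summary above:
    memory-bus flash for the lowest latency, PCIe cards for the highest
    capacity and performance, SAS/SATA SSDs for simplicity and cost,
    and SATA DIMM flash when no drive or PCIe slots remain."""
    if need_lowest_latency:
        return "memory bus flash"
    if need_highest_capacity and has_pcie_slot:
        return "PCIe flash card"
    if has_drive_slot:
        return "SAS/SATA drive form-factor SSD"
    if has_pcie_slot:
        return "PCIe flash card"
    return "SATA DIMM flash"
```

For example, a low-profile server with no free drive bays and no free PCIe slot lands on SATA DIMM flash, matching the last bullet above.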

Next Steps

Expect consolidation, uptick in flash arrays

Flash options in array vs. server

Comparing network-, server- and storage-based flash caching


Join the conversation



Are you using server-side flash today? If so, what form factor do you use and for what application(s)?
Thanks for this simple summary Eric. I'm wondering if anyone has seen a direct comparison of performance between memory bus flash and PCIe bus flash. I've heard a lot of people claim memory bus flash is higher performance and lower latency, but I have yet to see a direct data-based comparison.

Also, on PCIe flash, with the NVMe standard the downside you mention of requiring a custom driver goes away.