There has been plenty of talk about all-flash arrays and hybrid flash/hard disk arrays, but deploying solid-state storage in a server is a popular alternative and one of the easiest ways to implement flash storage. There are a number of ways to deploy server-side flash, including SAS/SATA disk form factors, PCI Express card-based flash, non-volatile memory express (NVMe)-compliant flash and dual inline memory module (DIMM)-slot implementations. In addition, server-side flash technology can now be used as persistent storage, as cache and even shared among other servers in a cluster. And new form factors, such as NVDIMM, and new functions are on the way.
Disk drive form factors remain popular and come in three sizes: 3.5-inch, 2.5-inch and 1.8-inch. They fit into the same drive bays as hard disk drives (HDDs) and are typically hot-swappable. Some solid-state drives (SSDs) are the same thickness as HDDs, while others are thinner. The 2.5-inch SSD is the most common drive form factor for servers.
Dell recently announced a rack server model that supports 1.8-inch SSDs. Nine 1.8-inch SSDs will fit into the same physical space as two 3.5-inch SSDs. If you need lots of IOPS in a small space, the 1.8-inch SSDs could be the solution for you. Capacities are also on the rise. For example, Samsung offers an enterprise-class, 2.5-inch SSD with 3.8 TB of capacity. We can expect more SSDs that exceed 2 TB in capacity to become available during 2015. Enterprise-class SSDs now exceed the capacities available in enterprise-class 10K rpm and 15K rpm disk drives.
Another common form factor for servers is the PCI Express (PCIe) card. These cards install into a PCIe slot and provide very fast access to storage. In addition to their capacity, they are frequently described by their physical size -- an important consideration for some smaller servers -- with terms such as full-height, full-length (FHFL) and half-height, half-length (HHHL). These cards offer tremendous performance because connecting directly to the PCIe bus yields very low latency. The drawbacks are that they are captive to a single server and that installing or removing one requires a server power cycle. Many PCIe SSDs require a PCIe 2.0 x8 slot in the server, but some newer products connect via a PCIe 3.0 x4 slot.
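To see why a PCIe 3.0 x4 slot can stand in for a PCIe 2.0 x8 slot, it helps to work through the lane arithmetic. The Python sketch below is purely illustrative (the function name is ours); it applies each PCIe generation's per-lane transfer rate and encoding efficiency:

```python
def pcie_gbyte_per_sec(gen, lanes):
    """Approximate usable bandwidth of a PCIe link in GBps.

    PCIe 1.x/2.x use 8b/10b encoding (80% efficient);
    PCIe 3.x uses 128b/130b encoding (~98.5% efficient).
    """
    rates = {1: (2.5, 8 / 10), 2: (5.0, 8 / 10), 3: (8.0, 128 / 130)}
    gt_per_sec, efficiency = rates[gen]
    return gt_per_sec * efficiency * lanes / 8   # bits -> bytes

# A PCIe 2.0 x8 slot and a PCIe 3.0 x4 slot offer similar bandwidth:
print(pcie_gbyte_per_sec(2, 8))   # 4.0 GBps
print(pcie_gbyte_per_sec(3, 4))   # ~3.94 GBps
```

The two slot configurations land within a couple of percent of each other, around 4 GBps, which is why newer cards can deliver the same throughput from half the lanes.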
Server flash categories
Due to how NAND flash works, many makers of enterprise solid-state storage have divided their product offerings into three use cases: read-intensive, mixed read-write and write-intensive.
Read-intensive SSDs are ideal candidates for applications where content is written once or updated infrequently but may be read frequently. These drives suit apps with roughly a 90% read and 10% write mix. They provide excellent performance yet are less expensive than the other categories noted here, and in many cases they work well as boot devices.
Mixed read-write SSDs are designed for apps with a higher percentage of writes in their workloads. They are a bit more expensive than read-intensive SSDs, but not as expensive as write-intensive SSDs. These can also be used as boot drives.
Write-intensive SSDs are designed for those enterprise applications that perform many writes, such as database transactions and logging. They tend to be the most expensive of the three categories, but if you need high write performance, this is the device for you.
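The three categories above amount to a simple decision rule based on a workload's read/write mix. A minimal sketch follows; the thresholds are illustrative choices of ours, not vendor specifications:

```python
def ssd_category(read_pct):
    """Map a workload's read percentage to an SSD endurance category.

    Thresholds are illustrative: ~90% reads or more fits the
    read-intensive class; below ~50% reads, writes dominate and a
    write-intensive drive is the safer (if costlier) choice.
    """
    if read_pct >= 90:
        return "read-intensive"
    elif read_pct >= 50:
        return "mixed read-write"
    return "write-intensive"

# A web content server vs. a database log volume:
print(ssd_category(95))   # read-intensive
print(ssd_category(30))   # write-intensive
```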
An increasing number of enterprise servers are being configured with solid-state storage as boot drives. At Demartek, we've been doing this in our lab since 2010, and we like how the operating system starts more quickly and applications just seem snappier when loaded from SSDs. Boot drives typically don't require the same performance levels as mission-critical application volumes, so more reasonably priced SSDs can be used for server boot drives. Because of the performance boost, using an SSD as a boot drive is another way to extend the life of a server.
M.2 is a newer form factor designed for several different types of internally mounted devices, including SSDs. The card is 22 mm wide and comes in lengths ranging from 30 mm to 110 mm. These cards mount into a dedicated M.2 slot and provide up to 480 GB of capacity, which is more than enough for a boot drive. The form factor is already available for laptop and desktop computers and will be available for some servers, possibly by the time you read this article.
A form factor similar to M.2 but slightly older is mSATA. These SSDs are mounted on a card approximately the size of a business card that is installed internally in a system. This form factor also started in laptop computers and may be used in servers, but the M.2 form factor will probably replace mSATA over time.
Server vendors are adopting another form factor from the consumer market known as microSD cards. This storage technology is used in some mobile phones and other small computing devices, and is expected to appear in some servers as boot drives. A server implementation will most likely use two microSD cards for redundancy.
The Supermicro SATA DOM (Disk on Module), also called SuperDOM, form factor is a proprietary form factor available on Supermicro servers. This is a very small flash drive that fits into a special SATA slot (we discuss SATA in the "Interfaces" section) in the vendor's latest generation of server motherboards. This drive has enough capacity, up to 64 GB today, for a boot drive.
Memory channel connected flash
There are currently two memory channel flash form factors: non-volatile dual inline memory module (NVDIMM) and memory channel storage. Both form factors use the memory channel for the read and write operations to the device. They also connect into standard DIMM slots and provide storage, but do so in different ways.
NVDIMM incorporates DRAM, flash, control logic and an independent power source, typically supercapacitors. It operates as DRAM and, in the event of an unexpected power loss or system crash, saves the data in the DRAM onto the flash. When power is restored to the system, the DRAM data is restored from flash. NVDIMMs are available today in capacities from 4 GB to 16 GB. Because of the relatively small capacities available, it is difficult to think of this as a large capacity storage device. But it is good for write caching, metadata storage, in-memory databases, memory queuing and similar operations that need full DRAM performance but with persistence.
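The save/restore behavior described above can be modeled in a few lines. This toy Python class is purely illustrative -- real NVDIMMs do all of this in hardware, powered by the supercapacitors -- but it shows the sequence of events:

```python
class NVDIMMSim:
    """Toy model of the NVDIMM behavior described above: operates as
    DRAM, saves to flash on power loss, restores on power-up.
    Illustrative only; real devices do this in hardware."""

    def __init__(self):
        self.dram = {}    # volatile working memory
        self.flash = {}   # persistent backing store

    def write(self, addr, value):
        self.dram[addr] = value          # normal writes touch DRAM only

    def power_loss(self):
        self.flash = dict(self.dram)     # supercapacitor-powered save
        self.dram = {}                   # DRAM contents are lost

    def power_restore(self):
        self.dram = dict(self.flash)     # contents restored from flash


nvdimm = NVDIMMSim()
nvdimm.write(0x10, "txn-log-entry")
nvdimm.power_loss()
nvdimm.power_restore()
print(nvdimm.dram[0x10])   # txn-log-entry survives the outage
```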
Memory channel storage uses flash memory on a DIMM as a storage device. These devices are available in capacities up to 400 GB and deliver single-digit microsecond latencies. A number of applications that require extremely low storage latencies can take advantage of this technology. However, to make use of memory channel storage, the server motherboard BIOS/Unified Extensible Firmware Interface (UEFI) needs to know that both memory and storage may be present in DIMM slots and be able to distinguish between them. Some server makers are building motherboards with this capability today. Litigation is currently underway between the two main companies behind these products, Diablo Technologies and Netlist, so supplies may be constrained until the matter is resolved.
Interfaces
Some of the solid-state storage device form factors, such as the drive form factor, use a variety of interfaces, including SATA, SAS and PCIe/Non-volatile memory express (NVMe). Others use a single interface such as SATA or PCIe.
SATA has been used as a single-device storage interface for several years. Traditional SATA, as we know it today, has reached the end of the line with the 6 Gbps (0.6 GBps) interface. There will not be a faster version of traditional SATA. Instead, SATA is moving to SATA Express, which uses up to two lanes of a PCIe interface to achieve 2 GBps with PCIe 3.0 and 1 GBps with PCIe 2.0. SATA can be used for drive form factors, M.2, mSATA and SuperDOM.
Translating bits per second to bytes per second
Transfer rates for storage interfaces and devices are generally listed as MB/sec or MBps (megabytes per second), calculated as megabits per second (Mbps) divided by 10. Many of these interfaces, including SATA and SAS, use 8b/10b encoding, which maps each 8-bit byte into a 10-bit symbol for transmission on the wire; the extra bits provide DC balance, clock recovery and special control symbols. Because every 10 bits on the wire carry exactly one byte of data, dividing the bit rate by 10 is exactly correct. The 8b/10b encoding thus imposes a 20% overhead, (10 - 8)/10, on the raw bit rate.
Newer interfaces use different types of encoding schemes. Our popular Storage Interface Comparison reference page provides additional explanation of this topic.
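The divide-by-10 rule falls directly out of the 8b/10b math. A quick sketch, using the interface speeds mentioned in this article (the function name is ours):

```python
def usable_gbyte_per_sec(raw_gbit_per_sec, data_bits, total_bits):
    """Convert a raw line rate to usable GBps, accounting for the
    physical-layer encoding (e.g. 8b/10b sends 10 bits per data byte)."""
    return raw_gbit_per_sec * data_bits / total_bits / 8  # bits -> bytes

# SATA 6 Gbps with 8b/10b encoding: 6 * (8/10) / 8 = 0.6 GBps --
# the same answer as simply dividing the bit rate by 10.
print(usable_gbyte_per_sec(6, 8, 10))    # 0.6
# SAS 12 Gbps, also 8b/10b:
print(usable_gbyte_per_sec(12, 8, 10))   # 1.2
```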
Serial-attached SCSI (SAS) has been used for storage devices for several years and continues to move forward with new versions. The current version of SAS supports 12 Gbps, can connect multiple devices and has a roadmap to double its speed to 24 Gbps. SAS refers to both the SCSI protocol and the underlying physical interface. The SAS community also plans to take advantage of the PCIe physical interface with SCSI Express, which will carry the same SCSI protocol over up to four lanes of PCIe. SAS is used primarily for the drive form factor, and both SSDs and HDDs are available with the 12 Gbps SAS interface.
NVMe is a software interface designed for solid-state storage that uses PCIe as the physical interface, so it can be applied to the drive form factor, PCIe SSD cards and any of the newer PCIe form factors such as M.2. NVMe replaces the traditional SATA or SAS command protocols with a streamlined protocol that runs over PCIe. This allows for much greater performance and much lower latency. In our real-world testing of these devices in our lab, we have seen multiple GBps of performance from individual drive form factor and PCIe card SSDs.
BIO: Dennis Martin has been working in the IT industry since 1980. He is the founder and president of Demartek, a computer industry analyst organization and testing lab.