
A history of flash memory and its rise in the enterprise

Flash memory plays a huge role in enterprise storage. However, 16 years ago, SSDs barely existed. Find out the story behind this technology and how it came to dominate storage.

Over the past decade and a half, the advent of flash memory has remade the enterprise. SSDs, which 16 years ago were a costly curiosity, have become prevalent. System architectures have been reconfigured around flash, and new software to manage it is widespread.

This history of flash memory in the enterprise looks at how and why these major changes happened as flash memory emerged.

There are two types of flash memory: NOR and NAND. They are non-volatile technologies based on a floating gate transistor that can store a bit of data. Trouble is, it takes a long time to move a bit on or off one of these floating gate transistors, on the order of tens of milliseconds. This renders these technologies unappealing for use in main memories, caches and processor registers.

Why did flash suddenly become appealing?

Few people realize flash-based SSDs existed in the 1990s, and there were even dynamic RAM (DRAM)-based SSDs in the late 1970s. Dataram Corp. launched an HDD replacement called the Bulk Core SSD in 1976, built of core memory. And SunDisk (a company that later renamed itself SanDisk) launched a NOR flash memory-based SSD in 1991.

Why, then, did it take until 2004 for mainstream computing to embrace flash SSDs? The answer is cost. In 2004, NAND flash prices fell below DRAM prices (see Figure 1). Because flash SSDs are much faster than HDDs, the lower price enabled them to fit between HDDs and DRAM in the memory-storage hierarchy.

While this price crossover allowed flash to gain widespread adoption, other benefits also emerged that drove even greater acceptance. Let's look at the speed and cost argument, and then at the other unanticipated benefits flash brought to the system.

Figure 1. DRAM and NAND price/GB compared, according to Objective Analysis.

How cost and speed benefits changed the history of flash memory

SSDs are faster than HDDs but, because they communicate through a disk interface, slower than DRAM main memory. They can improve a system's cost or performance only if they cost less than DRAM; if they cost more, there must be some other reason to buy them.

Some systems used early SSDs because their processors couldn't address a large enough main memory, and swapping pages in and out of HDDs was too slow. More often, though, early SSDs were chosen for their ruggedness. A jet fighter is a good example: heavy vibration would cause an HDD's read/write head to frequently lose the track it was accessing, slowing accesses.

When NAND flash came out in 1991, it was well suited to SSDs. However, even though it was cheaper than the NOR flash that preceded it, its cost was several times that of DRAM, so flash-based SSDs were still too expensive for most applications.

In 2004, because of production volume increases and economies of scale, NAND flash prices dropped below DRAM prices. This changed everything.

Now NAND flash SSDs could be used to reduce the cost of the system. As the price differential between DRAM and SSDs grew, their use expanded.

In flash memory's history, NAND SSDs first gained acceptance in high-speed SANs. At that time, SANs used tiered storage that combined slower 5,400-7,200 RPM capacity HDDs with faster 10,000-15,000 RPM enterprise HDDs to provide both capacity and speed. To accelerate the enterprise HDDs further, high-end models used only a fraction of each drive's capacity, limiting head motion through a process called short stroking, or de-stroking. Short stroking multiplied these drives' cost per GB to roughly that of a flash SSD. Because the flash SSD was faster than the short-stroked HDDs, those were naturally the first HDDs that flash SSDs replaced once EMC launched the first SSD/HDD SAN in 2008.

Figure 2. Flash memory timeline.

Energy, hardware and software licensing savings

SSDs consumed less power than HDDs, and SSD suppliers were quick to define a metric that highlighted this strength: IOPS per watt. Not only did SSDs reduce data center power consumption, but they also reduced heat, providing additional savings on air conditioning costs.

SSDs had additional and, in some cases, unexpected benefits. Database users often couldn't process large databases fast enough on one server, so they resorted to sharding: splitting the database among multiple servers that would each process a portion at the same time. While this approach solved the speed problem, it also added complexity to database management.

When SSDs became an option, users found the new drives accelerated the systems enough to enable a single server to run jobs that were once sharded across several servers. This approach enabled them to use less hardware and cut software licensing fees because database software is usually licensed per system or processor. SSDs paid for themselves several times over in the first year.

How flash forced the rethinking of computer architecture

Flash didn't automatically fit into existing computing architectures. Decades of hardware design and software optimization centered on the assumption that the HDD was slow and would never get faster. Once systems started to use flash SSDs with HDD interfaces and protocols, those interfaces and protocols came under scrutiny.

Figure 3 illustrates how the interface became the focus of latency concerns once SSDs came on the scene.

Figure 3. SSDs speed things up.

The blue portion of the bars represents the latency of the storage media: the spinning disk for the HDD and the NAND flash for the SSD. Look closely and you'll also see a thin sliver of other colors at the right-hand end of each bar. This represents the time consumed by other parts of the access: I/O protocols, software overhead and even the time the CPU takes to service an interrupt.

In the HDD's case, this tiny portion of overall latency is inconsequential and never received much attention when systems were tuned for speed. But once SSDs slashed the media latency, those other functions became an appreciable part of overall latency, accounting for about a quarter of an SSD's total access time. That's when flash memory shifted the focus of improvement to the interface.
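To see why that overhead suddenly mattered, consider a rough back-of-the-envelope calculation in Python. The latency figures are illustrative assumptions, not the measured values behind Figure 3, but they show how a fixed interface and software overhead goes from negligible to roughly a quarter of each access once the media gets fast.

# Illustrative sketch: how fixed protocol and software overhead becomes
# significant once the storage media itself gets fast. All latency values
# below are assumed round numbers for a SATA-era system, not measured data.

def overhead_share(media_latency_us: float, overhead_us: float) -> float:
    """Return the fraction of total access latency spent outside the media."""
    return overhead_us / (media_latency_us + overhead_us)

OVERHEAD_US = 25        # assumed: protocol, driver and interrupt handling time
HDD_MEDIA_US = 10_000   # assumed: ~10 ms of seek plus rotational latency
SSD_MEDIA_US = 75       # assumed: NAND read plus controller time

print(f"HDD: overhead is {overhead_share(HDD_MEDIA_US, OVERHEAD_US):.1%} of each access")
print(f"SSD: overhead is {overhead_share(SSD_MEDIA_US, OVERHEAD_US):.1%} of each access")
# Prints roughly 0.2% for the HDD and 25% for the SSD with these assumptions.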

From IDE to SATA 2 to SATA 3

Figure 3 compares two SATA 3 drives, but this problem was observed even before the original Integrated Drive Electronics (IDE) interface SSDs were replaced with SATA models.

Latency concerns fueled the move from the IDE interface to SATA. SSDs that were originally supplied with an IDE interface moved rapidly to SATA in 2008. But the original SATA spec, at 1.5 Gbps, still held SSDs back, and SATA 2 quickly followed at 3 Gbps. Still dissatisfied, the industry overcame complications in 2009 to launch the SATA 3 spec, supporting a 6 Gbps interface.

From Fibre Channel and SCSI to SAS and beyond

Meanwhile, the interfaces originally designed for the enterprise underwent their own transition. When EMC began to use STEC SSDs in its systems in 2008, the best available interface was Fibre Channel (FC). Thanks to the work done to develop SATA, the SCSI interface evolved into SAS, a high-speed, enterprise-worthy interface that Pliant pioneered for SSDs in 2008. SAS, along with PCI, displaced FC and SCSI in the SSD market. As with SATA, though, the first SAS generation, at 3 Gbps, was too slow, and 6 Gbps SAS II was introduced in 2009.

Still, SAS and SATA, rooted in HDD-friendly protocols, were clumsy for SSDs. Something better was needed.

Engineers at Fusion-io saw RAID cards and host bus adapters (HBAs) being used to attach multiple HDDs to the PCI bus, a high-speed interface introduced in the early 1990s to attach high-speed graphics cards directly to Intel CPUs. This approach connected parallel HDDs to the processor through an HBA card, letting storage communicate with the processor at much higher bandwidth. The Fusion-io engineers realized that NAND flash would make even better use of the PCI bus bandwidth, and they created the first PCI SSD in 2008.

Several imitators followed, simply attaching multiple SSDs to an HBA card. But the designs didn't share a common command protocol, and some even used different architectures. Intel stepped in to standardize the PCI SSD interface and add high-speed, SSD-specific protocols, and the resulting NVMe protocol was released in 2011. NVMe improved speeds over SATA SSDs, as shown in Figure 4.

Figure 4. NVMe speeds things up even more.

Figure 4 doesn't show NVMe's queuing capabilities, which support up to 64K queues, each as deep as 64K commands, without conflicts. Although current systems don't yet avail themselves of this enormous queue depth, the capability will become important over time to further accelerate I/O accesses.

Virtualization, flash and DAS

An interesting thing happened on the way to using SSDs: Large system architecture migrated from local storage to shared storage and back again.

This cycle happened because of the timing of virtualization and VMware, which launched its first virtualization product in 1999. Server farms that used to dedicate one server to Outlook, another to the company website and so on could now let any server perform any task, directing resources more efficiently and lowering costs.

Now that any server could perform any task, the storage that went with those tasks had to be made available to any server. Local storage was abandoned and shared storage became the norm. Shared storage, with access times in the tens of milliseconds, was connected to the servers with a high-speed LAN that added a couple more milliseconds to storage latency. This necessary evil was seen as a small price to pay.

When SSDs entered the picture, their latency was 1/100th or less that of the HDDs they replaced, and suddenly the latency the LAN added was 10 times that of the SSD itself. Users realized only about 10% of the benefit of the expensive SSD. Clearly, this was unacceptable.
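The arithmetic behind that 10% figure is easy to sketch. The Python snippet below uses assumed round numbers, an HDD at 10 ms, an SSD at 100 microseconds and a LAN that adds 1 ms, rather than figures from any particular system.

# Rough arithmetic behind the "only 10% of the benefit" observation.
# All three latency values are assumptions chosen for illustration.

HDD_MS = 10.0    # assumed local HDD access time
SSD_MS = 0.1     # assumed SSD access time, 1/100th of the HDD
LAN_MS = 1.0     # assumed latency added by the storage network, 10x the SSD

speedup_local = HDD_MS / SSD_MS                          # SSD replaces HDD, no LAN
speedup_shared = (HDD_MS + LAN_MS) / (SSD_MS + LAN_MS)   # both sit behind the LAN

print(f"Potential speedup with a local SSD: {speedup_local:.0f}x")
print(f"Realized speedup behind the LAN:    {speedup_shared:.0f}x")
print(f"Fraction of the benefit realized:   {speedup_shared / speedup_local:.0%}")
# With these assumptions: 100x possible, 10x realized, so about 10% of the benefit.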

The solution was to move the SSD back into the server, but then virtualization wouldn't work. Fusion-io addressed this issue by releasing its ioCache software in 2013. The software managed direct-attached storage (DAS) as a cache for the shared storage, using coherency techniques fine-tuned for processor caches. This storage virtualization software gained rapid acceptance and is now a key feature in several systems.

The all-flash array alternative

Other vendors took a different approach, adding flash to existing system architectures rather than making the more revolutionary change of using flash in DAS. With this approach, users had to live with the LAN's latency but could take advantage of the SSDs in the storage array.

Conventional SANs were the first to do this, replacing the short-stroked enterprise HDDs in the system with SSDs. EMC was out front, introducing such a system in 2008. Other vendors made the argument that a system could be less complex and perform more consistently if it replaced all of its HDDs with flash. Texas Memory Systems, having produced DRAM SSDs for more than a decade, was one of the first vendors in this market in 2008.

Innumerable startups popped up between 2005 and 2017 to produce all-flash arrays, including Gridiron, Pure Storage, SolidFire, Violin Memory and XtremIO. (I count 59 all-flash array companies, though I've probably missed a few.) Established SAN vendors acquired most of these companies, but a few remained independent -- such as Pure Storage and Violin Memory -- with mixed results.

Managing flash: Caching vs. tiering

One interesting software change involved the way data was managed. With HDD SANs, data was moved back and forth between fast and slow HDDs, an approach called tiering. In a tiered system, there was never more than one copy of a block of data, so multiple inconsistent copies couldn't coexist. But it took time to swap data from a fast disk to a slow one, and back again, every time a block of data needed to be accelerated.


Tiering was even more of a problem with SSDs, so architects turned to the caching algorithms that had been fine-tuned in processors since the 1960s. This approach had already been used in storage with memcached, a program written in 2003 that enabled a server on a LAN to cache data from a SAN. While the caching approach made it more challenging to ensure data consistency, those problems had already been worked out for processor caches and simply needed to be implemented in software. Such software first began to appear in 2013.
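The difference between the two approaches can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical model, not any vendor's implementation: tiering moves a block so only one copy ever exists, while caching copies the block and must track dirty data to keep the copies consistent.

# Hypothetical sketch contrasting tiering and caching, for illustration only.

class TieredStore:
    """Tiering: a block lives in exactly one tier at a time."""
    def __init__(self):
        self.fast, self.slow = {}, {}

    def promote(self, key):
        """Move a hot block to the fast tier; no second copy ever exists."""
        if key in self.slow:
            self.fast[key] = self.slow.pop(key)   # move, don't copy

class CachedStore:
    """Caching: the fast tier holds a copy; consistency must be managed."""
    def __init__(self, backing: dict):
        self.backing = backing                    # shared storage, e.g. the SAN
        self.cache, self.dirty = {}, set()        # local fast copy plus dirty set

    def read(self, key):
        if key not in self.cache:                 # cache fill: copy the block in
            self.cache[key] = self.backing[key]
        return self.cache[key]

    def write(self, key, value):
        self.cache[key] = value
        self.dirty.add(key)                       # copies now differ; remember it

    def flush(self):
        for key in self.dirty:                    # write back so both copies agree
            self.backing[key] = self.cache[key]
        self.dirty.clear()

The tiered store never has to reconcile copies, but every promotion pays the cost of moving data. The cached store answers reads from the fast copy immediately and reconciles later, which is why the consistency bookkeeping matters.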

Managing flash: Software-defined storage

Now that dedicated caching software enabled DAS to act as a cache for SAN data, system companies began to look for ways to flexibly manage data at all levels of the storage hierarchy. Ideally, this would be a piece of software that could be used in any system configuration, no matter what hardware was attached, and that would allow expansion as users added hardware to support growing demand. From this concept, software-defined storage (SDS) was born in 2013. SDS not only supported this kind of operation, but some renditions enabled one server to examine and even modify the contents of another server's DAS.

From this new approach to managing higher-speed storage emerged the idea that all storage could be buried within the servers, as long as communication between servers didn't impede data access. This gave birth to storage fabrics and the NVMe-oF protocol, which can use remote direct memory access (RDMA) to remove processor involvement when transferring data from one server's DAS to another's.

Big changes since 2004

We've spanned more than four decades with this history of flash memory, as we migrated from costly core-based SSDs, through economical flash-based SSDs, to today's widespread use of SSDs. The broad adoption of SSDs placed a keen focus on interface speeds and the LAN bottleneck, resulting in new interfaces, such as NVMe, and new system configurations, including the resurgence of DAS.

With these new configurations, data must be managed between the old and new elements of the storage hierarchy, giving rise to caching software, software-defined storage and today's work on storage fabrics.

All the while, performance increased at an impressive, if not alarming, rate, and systems and storage have become more efficient. But the story of flash memory isn't over, and the best is yet to come.

