Persistent storage-class memory to revolutionize data centers

Since it doesn't lose data during power outages, persistent memory will revolutionize direct-attached storage in particular and the cost/performance ratio of computing overall.

Things are moving backward. Only a few years ago, virtualization burst onto the scene and brought great economies to the data center. But virtualization made direct-attached storage -- that is, storage within the server -- a liability, because the virtualization model requires the sharing of all resources, including processors, storage, code and data. Then PCIe SSDs roared onto the scene in 2008, and the push to eliminate DAS met some resistance.

Fusion-io was the first to show users they could do a lot more with high-speed flash storage in the server, even in a virtualized system. Soon sys admins learned this was also true of slower, cheaper SATA SSDs.

Today, storage is poised to move even deeper into the server, sitting right on the memory bus using storage-class memory, as advanced users migrate toward in-memory compute. Let's explore why this is happening and what changes will result from adopting this "new" storage technology.

Learning from history

More than a decade ago, semiconductor experts hypothesized that conventional memory types such as dynamic RAM (DRAM) and NAND flash were about to hit their scaling limit. They couldn't continue to be cost-reduced every year by shrinking the size of the transistors. This has always been a concern in the semiconductor world. Everyone worries about "the end of Moore's Law," a time when the fundamental dynamic of the chip business would come to a screeching halt, and chips would no longer drop in price year after year.

After a lot of thought, great minds in the industry decided to embark on a path of new materials and technologies that could get past this barrier, creating a class of memory chips we refer to today as emerging memories. These would have done well except for one small problem. Other great minds, focusing on standard DRAM and NAND flash, found ways to bypass the much-anticipated scaling limit, taking these technologies to sizes, densities and low prices inconceivable only a few years earlier. Not only did they do this once, but since the advent of the new memory technologies, these particular great minds have repeatedly found ways to circumvent issues previously considered insurmountable.

So are we off-topic? Not at all. The changes these researchers envisioned led to today's interest in in-memory databases and other computing technology. These emerging memories -- and there are a lot of them -- have two advantages over DRAM:

  1. They have no scaling limit, or their scaling limit is well beyond that of DRAM. This means when DRAM prices can no longer be reduced, the emerging memories will take on that job, continuing price reductions over the long term.
  2. They're persistent. That is, they don't lose their data when power is removed.

This last point piqued the interest of yet more great minds at IBM's Almaden research lab in San Jose, Calif., more than a decade ago. Researchers there decided they should encourage programmers to write code that anticipated the emergence of memory that could double as storage, and they even gave it a name: storage-class memory.

Storage-class memory defined

According to IBM Research-Almaden, storage-class memory combines the benefits of a solid-state memory, such as high performance and robustness, with the archival capabilities and low cost of conventional hard disk magnetic storage.

Now, we all know that memory is memory and storage is storage, and if you have important data, you certainly don't trust it to memory, because memory loses its contents the moment power is cut. But storage is really slow. So how do you get the most out of a system with this horrible speed constraint?

SSDs have helped. An SSD is roughly a thousand times as fast as an HDD. Even so, SSDs are still roughly a thousand times slower than memory. Wouldn't it be great if you didn't have to use either an SSD or an HDD, but could simply do your data manipulation in memory and leave it at that?

Best places to store data

Well, what if we could use some of that new emerging memory technology as storage? That sounds good, doesn't it? Trouble is, it's wretchedly expensive. Given that these technologies were developed to provide memory cost reductions once DRAM could no longer deliver them, that's pretty darn ironic.

It all comes down to economies of scale, not to mention that every one of these technologies uses new materials that aren't nearly as well understood as today's mainstay mass-production silicon technologies. To come even close to DRAM and NAND flash prices, the industry will have to figure out how to produce one or more of these memory technologies in large volumes. Trouble is, there won't be much demand for any of them until prices come down close to those of DRAM or NAND flash.

What about Optane?

Right about now, you might be wondering about Intel's new 3D XPoint memory, branded as Optane, which is due out in a DIMM format in 2018. Won't this be cheaper than DRAM?

I know a lot about Optane; I wrote the industry's first market report on it in 2015. To make sense in a system, Optane must be cheaper than DRAM, and Intel will make sure that's the case. But that doesn't mean Intel will sell Optane at a profit. It's nearly certain Intel will sell Optane at a big loss for the first couple of years it's on the market.

The reason Intel would do this is so it can sell more expensive processors. Intel can't sell anyone the next-higher-performing CPU if the motherboard it's plugged into slows performance to the point that it's no faster than its slower counterparts. Intel sees Optane as a necessary response to this problem, and the company is quite willing to lose, say, $10 on the Optane memory on every motherboard as long as that lets the company sell a processor that carries a $50-higher price tag.

This means system developers will soon have an excellent opportunity to create software and systems around persistent memory -- today's term for storage-class memory. Intel is about to provide its Optane DIMMs at prices below those of DRAM. And because the memory is persistent, programs won't have to count on slow storage to ensure that data remains intact after a power failure.

Intel's Optane DIMM, based on 3D XPoint memory

Industry organizations are already working hard to make sure adequate support becomes available even before the hardware shows up. SNIA, the Storage Networking Industry Association, has been shepherding standards through various standards bodies to keep programmers from having to reinvent the wheel. SNIA's Persistent Memory Programming Model is embodied in Linux kernel versions 4.4 and higher, as well as in Microsoft Windows systems based on NTFS. Certain database programs, such as Microsoft SQL Server and SAP HANA, are being adapted to storage-class memory.

That's a good start, and more programs are doubtless in the works. Microsoft has introduced Windows 10 Pro for Workstations, which supports nonvolatile DIMMs (NVDIMMs), and that's likely to be a platform that spawns a lot of new persistent memory applications.
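
To make the persistent memory programming model concrete, here's a minimal sketch in C of the approach SNIA describes for Linux: memory-map a file that lives on a DAX-capable file system, update it with ordinary store instructions and then make the update durable. The mount point and file name below are assumptions for illustration only, and the sketch uses plain POSIX msync() for durability; a production application would more likely flush CPU cache lines directly through a persistent memory library.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical file on a DAX-mounted persistent memory file system. */
    #define PMEM_FILE "/mnt/pmem/example.dat"
    #define PMEM_SIZE 4096

    int main(void)
    {
        int fd = open(PMEM_FILE, O_CREAT | O_RDWR, 0644);
        if (fd < 0) { perror("open"); return 1; }
        if (ftruncate(fd, PMEM_SIZE) != 0) { perror("ftruncate"); return 1; }

        /* Map the file into the address space; on a DAX file system,
           loads and stores reach the persistent media directly. */
        char *pmem = mmap(NULL, PMEM_SIZE, PROT_READ | PROT_WRITE,
                          MAP_SHARED, fd, 0);
        if (pmem == MAP_FAILED) { perror("mmap"); return 1; }

        /* Ordinary store instructions update "storage" at memory speed. */
        strcpy(pmem, "hello, storage-class memory");

        /* Make the update durable before relying on it. */
        if (msync(pmem, PMEM_SIZE, MS_SYNC) != 0) { perror("msync"); return 1; }

        munmap(pmem, PMEM_SIZE);
        close(fd);
        return 0;
    }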

Programming without hardware

Economical persistent memory DIMMs aren't available yet, so how does a programmer test and verify new persistent memory software? That's where NVDIMMs come in; they're a marvelous way of creating software today for the systems of tomorrow. An NVDIMM -- specifically, an NVDIMM-N -- is a clever way of replicating the benefits of a persistent memory DIMM before persistent memory becomes available. This DIMM is based on ordinary DRAM, but it also contains a NAND flash chip and a controller, and it has a dedicated emergency power pack wired in to cover the period right after a power failure.

As soon as a power failure is detected, the NVDIMM isolates itself from the rest of the system and switches to the emergency power pack. The controller takes over and commands the NAND flash to make a copy of all of the data in the DRAM. Once this step is complete, the NVDIMM powers itself down. When the computer's power is restored, the controller reloads all of the data from the NAND flash into the DRAM and presents the processor with a DRAM full of valid data.

It's more complicated than this, of course. The NVDIMM can't isolate itself from the processor until after the processor's registers and dirty cache lines have been written into it, along with the processor status. All of this must be restored when operation resumes. But NVDIMM makers working with the software and BIOS communities have worked out the details. It's not difficult to take advantage of one of these.
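
The same concern shows up at the application level: a store isn't durable until its dirty cache line has left the CPU caches and reached the power-protected DIMM. As a rough sketch (my illustration, not a vendor-prescribed sequence), a persistent write on x86 can be followed by explicit cache line flushes and a store fence, shown here with the widely available _mm_clflush and _mm_sfence intrinsics; newer instructions such as CLFLUSHOPT or CLWB would typically be preferred where the hardware supports them.

    #include <emmintrin.h>   /* _mm_clflush, _mm_sfence (x86 SSE2) */
    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    #define CACHE_LINE 64

    /* Copy a buffer into a memory-mapped persistent region and flush every
       cache line it touched, so the data reaches the power-protected domain
       before the caller treats it as safely stored. */
    void persist_copy(char *pmem_dst, const char *src, size_t len)
    {
        memcpy(pmem_dst, src, len);

        uintptr_t start = (uintptr_t)pmem_dst & ~(uintptr_t)(CACHE_LINE - 1);
        for (uintptr_t p = start; p < (uintptr_t)pmem_dst + len; p += CACHE_LINE)
            _mm_clflush((const void *)p);

        /* Order the flushes ahead of any later "data is durable" signal. */
        _mm_sfence();
    }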

NAND flash and controller side of an Agiga NVDIMM-N

Rather than dwell on the details here, I will point you to a just-released report: "Profiting from the NVDIMM Market." There's a lot to NVDIMM and its nascent market, and this report explores that in depth.

Sharing persistent memory

All of this is great for DAS, but, remember, the world is virtualized, and virtualization requires shared storage, even if that storage is in memory. How can we share all of this fast storage?

When DAS-based SSDs started to migrate back into the server, this became an issue, and the great minds at several software startups rose to the challenge. They created code that let DAS appear as shared storage, or that presented the DAS to the server as memory rather than as persistent storage. This left the primary storage in the shared pool, but it relieved a lot of the network traffic and satisfied a larger share of the virtual memory system's page misses than was possible with memory alone.

By this time, however, the network had become the data center's biggest bottleneck. The network was certainly fast enough for the HDDs it was conceived for, but it was a huge impediment to all-flash arrays and would be nothing but deadweight for storage-class memory. What to do about that?

Just as storage has moved closer to the processor, moving the network closer in yields not only faster storage, but faster shared storage. That promises to do great things for virtualized systems.

Great minds are once again working their magic. System architects have started to develop several ways to connect CPUs, memory and storage across multiple platforms at speeds that leave conventional networks in the dust.

The industry is defining a number of standards to achieve this. Gen-Z, the Open Coherent Accelerator Processor Interface (OpenCAPI), Remote Direct Memory Access (RDMA) and NVMe over Fabrics are the ones with the biggest followings. These efforts take on the arduous task of bringing the network closer to the processor -- just as has been done with storage -- at lightning-fast speeds, without introducing serious coherency problems. But how do you keep the processors from corrupting each other's data? This isn't easy, but great minds are working on it.

In the end, expect to see systems in which storage of various costs and speeds is peppered across memory, SSDs and HDDs, all accessible to any processor through high-speed coherent networks. This should bring benefits to the cost/performance ratio of computing systems, making the data center of tomorrow perform in ways that would have been only a pipedream a few years ago.
