What the 4 PMem App Direct Modes mean for your organization

Do you know the difference between the four Intel Optane persistent memory App Direct Mode types? Find out how you can use each of them most effectively.

PMem is short for persistent memory. It's the informal term for Intel Optane DC Persistent Memory Modules.

PMem is Optane in a DDR4 DIMM form factor. It works only with Intel CPUs, starting with Cascade Lake, and doesn't work with AMD, Arm or IBM Power CPUs. PMem comes in significantly greater capacities than dynamic RAM (DRAM) DIMMs: whereas DRAM DIMMs are typically available in capacities from 4 GB to 128 GB, PMem comes in 128 GB, 256 GB and 512 GB capacities. It also costs considerably less per gigabyte than the DRAM DIMMs with which each PMem module must be paired. The reason for this pairing becomes apparent when looking at how PMem is accessed.

PMem's two accessibility modes are Memory Mode and Application Direct, or App Direct, Mode. Memory Mode uses PMem as higher-capacity, lower-performance memory. DRAM typically has read/write latency of approximately 14 nanoseconds; PMem latency is approximately 350 nanoseconds, roughly 25 times slower than DRAM. That's why DRAM sits in front of the PMem as a read/write cache to mitigate the performance difference.

Unfortunately, Memory Mode makes PMem volatile memory. When there's a power failure, all data in the PMem is lost. In this mode, the application has no direct access to the PMem, because every access goes through the DRAM cache. In other words, Memory Mode doesn't provide memory persistence. On the other hand, Memory Mode's biggest advantage is performance, because the DRAM cache masks the slower PMem. When an application such as online transaction processing demands consistent data persistence, including during power failures, App Direct Mode is required.

Data persistence at near-memory speeds

App Direct Mode enables data to persist at near-memory speeds, comparable to non-volatile DIMMs (NVDIMMs) under the Storage Networking Industry Association's NVM Programming Model, but at a much lower cost. App Direct Mode treats PMem as extremely fast non-volatile storage. There are three principal categories for this mode: raw device access, file API access and memory access. File API access has two subcategories: the standard file system (FS) and the NVM-aware FS.

Each of these categories lets an application bypass different levels of software; the more software levels are bypassed, the lower the latency. There are advantages and disadvantages to each approach.

Raw device access

Raw device access enables applications to read and write directly through the PMem driver in the host OS. The driver accesses the PMem, whose address space is arranged in 512-byte to 4 KB blocks. It's backward compatible for most applications, which simplifies implementation. Raw device access is fast -- noticeably faster than the file system interface. But it's not nearly as fast as memory access, and few applications use it today.
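As a rough illustration, here is a minimal sketch of block-style I/O against a PMem namespace exposed as a raw block device on Linux. The device path (/dev/pmem0) and block size are assumptions; they depend on how the namespace was configured on a given host, and writing to a real device this way would overwrite whatever is stored there.

/* Minimal sketch of raw (block-style) access to a PMem namespace exposed
 * as a block device. /dev/pmem0 is an assumed path; for illustration only,
 * since writing to the start of a real device overwrites data there. */
#define _GNU_SOURCE          /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BLOCK_SIZE 4096      /* I/O happens in 512-byte to 4 KB blocks */

int main(void)
{
    int fd = open("/dev/pmem0", O_RDWR | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires a block-aligned buffer */
    void *buf;
    if (posix_memalign(&buf, BLOCK_SIZE, BLOCK_SIZE)) { close(fd); return 1; }

    memset(buf, 0, BLOCK_SIZE);
    strcpy(buf, "raw device access example");

    /* Write one block at offset 0, then read it back */
    if (pwrite(fd, buf, BLOCK_SIZE, 0) != BLOCK_SIZE) perror("pwrite");
    if (pread(fd, buf, BLOCK_SIZE, 0) != BLOCK_SIZE) perror("pread");

    printf("read back: %s\n", (char *)buf);
    free(buf);
    close(fd);
    return 0;
}

The code path is short -- application, PMem driver, PMem -- but the application still thinks in blocks rather than bytes.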

File API access -- FS

With file system access, the application uses the file system API to read and write, exactly as it would with any other storage. The application issues file I/O calls, the FS talks to the PMem driver, and the driver talks to the PMem itself. It's simple and application-backward compatible, but it's also the slowest of the very fast App Direct Modes.
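For illustration, here is a minimal sketch of ordinary POSIX file I/O against a file system created on a PMem namespace. The mount point (/mnt/pmem) is an assumption; nothing in the code itself is PMem-specific, which is the point of this mode.

/* Standard file-API access: the code is unchanged from any other storage.
 * /mnt/pmem is an assumed mount point for a file system built on PMem. */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    int fd = open("/mnt/pmem/journal.log", O_CREAT | O_RDWR, 0644);
    if (fd < 0) { perror("open"); return 1; }

    const char *entry = "txn 42 committed\n";
    if (write(fd, entry, strlen(entry)) < 0) perror("write");

    /* The usual fsync still travels through the FS and the PMem driver */
    fsync(fd);
    close(fd);
    return 0;
}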

File API access -- NVM-aware FS

An NVM-aware FS takes advantage of fast NVM storage, such as PMem, and runs significantly faster than the standard file system. Many OS file systems have been made NVM-aware. NVM-aware FS is reasonably fast and easy to use.

Memory access

This memory access approach enables applications to access PMem as if it were DRAM, using memory semantics in byte mode. Memory access is handled through the direct access (DAX) path, and Intel has created a set of calls that recognize the non-volatile memory on the memory bus. Memory access bypasses all the software layers the other App Direct categories require, making it by far the fastest way to access PMem. However, it's important to remember that applications must be rewritten to use the DAX calls; memory access isn't application-backward compatible.
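Here is a minimal sketch of what this looks like with libpmem, one of Intel's PMDK libraries that wraps the DAX path. It assumes a file on a DAX-mounted PMem file system; the path and region size below are placeholders.

/* Minimal sketch of App Direct memory access using PMDK's libpmem.
 * The file path and size are assumptions. Link with -lpmem. */
#include <libpmem.h>
#include <stdio.h>
#include <string.h>

#define POOL_SIZE (64 * 1024 * 1024)   /* 64 MB region, arbitrary */

int main(void)
{
    size_t mapped_len;
    int is_pmem;

    /* Map a file on a DAX-mounted PMem file system directly into the
     * address space -- loads and stores now hit PMem, not a block stack. */
    char *addr = pmem_map_file("/mnt/pmem/appdirect.pool", POOL_SIZE,
                               PMEM_FILE_CREATE, 0666,
                               &mapped_len, &is_pmem);
    if (addr == NULL) { perror("pmem_map_file"); return 1; }

    /* Store data with ordinary, byte-addressable memory semantics */
    strcpy(addr, "persisted at memory speed");

    /* Flush CPU caches so the store is durable; fall back to msync if
     * the mapping didn't land on real PMem. */
    if (is_pmem)
        pmem_persist(addr, strlen(addr) + 1);
    else
        pmem_msync(addr, strlen(addr) + 1);

    pmem_unmap(addr, mapped_len);
    return 0;
}

The key difference from the file API modes is that stores reach PMem through the mapping itself, and pmem_persist flushes CPU caches rather than issuing storage I/O.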


Nevertheless, memory access is the App Direct category software vendors use when performance and memory persistence are both demanded. For example, both SAP HANA and Oracle use memory access. Oracle's on-premises Exadata X8M, as well as its Exadata Cloud Service X8M and Exadata Cloud@Customer, make unique and clever use of PMem. The Exadata system separates database servers from storage servers and interconnects them over 100 Gbps Ethernet. Oracle puts 1.5 TB of PMem in every storage server and none in the database servers. Every database server has access to every storage server using remote direct memory access over converged Ethernet (RoCE). Each storage server's PMem is triple mirrored to protect against the rare case of a storage server failure. Oracle can put as much as 27 TB of PMem in a rack and 18 or more racks in a system. That's potentially half a petabyte or more of PMem in a single system.

Two new vendors have emerged that aim to bring the simplicity and speed of Memory Mode to the memory persistence of memory access. Formulus Black and MemVerge have software that enables applications to use memory access without any programming changes, making it backward compatible for applications. The software uses DRAM as a cache, the same as Memory Mode, but the data in PMem remains persistent even when power is lost. DRAM is restored from PMem snapshots or backups almost instantaneously. Tests have shown exceptional performance from both vendors.


What does this mean for the enterprise?

If speed, low cost, simplicity and application-backward compatibility are all that's required, and memory persistence isn't, Memory Mode will suffice. If persistence, application-backward compatibility, reasonable performance and the lowest cost are required, the App Direct Modes of raw device access, file API FS and file API NVM-aware FS will work just fine.

If good performance and memory persistence are required, then memory access is the best bet. For those who don't want to make any adjustments to their applications but want the performance and persistence of memory access, Formulus Black and MemVerge are viable options.


Tips for adjusting applications for memory access

The first thing to remember is that PMem is more than just high-performance storage. It's a new tier that sits between memory and high-performance storage. This makes it ideal for latency-sensitive volatile data, such as journals, write-ahead logs and even database redo logs.

Keep in mind that the application no longer has to flush cached writes out to storage to protect them against a power failure or outage; data written to PMem is already persistent. Failing to take this into account can reduce performance and neutralize the advantages of memory access.

Intel provides extensive Persistent Memory Development Kit (PMDK) libraries, so developers don't have to reinvent the wheel. The libraries have an effective, easy-to-use API. Taking advantage of them saves time, effort and money, shortening time to market or completion.
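As one example of what the PMDK handles for you, here is a hedged sketch of a crash-consistent update using libpmemobj, one of the PMDK libraries. The pool path and layout name are placeholders, not anything prescribed by the library.

/* Hedged sketch of a crash-consistent update with PMDK's libpmemobj.
 * The pool path and layout name are assumptions. Link with -lpmemobj. */
#include <libpmemobj.h>
#include <stdint.h>
#include <stdio.h>

struct counter_root {
    uint64_t committed_txns;
    char     last_entry[64];
};

int main(void)
{
    PMEMobjpool *pop = pmemobj_create("/mnt/pmem/counter.pool",
                                      "counter_layout",
                                      PMEMOBJ_MIN_POOL, 0666);
    if (pop == NULL) {
        /* The pool may already exist from a previous run */
        pop = pmemobj_open("/mnt/pmem/counter.pool", "counter_layout");
        if (pop == NULL) { perror("pmemobj_open"); return 1; }
    }

    PMEMoid root_oid = pmemobj_root(pop, sizeof(struct counter_root));
    struct counter_root *root = pmemobj_direct(root_oid);

    /* All stores inside the transaction become durable atomically --
     * a power failure mid-update rolls back to the previous state. */
    TX_BEGIN(pop) {
        pmemobj_tx_add_range(root_oid, 0, sizeof(struct counter_root));
        root->committed_txns++;
        snprintf(root->last_entry, sizeof(root->last_entry),
                 "txn %lu", (unsigned long)root->committed_txns);
    } TX_END

    printf("committed_txns = %lu\n", (unsigned long)root->committed_txns);
    pmemobj_close(pop);
    return 0;
}

The transaction machinery, cache flushing and allocation bookkeeping are exactly the parts a team would otherwise have to build and test itself.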

One of the leading use cases for memory access is key-value stores. PMemKV, the persistent in-memory key-value store, is an embedded key-value store built for PMem. That means the application doesn't have to read full storage blocks, because it accesses keys and values directly in the PMem. There's no allocation of volatile memory buffers. Data is modified in place without read-modify-write operations, greatly reducing write amplification. And even though PMem has 60 times the write endurance of NAND flash, write amplification still reduces the life of the PMem over time.
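Here is a hedged sketch using PMemKV's C API with its concurrent hash map ("cmap") engine. The pool path and the exact configuration keys shown are assumptions drawn from PMemKV's documented configuration options and may differ between versions.

/* Hedged sketch of the PMemKV C API with the "cmap" engine. The pool
 * path and config keys ("path", "size", "force_create") are assumptions;
 * check the installed PMemKV version. Link with -lpmemkv. */
#include <libpmemkv.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    pmemkv_config *cfg = pmemkv_config_new();
    pmemkv_config_put_string(cfg, "path", "/mnt/pmem/kvstore");
    pmemkv_config_put_uint64(cfg, "size", 64 * 1024 * 1024);
    pmemkv_config_put_uint64(cfg, "force_create", 1);

    pmemkv_db *db = NULL;
    if (pmemkv_open("cmap", cfg, &db) != PMEMKV_STATUS_OK) {
        fprintf(stderr, "pmemkv_open failed\n");
        return 1;
    }

    /* Keys and values are read and written in place on PMem --
     * no block reads, no volatile buffer allocation. */
    const char *key = "order:1001";
    const char *val = "status=shipped";
    pmemkv_put(db, key, strlen(key), val, strlen(val));

    char out[64];
    size_t out_len = 0;
    if (pmemkv_get_copy(db, key, strlen(key), out, sizeof(out), &out_len)
            == PMEMKV_STATUS_OK)
        printf("%.*s\n", (int)out_len, out);

    pmemkv_close(db);
    return 0;
}

Swapping the engine name passed to pmemkv_open is how an application picks among the sorted and unsorted engines discussed below.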

PMemKV's storage engine options are also worth taking advantage of. They include sorted and unsorted concurrent engines implementing hash maps and red-black, B+ and radix trees. They're easily extendable, and new storage engines can be quickly created to suit application needs. PMemKV saves a lot of time. Without PMem, an application seeking a specific value must copy a lot of data from storage to DRAM and then search that data. With PMemKV, the application looks up the key, gets a reference to the value and accesses it directly.

Finally, there are PMem-aware Java libraries. They're a huge timesaver, allowing Java developers to use PMem without code changes. For example, the Java VM can allocate the Java object heap on PMem, again without code changes. PMem constructs are increasingly being added to Java.

What it means

The bottom line is that Intel Optane PMem can have a significant impact on memory performance and data resilience if used properly. Knowing when to use Memory Mode vs. App Direct Mode, and which App Direct category to use, is key to getting the most out of PMem.
