
Introduction to Big Memory

Discover why there are always tiers of memory and storage when it comes to big data and big memory, the gaps that can cause performance issues, how a memory hypervisor can help alleviate those problems, and more.

Download the presentation: Introduction to Big Memory

00:01 Chuck Sobey: Hello and welcome to Flash Memory Summit's Introduction to Big Memory. I'm Chuck Sobey with ChannelScience, and I'm also your FMS Conference Chair. We're glad you're able to join us this year, if virtually. We'll start this update by establishing what we mean by big data and big memory. We'll see why there are always tiers of memory and of storage. We'll touch on persistent media, recognizing that the clear leader is Intel's Optane. Then we will identify the gaps that exist in managing a heterogeneous memory architecture, gaps that are holding back performance. This will set the stage for other talks in this session that explain how a memory hypervisor like MemVerge's Memory Machine effectively bridges these gaps. In fact, we will see that the combination can outperform a homogeneous memory solution, which costs much more. This talk concludes with a look at where the next opportunities for development might be found.

01:03 CS: The term big data has been around a long time. Before we answer the question "What is big data?" let's look at what big data is used for. Large data sets are used for solving big problems in math, science and medicine. This is especially true for high-performance computing. When rapid transactions are required, like real-time stock trades or recommendation engines, the entire data set should be in memory for the best performance. And, of course, training AI models with millions of parameters requires accessing terabytes of data.

01:40 CS: So, where does big data live, and how is it processed? Traditionally, data lives in storage and works in memory, and the commute is awful. A hierarchy can be inferred from this figure. The on-processor cache is at the top of the hierarchy. The DRAM on the motherboard and the storage on the network or bus are farther down the hierarchy, but larger in capacity. Bringing the data from the SSD to the DRAM to the cache for processing all takes power and energy. The memory and storage hierarchy is represented by this triangle, and the corresponding hardware layers are identified here. The fast, expensive memory is nearest the processor; the slower, large-capacity storage is farthest from the processor. This gap is added to indicate the large performance difference when the CPU needs to use data from the main memory rather than from its cache. Here's how Intel illustrates the performance and capacity gaps. Note the gap between getting data from the cache versus getting data from the off-chip DRAM, and then the much larger gap between accessing data in the DRAM versus getting data from an archive like a high-capacity HDD.
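To make that gap concrete, here is a rough micro-benchmark sketch in Python (my illustration, not from the talk) that times one pass over data already in DRAM against streaming the same bytes back from a file. Absolute numbers vary widely by system, and a freshly written file is served from the OS page cache, so drop caches first for a truer storage figure.

    import os, time, tempfile

    # Rough sketch of the DRAM-vs-storage gap (illustrative only).
    # A freshly written file is served from the OS page cache (i.e., DRAM);
    # on Linux, drop caches first for a truer storage number.
    SIZE = 128 * 1024 * 1024                      # 128 MiB working set
    buf = bytearray(os.urandom(SIZE))

    t0 = time.perf_counter()
    checksum = sum(buf[::4096])                   # read one byte per 4 KiB page
    t_mem = time.perf_counter() - t0

    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(buf)
        path = f.name
    t0 = time.perf_counter()
    with open(path, "rb", buffering=0) as f:
        while f.read(4 * 1024 * 1024):            # stream back in 4 MiB chunks
            pass
    t_store = time.perf_counter() - t0
    os.unlink(path)

    print(f"memory pass: {t_mem*1000:.1f} ms   storage pass: {t_store*1000:.1f} ms")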

03:24 CS: So, now let's answer the original question: What is big data? There are different definitions, but to me, big data is a data set that's too big to fit in your main memory. You have more data than you have DRAM. In the data center, the typical way to deal with this is to divide the data across multiple memories and then distribute the compute tasks among multiple processors. In PCs, paging in data from slow storage is another, although unsatisfying, approach. But, as we know, the fastest performance is obtained when the data that the processor needs for its calculations is in its own DRAM. This is why we are driving toward ever-larger memories to solve problems involving ever-larger amounts of data. Big data solves big problems, and causes some, too. IDC has projected that the compound annual growth rate of data worldwide will be 26% over the next four years. Our data is growing faster than our capacity to store it. Over 20% of that data will be real-time data, data that must be acted on in real time. This can only be addressed in memory. In the remainder of this presentation, we'll set the stage for the answers that the other presenters in this session will provide to the question, "What if big data had bigger memory?"
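As a minimal sketch of that working definition (mine, not the speaker's, and Linux-specific), here is how you might check whether a data set fits in installed DRAM before deciding to shard it across nodes or page it from storage:

    import os

    def fits_in_memory(dataset_bytes, headroom=0.8):
        # By the talk's working definition, data is "big" when it exceeds
        # installed DRAM. Linux-specific sysconf query; the headroom factor
        # leaves room for the OS and the application itself.
        phys = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
        return dataset_bytes <= phys * headroom

    dataset = 400 * 1024**3  # hypothetical 400 GiB working set
    print("fits in DRAM" if fits_in_memory(dataset)
          else "big data: shard it across nodes or page it from storage")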

04:55 CS: Big data needs big memory, and big memory needs big data. But in any relationship, issues can arise. In this case, big memory isn't just a matter of adding more DRAM. DRAM is volatile, and valuable real-time data like stock transactions or reservations will be lost if power is interrupted or a glitch is encountered. DRAM is costly, as are the measures needed to ensure its data is not lost. Sometimes this is done with battery-backed NAND flash DIMMs, but that NAND capacity does not increase the available memory capacity. Another issue is the CPU tax. In order to get enough DRAM slots on a processor board, it's often necessary to buy more compute than the applications require; server buyers call this a CPU tax. To get the needed memory bandwidth or memory capacity, two approaches are typically used. One is to buy expensive servers with more CPU sockets and memory slots than the application may need, but populate them with lower-capacity DIMMs. The other is to buy more, but smaller, systems with one, two or possibly four CPU sockets and populate the DRAM slots with higher-capacity, higher-cost DIMMs. What if we could use cheaper, higher-capacity, non-volatile or persistent memory on the DDR bus? Flash is too slow, but there are other technologies to consider.
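To see why buyers call it a tax, here is a back-of-envelope comparison of the two provisioning patterns just described. All prices are hypothetical placeholders, not real quotes.

    # Target: 1.5 TB of DRAM for one workload. Prices are hypothetical.
    CPU_PRICE  = 8000   # per socket (placeholder)
    DIMM_64GB  = 350    # lower-capacity DIMM (placeholder)
    DIMM_128GB = 1400   # higher-capacity DIMM (placeholder)

    # Option A: 4-socket server, 24 slots of 64 GB; pays the "CPU tax"
    # in sockets the application may not need.
    cost_a = 4 * CPU_PRICE + 24 * DIMM_64GB
    # Option B: 2-socket server, 12 slots of 128 GB; pays a DIMM premium.
    cost_b = 2 * CPU_PRICE + 12 * DIMM_128GB

    print(f"Option A: ${cost_a:,} for 1.5 TB   Option B: ${cost_b:,} for 1.5 TB")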

06:35 CS: There are many types of persistent memory, or PMEM, available or under development. These include various types of magnetic RAM, phase-change memory and resistive RAM. The persistent memory with the largest market share, by far, is Intel's Optane, built on its 3D XPoint technology, which some characterize as a type of phase-change memory. This slide from Intel shows how its products address the memory and storage hierarchy we introduced earlier. It is interesting to note that the Optane SSDs themselves have tiers within them; we can see that on this PCB, where the slow but high-capacity QLC NAND is buffered by the more expensive and much faster Optane persistent memory.

07:43 CS: Persistent memory is a growing market. IDC predicts the market for persistent memory will rise at a compound annual growth rate of almost 250%, resulting in a $2.6 billion market in three years. That's an attractive growth rate. For perspective, note that the market for DRAM is huge, at about $60 billion a year, and $60 billion per year builds a lot of fabs and funds a lot of R&D. Investment in persistent memory must continue and grow in order to achieve the capacity that market growth requires, and to increase the cost advantage it has over DRAM. This cost advantage can be seen in this comparison chart: the smallest-capacity Optane DIMM, at 128 gigabytes, costs less than one-third as much as a 128-gigabyte DRAM DIMM.
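As a quick sanity check on those figures, the implied size of today's persistent memory market falls out of the CAGR arithmetic:

    # If persistent memory reaches a $2.6 billion market in three years
    # at roughly 250% CAGR, the implied starting market is small,
    # which is why continued investment matters so much.
    target, cagr, years = 2.6e9, 2.50, 3
    start = target / (1 + cagr) ** years
    print(f"implied market today: ~${start/1e6:.0f}M, vs. ~$60,000M/year for DRAM")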

08:45 CS: Our friends Jim Handy and Tom Coughlin project that the number of persistent memory bytes shipped will reach parity with DRAM in 10 years. A lot of money will have to be spent on dedicated fabs and development to achieve that. After seeing those projections into the future, we'll turn our attention back to right now: What is it like to actually use Optane persistent memory? Optane is intended for use in a heterogeneous memory architecture, and its two addressing modes reflect that. They are Memory Mode and App Direct Mode. Memory Mode uses the PMEM as main memory.

09:34 CS: In Memory Mode, the DRAM simply acts as a fast cache. In App Direct Mode, the DRAM is the main memory, and the PMEM is additional memory addressed directly by the application software. Unfortunately, that means the application needs to be rewritten, and that's always a barrier to adoption, especially in the data center. MemVerge is also speaking in this session. It has created a new hypervisor for heterogeneous memory. Its Memory Machine hypervisor abstracts the DRAM and PMEM interfaces so the heterogeneous memory can be addressed as one large memory, and it can use all of the Optane addressing modes without requiring application software changes. Memory Machine scales across 128 nodes and up to 768 terabytes of main memory: six terabytes of Optane per node, plus anywhere from 128 gigabytes to 768 gigabytes of DDR4 DRAM per node. The other speakers will have more specific details and results.
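For a sense of why App Direct Mode normally requires application changes, here is a minimal sketch (mine, not MemVerge's or Intel's) of byte-addressable access to a persistent-memory region exposed as a file on a hypothetical DAX mount. Production code would typically use Intel's PMDK in C, with MAP_SYNC, for real persistence guarantees; Python's mmap only shows the basic load/store model.

    import mmap, os

    # App Direct-style sketch: load/store access to PMEM exposed as a
    # file on a DAX-mounted filesystem. The mount point is hypothetical.
    path = "/mnt/pmem/appdirect.bin"          # hypothetical fsdax mount
    size = 64 * 1024 * 1024                   # 64 MiB region

    fd = os.open(path, os.O_CREAT | os.O_RDWR, 0o600)
    os.ftruncate(fd, size)
    pm = mmap.mmap(fd, size, mmap.MAP_SHARED,
                   mmap.PROT_READ | mmap.PROT_WRITE)

    pm[0:11] = b"hello pmem\n"                # ordinary stores, byte-addressable
    pm.flush(0, mmap.PAGESIZE)                # push the write toward persistence
    pm.close()
    os.close(fd)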

11:18 CS: 768 terabytes is a lot of memory, but some of it is slower PMEM. How does that affect performance versus an all-DRAM homogeneous memory? This comparison reports benchmark performance for a 40-gigabyte MySQL database, with results for a variety of DRAM and PMEM configurations of heterogeneous memory. On this graph, higher is better. The reference is set by the all-DRAM performance: 128 gigabytes of DRAM comes in at over 47,000 on this benchmark, whereas the Optane-only system of equivalent size achieves just over 75% of that mark. It does so at about one-third of the cost, however. Surprisingly, adding just two gigabytes of DRAM to the 128 gigabytes of PMEM produces a big increase in performance. Doubling that to four gigabytes gets us almost to parity with the all-DRAM solution, and going up to 16 gigabytes of DRAM with 128 gigabytes of PMEM actually beats the all-DRAM homogeneous memory solution. Asking MemVerge how it is able to do that would be a great question for its Q&A. Persistent memory is not only about making memory bigger. The persistence, when properly managed, can make new features possible. For example, this slide compares restoring a snapshot of various-sized kdb+ databases using Memory Machine versus restoring from a log. The 220-gigabyte database takes 144 seconds to restore from a log; the 70% larger 380-gigabyte database takes 3.5 times longer to restore. The 536-gigabyte databases were too large for the system tested to restore from logs. Memory Machine is able to restore any of them in about a second.
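A quick bit of arithmetic on those restore numbers shows the scale of the speedup:

    # Arithmetic on the restore figures quoted above.
    t_log_220 = 144.0            # seconds: 220 GB restored from a log
    t_log_380 = 3.5 * t_log_220  # "3.5 times longer", about 504 seconds
    t_snap    = 1.0              # Memory Machine snapshot restore, ~1 second
    print(f"380 GB: ~{t_log_380:.0f} s from log, ~{t_snap:.0f} s from snapshot "
          f"(~{t_log_380/t_snap:.0f}x faster)")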

13:00 CS: This isn't just bigger memory, it's better memory; these performance gains are stunning. It makes me wonder whether performance storage will be necessary in the future, or perhaps it will be redefined or repurposed. Furthermore, several new interfaces are under development that are designed for heterogeneous systems of memory and compute. They may enable the improvements MemVerge has made for local memory to extend to much larger pools of networked memory. The interfaces to watch are shown here. Along with the DDR DRAM interface, there's CXL, the Open Memory Interface (OMI) subset of OpenCAPI, Gen-Z and CCIX. They trend toward serial interfaces, which require fewer pins than parallel interfaces; this means there's more room for channels to be connected to the CPU. These interfaces support network connections, some near, some far, so that the memory can be accessed by non-local processors and accelerators.

14:09 CS: Some, such as CXL and CCIX, are designed to provide cache coherency as well. In conclusion, here are some developments I'll be watching for in big memory next year and beyond. Key elements of the data center (memory, compute and storage) will be heterogeneous, and from what MemVerge has demonstrated, the whole will be more than the sum of its parts. New interfaces will greatly expand the size and location of what is considered local memory; this should create opportunities for new features provided by management software with a light footprint. The emerging hardware, software and interfaces will enable new architectures that will start in the data center and will support real-time, bigger-data problem solving. We'll no longer have to decide whether we want our memory big or fast; with proper big memory management, it will be big and fast. Thank you for watching this presentation, and be sure to visit our sponsors' virtual booths. I look forward to answering any questions you may have in the chat, and after the event, you can reach me at the email address shown here. Enjoy the rest of this 15th annual Flash Memory Summit.

Next Steps

Where the New High-Speed Interfaces Fit and How They Work Together
