
How Can We Solve the Problems Holding Up Persistent Memory Adoption?

Several obstacles have kept persistent memory from gaining wider usage. Here is a look at some of the key issues and how you can solve them.

00:00 Dave Eggleston: Okay. B-6 panel, we're going to talk persistent memory. I've got some great panelists for you here today. I'm Dave Eggleston, and we are actually live, so anything could happen. We could have dogs barking in the background, garbage trucks going by. We've got Tom in outer space somewhere. So, stay on your toes, we're not quite sure where this is going to go, but hopefully you viewed some of the great presentations from some of the panelists that we're going to have today.

00:26 DE: And let me introduce some of those panelists. I'm going to start first with Jia Shi. She is the vice president of Exadata development at Oracle. Wave your hand there, Jia. I think she did already, yeah, that's Jia. And she's going to talk to us about Oracle Exadata and their use of persistent memory. If you haven't had a chance to view her presentation yet, please do that, it's an exciting one.

00:50 DE: And then we also have Ginger, Ginger Gilsdorf. Ginger is a software engineer at Intel, in the data center ecosystem engineering group. And she works with enterprise software vendors to optimize for Intel hardware and bring persistent memory to life. We're going to have lots of great questions for Ginger as well.

01:12 DE: And then we've got Chris Petersen. And Chris is a hardware systems technologist at Facebook, where he's leading some of those roadmaps. He's been designing and building servers, storage and data center solutions for over 16 years. Compared to Tom and myself, he's just a baby in the industry, he's just getting started in comparison to us old people. But he's very involved in CXL, so you're going to hear a lot from Chris about CXL.

01:37 DE: And then finally, we have the old hand, Tom Coughlin. Wave your hand there, Tom, from outer space, looking back at Earth. How is the space station there, Tom?

01:45 Tom Coughlin: Oh, very good, very good.

01:47 DE: Okay. Plenty of oxygen, you're going to make it through the session?

01:50 TC: I think so.

01:50 DE: Good. I've got too many paragraphs here for Tom, but Tom is the president of Coughlin Associates, and he's been doing digital storage analysis on both the business and technology side for . . . He says over 39 years, so that fine haircut he has there shows the rings around the sun that he's done, the times around the sun. Thank you for joining us.

02:15 DE: Again, this session is sponsored by SNIA, so SNIA will pay all our litigation bills if we get into trouble here. I'm going to start first with Jia. Jia is our best storyteller. She did a great job of telling us a story about somebody named Ben, who was trying to deposit $1,000 into his bank account. And using persistent memory, he saved 200 microseconds. Jia, why is saving... [laughter] How many?

02:47 Jia Shi: Four hundred, there were two I/O cliffs.

02:48 DE: Oh, there were two I/O cliffs. Thank you, I didn't pay close enough attention.

02:53 JS: No problem.

02:55 DE: First question, why is saving 400 microseconds... When you're doing OLTP, why is that important and how does persistent memory help you solve that?

03:07 JS: Thank you. Thank you, Dave. I'm really honored to be part of this panel and thank you for getting me to talk first. I'm excited about your question and thank you for . . . And hopefully enjoy the story.

03:21 JS: It was actually really a silly simple story as a point of illustration, because I think most people are familiar with the notion of a transaction when it comes to money and banking and all of that, right? It was really just kind of to draw an analogy between what a database needed to handle. And in our normal person's eyes, a critical transaction -- a banking transaction -- but in the real world, a lot of our customers are like . . . They have Exadata run their most mission-critical database workloads. And these are not necessarily just a "deposit $1,000 into my banking account," kind of transactions. We have seen cases where they're doing real-time fraud tracking.

04:06 JS: For example, if you click on the screen or submit any sort of transaction, before the financial institution will let you go through, they have to pull a ton of background checks and financial record-checking just to be able to detect if this is a legit transaction or not. And for those critical business use cases, those 400 microseconds, or perhaps more, really hurt performance. Because you can imagine that you are basically waiting for the spinning wheel to finish its work before control can go back to the actual application, which is super critical.

04:45 JS: That's why I spent maybe 30 seconds in the talk to explain, now this is just an illustration, but we do have a lot of mission-critical applications in the world today that are falling off those I/O cliffs that I had in the presentation, where they are normally processing super fast on the CPU, drawing everything from memory and CPU cache, and all of a sudden, they have to do an I/O, and then, boom, the performance just comes to a grinding halt. And that's why persistent memory is useful.

05:13 DE: Yeah. So, saving that additional time allows you to do other things in that time, is what I'm getting from you, is there's additional time to do the fraud detection, et cetera.

05:22 JS: Yeah. It's really just reducing the latency and improving the throughput. You turn an I/O-bound workload into a CPU cache-bound, or closer to memory-bound, workload. And that has a profound impact on a lot of those super-critical applications because they've never seen anything like that before.

05:40 DE: I see. So, it's still something where it's still storage dependent, but you are relying on that persistence that is there in persistent memory. And as I recall, were you using it in App Direct Mode in order to get that capability?

05:53 JS: Yeah, correct. We use the persistent memory in our storage tier. And as you have pointed out, this is critical because, sure, you can pop in a PMEM on the compute side, right? But that has great limitations because your persistent memory is only local to that node, it cannot be accessed by other nodes. So, in our story, in our architecture, we have a scale-out storage architecture where we have a shared cache model. What happens is there's a network between the client and the storage, but with that, it enables linear scaling, like linear boundless scaling on your storage, and now all the compute I needed to access this super-hot data, they can go direct to the persistent memory on the storage via RDMA. And that is really what we call a trampoline. Instead of falling off a cliff, you trampoline over to get the data into our compute, and then you spend . . . In our case, it's less than 19 microseconds. So, it's very close. It's much closer to a memory speed than the usual expected latency for doing I/O to the storage.

06:56 DE: Okay. Let's jump to Ginger now. Ginger gave us a fascinating talk. Ginger clearly has been watching a lot of Animal Planet during her time here, in COVID time, because Ginger talked about butterflies, sharks and wildebeests, so please view her presentation to understand that. But she used that as a very interesting analogy that she drew between how these animals migrate, and then how her customers migrate to using persistent memory. She said, "Butterflies are those that use storage today, sharks are those that have a whole lot of memory, and wildebeests are the hybrid."

Ginger, first, please identify yourself as butterfly, shark or . . . How about your customers? Where are your customers? Are they more butterfly, sharks or wildebeest? And what does that mean? Just give us some more context around migrating these customers to using persistent memory.

07:55 Ginger Gilsdorf: Well, honestly, I would want most of our customers to be on the wildebeest side of things because that's where you get the full benefit of persistent memory. That's where you're taking advantage of the persistence and not just using PMEM as a large capacity of memory. That's where we find customers, like I said, get the full advantage of PMEM, and that's the really exciting, game-changing side of things, but it can also be the more difficult one to implement. But if you do the work to implement it the right way, you see that, much like animals when they migrate, they do it for a reason, and there's great benefit. If you migrate your data to the PMEM App Direct side of things, you're going to get not just one benefit, usually multiple.

08:45 DE: So, it really depends on the customer and the application they're coming from, and then it sounds like how much work they're willing to put into that migration.

08:54 GG: Exactly.

08:54 DE: Maybe give us a little more detail around that and some of the examples that you've seen working with customers.

09:00 GG: Yeah. For the actual work that goes into modifying software, the standard answer would be, "Go look at the libraries in the Persistent Memory Development Kit," but I can add a little bit more color to that. If you currently have a data set in DRAM, you've grabbed some memory with a malloc of sorts, and you've got a pointer to that memory. You're writing to it, reading from it. If you need it the whole time your application's up, it stays there. Otherwise, you can free that memory up.

Well, if you want to use persistent memory in a similar fashion, then what you would do is memory map a file. Once your PMEM-aware file system is set up, you memory map that file, and then, instead of a malloc, you're going to, like I said, memory map the file, and then you can work directly with reads and writes to that file. And when you're done with it, you'll go through a similar step to unmap that memory.

10:02 GG: And if you view it that way, it doesn't seem like such a huge change. I think some companies get a little bit scared that it's going to be too much work, but if you can get it to a state where you're almost swapping in the memory mapping instead of the malloc-ing, it becomes a lot less overwhelming.
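The malloc-to-memory-map swap Ginger describes can be sketched in a few lines. This is a minimal illustration, not PMDK itself: it memory-maps an ordinary file (the path `data.bin` is a stand-in for a file on a DAX-mounted, PMEM-aware file system) and then stores to it and loads from it the way you would with a malloc'd buffer.

```python
import mmap
import os

# Stand-in path; on a real deployment this file would live on a
# PMEM-aware (DAX-mounted) file system, e.g. under /mnt/pmem.
path = "data.bin"
size = 4096

# Create/open the backing file and size it -- the setup step that
# replaces calling malloc().
fd = os.open(path, os.O_CREAT | os.O_RDWR)
os.truncate(fd, size)

# Memory map the file. From here on, buf behaves like a byte
# buffer you obtained from an allocator.
buf = mmap.mmap(fd, size)

buf[:11] = b"hello, pmem"  # plain store, as if to malloc'd memory
buf.flush()                # flush to the backing media (msync)

buf.close()                # the unmap -- the "free" analogue
os.close(fd)
```

On a real DAX mount, those stores go straight to persistent media; PMDK's libraries then refine this pattern with finer-grained cache-line flushes in place of the coarse `flush()` used here.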

10:21 DE: One of the things I'm going to want to come back on and ask you about later is, there was a question yesterday that came up, almost a statement, which said, "Persistent memory is great, but the ecosystem for making the transition is not really there." I want to come back and talk about that a little bit later, because I think that's important, and it's a question that seems to come up regularly: "What is that ecosystem? What's Intel doing to help people migrate?" That's a key one.

10:49 DE: Chris, you said during your presentation . . . And thank you, Chris, I know you're going to talk about CXL. You're just going to beat us with that CXL stick, but I think it is a good way for the industry to go and get consolidated. But you made a comment, you said, "DIMMs are not suitable for heterogeneous memory." Chris, what does that mean, and why do we care?

11:13 Chris Petersen: Sure. I actually think it comes back to your comment just now about ecosystem. Some of the challenges that we've got is, as we try and drive for further efficiency within the server designs, we have to make trade-offs, typically. And thus far, some of the trade-offs we've had to make are that if we put different types of media on a DIMM, that media controller is embedded within the CPU. And so we have to make some trade-offs there.

We cannot support all types of media. There is some mixing that has to happen. You have to be able to deal with the different characteristics of that media on the same bus, as an example. And DIMMs were ultimately designed for a very specific purpose originally, many, many years ago now. They were specifically targeted at volatile memory and, therefore, they have very specific power and thermal boundaries. The pinouts are very precise, and those are all very good reasons, but I think the challenge that we've got is, as some of these additional media come out, there are perhaps better ways to consider packaging them into more efficient solutions, in general.

12:35 DE: For example, let me interrupt you, I would imagine Facebook does not like throwing out all your DDR4 modules when you go to DDR5-based servers. Would you indicate that that's an issue for Facebook? You want to have that abstraction to memory?

12:54 CP: Yeah. I would say, in general, having that abstraction is important for us because it allows us to more seamlessly move applications. In general, it's very difficult with the thousands of microservices that we have to support in our infrastructure to be able to do step function moves. It's got to be much more of a seamless . . . As transparent of a migration as we're able to do. And it also needs to align best with what is most efficient for different categories of applications. A move from DDR4 to DDR5, for example, is not necessarily the right answer for all applications. You may want to right-size that. So, enabling that flexibility and the abstraction on top of that makes that much more seamless, plus you can right-size it to the application.

13:50 DE: How does CXL solve this, Chris? What does CXL do to address these kind of problems and give you this ability to mix and match different types of memory using the CXL abstraction?

14:04 CP: Sure. CXL, for the first time, really allows us to pull the memory controller or the media-specific controllers out of the CPU itself. Now, we have the ability to have a CPU that has its set of capabilities, and then you can separate the memory controller for a particular media type, and you can optimize then for that specific media type. You can optimize in terms of bandwidth, in terms of latency, power, thermals and so forth. And so, it lets you create the more efficient solution there and ultimately, create a more scalable solution. CXL lets us do this because we now have an abstraction there, both at a physical and a protocol layer perspective. And now that we're introducing CXL 2.0, we're taking that a step further by providing a generic management interface as well, such that we can now create standard drivers that will interact with any CXL memory device, regardless of the underlying media. And then that ties into the whole transparency and seamless approach that we're trying to build here.

15:18 DE: I'll come back and ask you a bit later about CXL, the consortium, and then how people can get involved. And I see we had Yao join us. Hi, Yao. Good to see you. There she is, good. Yao Yue is joining us. She's an engineer and manager working at Twitter platform, and she had a great presentation. She's been working on distributed cache since 2010. And then she started and managed the Infrastructure Performance and Optimization team since 2017. And she's been talking . . . I saw her at PM Summit earlier this year, did a great talk there, and gave us quite a good update, so please view her presentation.

One of the questions I have for you, Yao, is you talked about persistent memory, as denser storage, gives you a TCO benefit. How much benefit are you getting at Twitter? Can you give us some idea how much benefit you get in going to persistent memory instead of straight to storage?

16:18 Yao Yue: Well, if something is really bounded by storage capacity, then it really depends on what kind of pricing Intel is willing to part with.

16:27 DE: So, we put pressure back on Intel? Okay. Go ahead though, Yao.

16:32 YY: But I think the tricky thing about an actual service is there are . . . Many things come into play. It boils down to where the bottleneck is. I think what we have seen is . . . To truly answer this question, you need to ask, "Where is the current bottleneck? Does using PMEM shift the bottleneck?" And the gain will depend on that. Take cache, for example. Previously, we were largely bottlenecked on memory capacity, so we're expecting some decent savings by going to persistent memory. But on the other hand, we need to be careful that we're not introducing persistent memory bandwidth or latency as a new bottleneck for throughput.

17:19 DE: Right. Yeah, in your presentation, you made that really clear that you didn't want . . . You have a networking bottleneck right now and you didn't want to go past that. You didn't want to push that bottleneck into persistent memory. And then towards the end of your presentation, you talked about this new architecture and you made a comment in heading into that. You said, "Persistent memory behaves more like an SSD than DRAM." What do you mean by that? And how does that impact your architectural choices going forward?

17:47 YY: It really favors, strongly favors, sequential reads and writes, is what I'm alluding to when I said that. Of course, we want to take advantage of the smaller granularity, but I think . . . Especially if someone comes from the perspective of using DRAM, I think that is a nice summary to have, like, "How can you think more like you're programming a storage device versus DRAM?" Of course, if you come from the other side, then you already know that, and there is nothing really to add there. So, I just wanted to call out . . . For people who have a similar experience to mine, which is heavily reliant on memory, I think that is a paradigm shift.

18:34 DE: And I think another thing that came out in your presentation was, you worked very closely with Intel, and you talked about how, when Intel ran it in their lab, they got a certain result, but then when you ran it in your lab, well, you didn't get quite the same result. And the message I got out of that was: make sure you do your own homework, do your own work. Talk about that a little bit more, how you worked with Intel in a collaborative way to get persistent memory up and running.

19:00 YY: Yeah. We started with Intel and started early because I knew getting to the right software design was not going to be straightforward. I'm always expecting some wrinkles, as with most new hardware. Working with Intel really gave us a head start because they have the equipment ready to be tested. On the other hand, their lab will have a particular configuration that may or may not be the same as what consumers will eventually have. Specifically, they can populate all the DIMMs, and Twitter most certainly does not want to populate all the DIMMs with PMEM; that's just too expensive. So, this type of thing . . . I think it's really important to understand the configuration and the key factors in the configuration, and make sure we test those thoroughly.

19:50 DE: Got it. Okay. And then, Tom, during your talk, you showed a very interesting chart, I want to pull up something here, and it was really about the market forecast for persistent memory. And you showed 3D XPoint, in particular, on a petabyte basis getting very close to, maybe even equaling, DRAM shipments by the end of this decade. And that's pretty great growth. What do you think this means for the memory makers? Because right now, today, it's really only Micron manufacturing this for Intel, but what do you see occurring as that market grows, and what will happen with the supply of the persistent memory technology itself?

20:30 TC: Sure. Well, first of all, I should point out that that vertical scale is a log scale, so there is some difference between them. But, yes, there is the 3D XPoint, in particular, we show increasing demand. In fact, Intel apparently is now finally making money on their 3D XPoint, which was actually . . . And so they must be making enough volume now that they're able to amortize their equipment.

21:00 DE: Well, I will interrupt and say, check with Mark Webb. He says something a little bit different. He says they're still losing money. Ginger is nodding her head. I won't take that as confirmation about Intel either way, but go ahead, Tom.

21:11 TC: I would say they're not losing as much money, then. The other thing is that Micron finally, last year, introduced their own 3D XPoint product, and they say they have some customers that are using it, although they can't say who in public. At least, when I spoke to them last week, that's what they said. I think that for 3D XPoint, its biggest use is in replacing DRAM. And so, with the DIMMs or whatever that would be, when you're in a CXL-type environment, there are some real possibilities there. And I think that that's . . .

But there also are a number of manufacturers now that are designing storage systems that use 3D XPoint in SSDs as well. It looks like there is some take-up on it; there is use for that as a way . . . I've seen a few vendors now that are using Optane as a write cache, and using higher-density flash, like QLC flash, in order to reduce the wear on the QLC flash and to make a low-cost but high-endurance storage system as a result. At least, those are the claims. And actually, there's a question in there, Dave, that I just saw, asking about cloud providers . . . who provide systems with 3D XPoint.

22:38 DE: Yeah. Why don't you grab that question? I think that's a great one, and I'll read it out loud in case people are not seeing it.

22:43 TC: I will read it, yeah.

22:44 DE: Yeah. Thanks. Go ahead.

[overlapping conversation]

22:47 DE: Because we got Facebook, Oracle and Twitter here, which all have cloud services. They can answer those questions for us. Go ahead, Tom.

22:53 TC: Indeed. But the question was, you're wondering if the endurance of 3D XPoint makes it unsuitable for the cloud, since an evildoer could intentionally wear out the 3D XPoint DIMMs. Now, if someone is using this as infrastructure, they probably have direct control of that. So, I think the basic issue would be for a public cloud. But with a public cloud, you're going to be paying for what you're using, so it's probably fine: if they wear them out, they're going to pay for it. That's the easy . . .

23:19 JS: But I would add . . .

23:19 DE: And by the way, that question isn't from me to myself. That's Marty in the background typing the audience questions to us. Yeah, I'd like to hear, Jia, please go ahead. Oracle, and then Twitter, and Facebook . . .

23:32 JS: I want to just say 3D XPoint is already available in the Oracle public cloud today. In fact, we actually launched it, I think, back in September, so it's been out for a couple of months now. And it's really . . . Just to answer that point, what happened is the 3D XPoint, the persistent memory that we have, as I told in the story in my presentation, is used as a cache and then a log commit accelerator on the storage side. Echoing what Tom just said, it's very much controlled by the workloads. It's driven by the database workloads; it's governed by how many random reads the application is going to issue and how many log writes it's going to issue to the data.

24:14 DE: I see. So, very specifically for Oracle, it has to do with the workload and if it's the right workload, then you'll steer it to persistent memory.

24:20 JS: Right, right. So, there is really no wearing it out.

24:24 DE: And then, Yao, what about for Twitter? I remember you saying earlier this year that Twitter had that in production.

24:32 YY: Yeah. We have it in production as a canary; we haven't gotten to volume. One of the hurdles we are waiting to clear is we actually do want to accumulate more workloads that benefit from persistent memory to make the inventory more manageable. I think for anybody who is not renting persistent memory, that might become a thing. Where, if you only need it for 100 hosts, it doesn't really make too much sense. If you need it for 1,000 hosts, that is much more reasonable. I think there is going to be a delayed gate-keeping effect just from inventory management.

25:07 DE: I get it. So, if enough workloads need it, then it makes sense to deploy it. How about for Facebook, Chris? Where are you at on your examination of persistent memory and deployment? I haven't heard of Facebook deploying it yet, but maybe you can break some news for us.

25:23 CP: Yeah. We've been exploring 3D XPoint as well as any of the alternate media tech for quite some time now. Yao stated it very well. In general, for us to productize anything, there needs to be a sufficiently large enough TCO benefit, and that therefore implies that there has to be enough application volume to justify the effort. At this point, that does not yet exist. Our needs are not currently well enough aligned with what we're seeing out of 3D XPoint, and so we will continue to explore it, and we are working closely with Intel and others on making some improvements there, but it does not currently align well with our application requirements.

26:17 DE: Got it. And then, Chris, we only have a couple of minutes left, so please transition us into CXL. We tabled that for a little bit, but, what's going on with CXL Consortium? Talk to us a little bit about the work group you lead and why the viewers of this panel should care.

26:39 CP: Sure. CXL, or Compute Express Link, has been around . . . We've been incorporated for about a year now. We launched the 1.1 spec last year, and we just announced, in the past week, the 2.0 spec. Within one year, we've been able to release an additional spec generation. And the organization, in general, is very, very healthy, has quite a lot of contributing companies in it and continues to grow very rapidly. Among other things, one of the areas that we're very focused on is, of course, memory, and I'm using that in the broadest possible way.

27:30 CP: One of the work groups that we have as a part of the consortium is a memory systems work group. That is a workgroup that I chair. Our focus and our charter is primarily to look at what are the potential use cases, the potential applications of memory devices on CXL, and how can we improve the interface to ensure that we have those use cases covered? As an example, one of the pieces that I alluded to in my presentation that we've recently released, is this management interface. This adds this abstraction layer for CXL memory devices that allows us to use the same driver, for example, and we can do things like collect error information, update firmware, monitor temperatures, all with a standardized interface. Regardless of the specific media, whether it's 3D XPoint or something else, we have that commonality. And from an end-customer perspective, that is very important for us, that really makes the integration and migrations much more seamless.

28:39 DE: And then what are the key things for any new interfaces? When is there going to be native hardware support for it? When do you expect CPUs that have native support for CXL to appear? And will that be . . . What version of CXL do you expect that to be, or is that still up in the air?

28:56 CP: Yeah. I won't be able to comment on specific products, of course, you'll have to go talk to our favorite CPU providers for that, but what I can tell you is that all of the major CPU providers are on the board of CXL. There are a number of product development efforts in flight, and I would expect to start seeing some interesting things happening next year.

29:20 DE: Yes. One of the things I've noticed is, in your work group, there is a lot of interaction between those CPU providers and the memory makers, and those who are exploring even making the interface chips, the SoCs that are going to go in between memory and the CPU. That's great to see in the ecosystem.

29:39 DE: We're almost out of time, and I'm going to throw it back to Ginger because I think we tabled one question, which was, what is Intel doing to create this ecosystem to help customers move towards persistent memory? And like I mentioned, this is a question that came up even yesterday. Ginger, please go ahead and close us out with answering this.

30:01 TC: By the way, Marty says you can go up to 11:25 if you want.

30:05 DE: Oh, outstanding. Then, Ginger, we can stretch that out and you can get more time. And there are quite a few questions coming in to answer here. Go ahead.

30:13 GG: Yeah, yeah. I'll address the question of what Intel is doing to enable the ecosystem. Well, we have already built quite a strong portfolio of ISVs, OEMs and even virtualization technologies that already take advantage of persistent memory, whether in App Direct or Memory Mode, but there's still plenty of work to do, for sure. In some ways, especially with respect to public cloud adoption, I know that's been a question, it's really a chicken-and-egg kind of debate.

Public cloud providers want us to show applications that are running well on persistent memory, while applications or vendors want to see persistent memory in the cloud before they adopt it, so we are wrestling with that right now. And in the public cloud, of course, right now, the access is really kind of application specific. You can access SAP HANA instances in Azure, for instance. But we do expect that as time goes on and we have more of these really good examples of applications that run well on persistent memory, overall persistent memory adoption, as well as adoption in the public cloud, will continue to increase. It's just a little bit of a ramp-up.

31:32 DE: Great. Since we have a little bit more time, Marty has graciously given us more time, Jia, what's next for Oracle in using persistent memory in Exadata? You gave us some good examples of how it's used for OLTP in the caching and then also in the logging, but what's next?

31:51 JS: Yeah. As you know, Dave, the database market is very big. There are many different kinds of applications, and all we have zoomed in on is a very specific market, the OLTP market that we've talked about. And then many of us who are in the database world will be like, "Oh, what's going to happen with the analytics, the data warehousing workloads? What are you guys doing with persistent memory there? What are the opportunities there as well?" And also, just echoing the idea of using persistent memory as larger memory in Memory Mode, because so far, we have been using it in App Direct Mode inside the storage to build a hierarchical tiered cache: persistent memory, the very cream of the crop, at the very top, then you layer it with flash in the middle and hard disk at the bottom.

32:40 JS: That's a very specific use case for storage that we felt was really strong. It's a huge differentiator, a big disruptive change to our Exadata story. But looking forward, we feel that there are many different places that we still haven't yet explored. These are the areas that we're actively looking at. And also, I wanted to say that, if you look at the database design, from the get-go, that was, I don't know, decades ago.

And then you look at the database design about building the index; the data structures are very storage friendly. It's about how you can persist that. You would never write an in-memory hash table and persist it into storage, because that's just too hard. But now you have persistent memory, and that really begs the question: Do you really have to write a B-tree index for the database? Those are the bigger questions that have yet to be answered, but a lot of people like us are working on that.

33:36 DE: Jia, this one, maybe you're the right person to steer this to. There is a question here that said, "Can you have Optane DIMMs, 3D XPoint DIMMs and DRAM DIMMs on the same memory channel or would you have to put on two different memory channels?" Good question. Do you happen to know?

33:52 JS: Sure, Intel . . . Yeah, Intel's population rule says, "Thou shalt put a DRAM DIMM and a persistent memory DIMM on the same memory channel." That's how you populate it, even if you...

34:03 DE: So, you better put them both on the same channel.

34:05 JS: Exactly, yes.

34:06 DE: Okay, good. I think we've answered . . . Yeah, we've answered that question. Then there's one here also about public cloud providers and when you're using 3D XPoint in Memory Mode. I guess this would go to Ginger. And it says, "How is a public cloud provider supposed to know how many writes occurred and charge the user per write?" Boy, that seems like a very interesting question. That meter is running and is that each turn of the crank?

34:36 GG: Yeah, that's something... I don't have a great answer for that, unfortunately. But the way Memory Mode works is that the system sees your persistent memory as the memory in the system, and it doesn't recognize the DRAM separately; the DRAM is just a cache for the persistent memory capacity. Any reads and writes are treated as if they're going to memory. And I'm assuming that most public clouds don't have a way to meter those kinds of writes yet, so adding meters for persistent memory writes would probably be a challenge. Sorry, I don't have a great answer for that.
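
Ginger's description of Memory Mode, with DRAM acting as a cache in front of the persistent memory capacity, can be modeled with a toy direct-mapped cache that counts how many writes actually reach pmem, which is what a hypothetical per-write meter would see. The cache sizes and the write-back policy are illustrative assumptions for the sketch.

```python
class MemoryModeSim:
    """Toy model of Memory Mode: a small direct-mapped DRAM cache in
    front of a large pmem address space. Software sees one flat memory;
    only dirty evictions actually write to pmem."""
    def __init__(self, dram_lines=4, line_size=64):
        self.line_size = line_size
        self.dram = [None] * dram_lines   # (tag, dirty) per cache line
        self.pmem_writes = 0              # what a per-write meter would count

    def _lookup(self, addr):
        line = addr // self.line_size
        idx, tag = line % len(self.dram), line
        hit = self.dram[idx] is not None and self.dram[idx][0] == tag
        return idx, tag, hit

    def access(self, addr, is_write):
        idx, tag, hit = self._lookup(addr)
        if not hit:
            old = self.dram[idx]
            if old is not None and old[1]:   # evicting a dirty line
                self.pmem_writes += 1        # write-back reaches pmem
            self.dram[idx] = (tag, False)    # fill the line from pmem
        if is_write:
            t, _ = self.dram[idx]
            self.dram[idx] = (t, True)       # dirty in DRAM, not yet in pmem
```

The point it illustrates: a hot line written a thousand times costs one pmem write when it is finally evicted, so counting application writes would badly overestimate the traffic that actually hits the persistent media.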

35:18 DE: No worries. That's what we're here for, is to see who can answer or maybe not answer. I'm going to knock a couple of these out myself because I think I may know an answer, and then I'm going to steer one to Chris, so stay on point here, Chris.

But first, I'm going to take one that says, "How much power do you expect the CXL interface chip that drives DRAM DIMMs to consume?" What I'll do is compare DRAM DIMMs and CXL modules. With CXL, it is going to consume more power; we can see that even in the module itself. Keep in mind that DRAM DIMMs are going to draw somewhere around 15-18 watts. That's been one of the challenges for Intel on Optane: how to fit those DIMMs into that 15-18-watt envelope, because persistent memory does like to consume a bit more power.

Once you add in the PCIe NIC . . . Chris is nodding here . . . Once you add in the PCIe interface, which is what the PHY is for CXL, that's going to consume more power. I've been using E1.S as a module example for persistent memory, and we can expect it to consume more than a DRAM DIMM, maybe up to around 25 watts, which seems to be what most of the manufacturers are planning for. That's something to look ahead to.

36:28 DE: One other question said, "When do you expect NVDIMM-P products to be announced, and what kind of competition would that pose?" Being independent, I'll take that one myself. I don't see native support for NVDIMM-P coming on CPUs. NVDIMM-P appeared to have more momentum a year or two ago, but I think it has lost some of it to CXL and the CXL module. I'm not holding my breath waiting to see NVDIMM-P modules.

I think Chris also made this point earlier: mixing different types of memory on the same memory bus has some problems, so even managing that in the memory controller in the CPU poses a problem.

Chris, let's throw this one to you. It says, "How do you see the latency of persistent memory behind CXL affecting software applications?" And that gets us into the question of why CXL versus Gen-Z, CCIX or OpenCAPI. There are some latency differences there, but again, you're the CXL pitchman, tell us why CXL?

37:42 CP: Yeah. First of all, whenever we talk about latency . . . I think Jia actually alluded to this quite nicely. We have to look at things at the application level, because that's typically the level that matters. So, at the end of the day . . .

37:58 DE: We have to think about Ben and his $1,000 going in. Thank you, Jia.

38:01 CP: That's exactly right. We have to worry about the end user here, and we need to keep Ben happy; that's the goal in life. So, we have to look at things at that level. Whether or not the additive latency will make a difference is therefore, of course, application dependent. In many cases there's an entire stack of latency that builds up, whether that's the software layers you have to go through, the network hops, the CPU's caching infrastructure, the memory controller, and so forth. It depends, ultimately.

Now, more specifically, I would also argue it will depend on the specific media you're comparing against. The additive latency may matter more when you're comparing against something that's already low latency, like DRAM. On the other hand, if you're comparing it to another medium, say 3D XPoint, it may only be a small percentage relative to the 3D XPoint latency, and as a result the difference may not be material. Ultimately, the right level to look at is always the application level, but it will also be media dependent.
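
Chris's point about additive latency being relative can be made concrete with rough arithmetic. The numbers below are illustrative assumptions, not measurements: a fixed link adder looms large against DRAM-class latency but is a modest fraction of 3D XPoint-class latency.

```python
# Illustrative (not measured) latencies in nanoseconds; the CXL adder is a
# ballpark figure for a PCIe-attached link, chosen purely to show proportions.
DRAM_NS = 80
XPOINT_NS = 350
CXL_ADDER_NS = 70

def overhead_pct(media_ns, adder_ns=CXL_ADDER_NS):
    """Relative cost of putting a given medium behind a CXL link."""
    return 100.0 * adder_ns / media_ns

print(f"DRAM behind CXL:      +{overhead_pct(DRAM_NS):.0f}%")    # large relative hit
print(f"3D XPoint behind CXL: +{overhead_pct(XPOINT_NS):.0f}%")  # much smaller share
```

The same fixed adder is a near-doubling for the fast medium but only a fraction of the slower medium's latency, which is why the answer is both application dependent and media dependent.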

39:13 DE: I see. Yeah. So, for us hardware guys that focus so much on latency, we've got to consider that software stack and how that impacts things.

39:20 CP: That's right.

39:20 DE: And then, as several of the speakers have made clear, we have to think about the workloads there as well.

Okay. I think it's time to wrap up. If you liked this session, there's going to be more of it next year: the Persistent Memory Summit from SNIA is going to widen out to also include computational storage. Look for that on April 20 and 21, which I believe are the scheduled dates. It's a two-day virtual event, and I would expect to see many of these same speakers.

We're also going to try to answer all the questions that came in. That may take a day or two, but thank you very much. Thanks for joining. And thanks so much to my panelists for being good sports and bringing this to life. Like I said at the top, anything could happen, and I think we've pulled off a pretty good session, so I really, really appreciate your chipping in and joining me here. Thanks so much. Take care, guys.

40:15 GG: Thanks.

40:15 JS: Thank you. Thank you, Dave.

40:15 CP: Thank you.
