Looking at NVMe storage technologies today and tomorrow
A comprehensive collection of articles, videos and more, hand-picked by our editors
NVMe protocols are growing in importance as enterprises switch to flash storage. Demartek founder Dennis Martin explains the technologies and why IT needs to keep an eye on them.
Nonvolatile memory express (NVMe) is coming to a storage system near you soon, and IT pros need to become familiar with the protocol.
That's the advice of Dennis Martin, president and founder of Demartek LLC, an industry analyst firm that operates an on-site test lab in Golden, Colo. Martin said it is time to learn about NVMe, NVMe over Fabrics (NVMe-oF) and other types of nonvolatile memory that will eventually replace or complement NAND flash.
NVMe benefits include higher IOPS per CPU instruction cycle, lower latency in the host software stack, and support for many more parallel, outstanding requests. The initial use case for NVMe has been PCI Express-based solid-state drives (SSDs). NVMe-oF will extend the benefits of NVMe over a network.
NVMe SSDs use a streamlined command set that reduces the number of CPU instructions to process an I/O request compared to SAS and SATA drives. The NVMe protocol supports 64,000 commands per queue and up to 64,000 queues, whereas typical SAS drives support only 256 commands in a single queue, and SATA drives support just 32.
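The queue figures above make the scale of the difference easy to see. A minimal sketch, using the approximate "64,000" values quoted in the article (the NVMe specification's exact limits differ slightly), to total up the maximum outstanding commands each interface allows:

```python
# Rough comparison of maximum outstanding commands per interface,
# using the approximate queue figures quoted in the article.
QUEUES = {
    "NVMe": {"queues": 64_000, "commands_per_queue": 64_000},
    "SAS":  {"queues": 1,      "commands_per_queue": 256},
    "SATA": {"queues": 1,      "commands_per_queue": 32},
}

for name, q in QUEUES.items():
    total = q["queues"] * q["commands_per_queue"]
    print(f"{name:>4}: {q['queues']:>6} queue(s) x "
          f"{q['commands_per_queue']:>6} commands = {total:,} outstanding")
```

By these figures, NVMe allows on the order of four billion outstanding commands in total, versus a few hundred for SAS and a few dozen for SATA, which is why the protocol scales so well with multicore hosts.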
The NVMe 1.0 specification emerged in March 2011, and updates followed in November 2012 and November 2014. The NVMe-oF specification, published in June 2016, supports data-center fabrics such as Ethernet, Fibre Channel and InfiniBand.
In this podcast with searchSolidStateStorage, Martin explains NVMe and NVMe-oF and the impact the technologies could have on IT organizations. Brief interview excerpts follow.
For those who might not know much about NVMe, can you give a brief explanation?
Dennis Martin: NVMe is a very optimized, high-performance interface for storage -- specifically for solid-state storage. You cannot use hard drives with it. It's only solid-state storage. It's based on using the PCI Express bus as the interface. That's why the word "express" is in that NVM Express term. And, of course, NVM stands for nonvolatile memory, which is another name for solid-state storage.
For what types of enterprises and use cases is NVMe going to be a big deal?
Martin: NVMe is a big deal for anybody that needs very high performance and very low latency. Typically, this is going to be an enterprise -- say, one with big databases or transactional environments, where you have to get the answer back right away. However, it's not limited to enterprises, because NVMe works just fine in a desktop environment. You're starting to see a lot more desktop motherboards with NVMe support, and I wouldn't be surprised to see it in laptops. So, it's available for just about everybody. In fact, NVMe is also moving into the mobile space, and it will become an interesting interface to use even in things like mobile phones.
Is there a price premium for NVMe solid-state drives?
Martin: Yes, the NVMe drives -- because they perform very highly -- are more expensive. And that's not surprising, given this industry. The really fast high-performing stuff is always more expensive than the slower, cheaper things.
An extension to NVMe known as NVMe over Fabrics is also becoming a hot topic of discussion. Can you explain what that is and why there's a need for it?
Martin: Because [NVMe] runs on the PCI Express bus, you're limited to the distance that you can extend the PCI Express bus either inside of a chassis or maybe outside of it with a cable. The second limitation is [that] there's only a small number of devices you can put inside of a PCI bus or inside of a server that has PCI buses in it.
NVMe over Fabrics is designed to alleviate those two limitations. NVMe over Fabrics allows you to go over much greater distances than just the local distance of a PCI bus inside a chassis or with a relatively short cable. And secondly, it allows scaling up to very large numbers of devices.
Typically, an NVMe over Fabrics distance would be considered anything inside of a data center where you can run ... a remote direct memory access (RDMA) fabric. So, that would be InfiniBand, RDMA over Converged Ethernet (RoCE) and Internet Wide Area RDMA Protocol (iWARP). Then there's a Fibre Channel fabric. NVMe can run over any of those. NVMe over Fabrics is also designed to run over new fabrics that might come along. There are others in development that could become popular.
Will the performance and latency advantages that NVMe over Fabrics can bring be significant enough for the average IT shop to notice?
Martin: For an average IT shop, if they have applications that are especially latency sensitive or that just need very high performance, they will notice a difference. The goal of NVMe over Fabrics -- and it's still a little early yet, so we haven't seen [many] implementations of it -- but the goal is that the fabric part of NVMe over Fabrics will add no more than 10 microseconds of latency beyond what you would get normally over NVMe. Ten microseconds is not very much.
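Martin's 10-microsecond target is easier to appreciate next to a device-level number. A minimal sketch, assuming a local NVMe read latency of roughly 100 microseconds for a NAND flash SSD -- an illustrative ballpark, not a figure from the interview:

```python
# Illustrative only: put the 10-microsecond fabric target in context.
# LOCAL_NVME_READ_US is an assumed ballpark for a NAND flash NVMe SSD
# read; it is not a number from the interview.
LOCAL_NVME_READ_US = 100.0   # assumed local read latency, microseconds
FABRIC_OVERHEAD_US = 10.0    # NVMe-oF design target from the interview

total_us = LOCAL_NVME_READ_US + FABRIC_OVERHEAD_US
overhead_pct = FABRIC_OVERHEAD_US / LOCAL_NVME_READ_US * 100

print(f"Local NVMe read:    {LOCAL_NVME_READ_US:.0f} us")
print(f"Over the fabric:    {total_us:.0f} us "
      f"(+{overhead_pct:.0f}% from the fabric)")
```

Under that assumption, the fabric adds about 10 percent to an individual read -- and as faster nonvolatile memories shrink the device latency, that fixed overhead becomes a proportionally larger share of the total.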
What is the most important single piece of advice you'd offer to IT shops on NVMe and NVMe over Fabrics?
Martin: If you are not familiar with it, you need to become familiar with it, because there's a lot of energy in the IT industry being put behind NVMe -- not just by the NVM Express organization and its member companies. There's just a lot going on there. So even if you're not ready to embrace it just yet, you need to be aware of it. You need to know what it's doing. It doesn't solve every problem, and it's not the answer for everything, but it's going to be the answer for a lot of things, especially as we move forward with flash arrays and flash kinds of things. And as soon as we start to see other types of nonvolatile memory that will eventually replace NAND flash, or at least be that next layer between NAND flash and DRAM, it's going to become more critical. As the flash media itself gets replaced by something faster, you're going to start looking at the rest of your infrastructure for where the latency is. You're going to find it in the software stack, or maybe in some of the older interfaces. So, you're going to want to be aware of what your choices are.