At the start of 2018, the storage industry is prepared for an influx of ultrafast, low-latency NVMe-based enterprise flash storage and even higher-performing new memory technologies.
Technology executives at leading vendors and storage analysts predict the industry will begin to shift to NVMe-based PCI Express SSDs in 2018 and eventually to enterprise flash storage arrays that support NVMe over Fabrics (NVMe-oF) to extend the performance benefits across a network.
On the horizon, they also see new memory technologies, such as 3D XPoint from Intel and Micron, Samsung Z-NAND and other emerging storage class memory (SCM) options, becoming a major disruptive force in the industry. The lightning-fast memory technologies could spur a new wave of software options and storage rearchitecting to address bottlenecks that have shifted from hardware to legacy software stacks.
Below are predictions on enterprise flash storage, NVMe technologies and emerging persistent memory options that SearchStorage collected from industry experts.
What's up with NVMe and NVMe-oF?
Danny Cobb, corporate fellow and vice president of global technology strategy at Dell Technologies: The promise of NVMe over Fabrics is to get very low overhead, very low latency [and] high throughput I/O over a data center fabric. And there are options for Fibre Channel, Ethernet, InfiniBand, etc., and even options for plain old TCP being standardized. But, right now, it's still a bit of the wild, Wild West.
At the Flash Memory Summit last year, there were a lot of vendors showing demonstrations and proofs of concept, and virtually none of them would interoperate with anybody else's. You still had to sort of cherry-pick the individual parts to get your pipeline working. So, 2018 will still be a year of tire-kicking and getting the solution stack to be more robust. But we will see some proof point of what extending the NVMe storage primitives out onto the data center fabric will bring in the future. The data path will come along.
Enterprise capabilities, such as security and multipathing, will mature and start to interoperate across vendors of HBAs [host bus adapters], switches and the like.
In 2018, for the first time, we'll see cost parity with other storage devices, such as SAS drives, SAS being one of the interconnects NVMe will probably be disrupting. So, customers will move from waiting for NVMe to become more affordable to deploying it more broadly as a standard part of virtually every storage platform.
Randy Kerns, senior strategist and analyst at Evaluator Group: The transition to NVMe in storage systems on the back end, talking to NVMe devices, will become a major competitive point for vendors. In the first half of the year, a number of high-profile vendors will have product offerings. It is a significant performance gain. But while the noise around NVMe over Fabrics will increase dramatically, enterprises will be much slower to get on board.
Russ Fellows, senior partner and analyst at Evaluator Group: Most end users are educated about NVMe and the benefits, and they understand that it's something they want in their next-generation systems. They're going to start asking for it now. But on the front end, vendors are way ahead of the customers, and the technology just isn't proven enough. So, very few people are going to try and implement end-to-end NVMe this year.
J Michel Metz, R&D engineer for storage networking and solutions at Cisco: We're probably going to see more interest in Fibre Channel for NVMe-based products, because Fibre Channel provides quite a bit of reliability for customers who are risk-averse.
But we're also going to see some small hyper-convergence deployments that use Ethernet-based NVMe.
Enterprise flash storage systems
George Crump, founder and president of Storage Switzerland: Hybrid storage lost its luster last year, as the world went gaga over flash. I think we'll see a resurgence of hybrid primary storage in two areas. We'll see a hybrid flash phenomenon where a small amount of NVMe flash, or potentially even some nonvolatile RAM, is used for caching or tiering in front of a high-capacity SAS-based flash tier.
And we'll also see a slight resurgence of the traditional hybrid mix of flash and hard drives as a backup or secondary storage system, where, instead of replicating from one flash box to another for disaster recovery, you replicate from a flash box to a box with flash and hard drives.
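The caching-and-tiering idea Crump describes can be illustrated with a toy model: a small, fast tier (standing in for NVMe flash or NVRAM) absorbing hot reads in front of a large capacity tier (standing in for SAS flash). The class name, block sizes and LRU policy here are illustrative assumptions, not any vendor's implementation.

```python
from collections import OrderedDict

class HybridTier:
    """Toy hybrid store: a small LRU cache tier (the 'NVMe' layer)
    in front of a large capacity tier (the 'SAS flash' layer)."""

    def __init__(self, cache_blocks):
        self.cache_blocks = cache_blocks      # capacity of the fast tier
        self.cache = OrderedDict()            # block_id -> data, in LRU order
        self.capacity_tier = {}               # backing store for all blocks
        self.hits = self.misses = 0

    def write(self, block_id, data):
        self.capacity_tier[block_id] = data   # write through to capacity tier
        self._promote(block_id, data)         # recently written data is hot

    def read(self, block_id):
        if block_id in self.cache:            # served from the fast tier
            self.hits += 1
            self.cache.move_to_end(block_id)  # refresh LRU position
            return self.cache[block_id]
        self.misses += 1
        data = self.capacity_tier[block_id]   # slower capacity-tier read
        self._promote(block_id, data)         # cache the hot block for next time
        return data

    def _promote(self, block_id, data):
        self.cache[block_id] = data
        self.cache.move_to_end(block_id)
        while len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)    # evict least recently used block
```

Repeated reads of a hot working set are then served by the small fast tier, while cold data stays on the cheaper capacity media, which is the economic argument for hybrid designs.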
Eric Burgener, research director for storage at IDC: For 2017, more than 98% of the industry's all-flash array (AFA) revenue was driven by SCSI-based arrays. But by 2021, NVMe will replace SCSI as the protocol of choice in enterprise-class arrays, and end-to-end NVMe systems -- which include not just NVMe controllers, backplanes and devices, but also NVMe-over-Fabrics host connections -- will drive 25% of all AFA spending.
NVMe will penetrate first in real-time big data analytics environments and extremely latency-sensitive primary workloads, but will eventually open up opportunities for denser storage workload consolidation. We think, by 2020, 70% of Fortune 2000 companies will have at least one real-time -- as opposed to batch-oriented -- big data analytics workload that they consider mission-critical, a significant departure from the past, when very few big data analytics platforms were considered mission-critical.
Greg Wong, founder and principal analyst at Forward Insights: The NVMe enterprise SSD market will exceed the SATA enterprise SSD market by the end of 2018, driven by increasing adoption by OEMs and data centers transitioning from SATA SSDs.
Disruptive new memory technologies
Cobb: Fast storage meets up with big memory, and the software ecosystem will react to figure that out. The advancement of the semiconductor industry from flash to very fast flash -- what Samsung and others would call Z-NAND and what Intel and Micron are doing with 3D XPoint, both as a storage device and main memory for processors -- for the first time are coming together.
The new class of memory moves from the lab into a production platform this year. And toward the end of the year, it starts impacting the DRAM and memory world and the software that's written to manage memory versus storage. It's going to be every bit as disruptive as multithreading and multicore processors were for the past decade or so, and we're just at the beginning of it.
Like with many of these transitions, the first place the technology will show up is in those platforms where there's a more controlled ecosystem -- for example, in a storage array, where we don't need broad interoperability across dozens of vendors. Then, it will just start to migrate out across the whole IT infrastructure.
Milan Shetti, general manager for storage and big data at Hewlett Packard Enterprise: During the last 10 years, we have seen massive retooling of data centers, migrating storage arrays from spinning drives to flash storage. We are about to see a similar tectonic shift, as new media and protocols start to emerge. Customers will stand to benefit as the combination of storage class memory and NVMe slowly starts to seep into the data center.
2018 will be a year of early adopters, as the high cost of SCM will be a big hurdle for mass adoption. But stay tuned. We will see every storage vendor announce offerings to satiate this demand. There will be multimillion-dollar investments from the VC [venture capital] community. And as with any fast-break technology, only the strong will survive.
Metz: We're going to start to see an uptick for memory technologies like Samsung Z-NAND, QLC NAND flash, Optane and storage class memory. But the real question will be how quickly people understand the best places to use them. I've already heard some people talking about using persistent memory for normal memory operations that are over a network, which completely defeats the purpose of how this stuff is supposed to work. A little bit of education is going to have to come out to change the Wild West for persistent memory.
Mark Bregman, CTO at NetApp: For a long time, we've tried to reduce the latency of storage. We've gone to solid-state disk and new protocols like NVMe. The next trend is going to demand that we rearchitect our solutions to eliminate the protocol bottleneck between compute and storage.
New persistent memory architectures that take advantage of solid-state technology intimate with the compute will allow ultralow latency computing. But, at the same time, that nonvolatile or persistent memory will be managed as part of the storage system to avoid data loss, to allow copy management and all the things we've come to rely on in managing data. And that's going to allow people to find new ways to use very near-real-time data analytics to drive business opportunities.
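One way to get a feel for the byte-addressable persistence model Bregman describes is a small sketch using a memory-mapped file. Real persistent memory relies on DAX mappings and cache-flush instructions (for example, via libpmem); this stand-in only simulates load/store-style access, and the file name, record layout and helper names are invented for illustration.

```python
import mmap
import os
import struct

SIZE = 8  # one 64-bit counter, the entire "persistent" state in this sketch

def open_pmem(path):
    """Map a file so it can be updated with load/store-style accesses,
    standing in for a region of persistent memory."""
    if not os.path.exists(path):
        with open(path, "wb") as f:
            f.write(b"\x00" * SIZE)          # zero-initialize on first use
    f = open(path, "r+b")
    return f, mmap.mmap(f.fileno(), SIZE)

def bump(mm):
    """Increment the persistent counter in place: read the bytes at a
    fixed offset, store the new value back, then flush the mapping."""
    (value,) = struct.unpack_from("<Q", mm, 0)
    struct.pack_into("<Q", mm, 0, value + 1)
    mm.flush()                               # analogue of a pmem cache flush
    return value + 1
```

Because the store lands in the mapped file rather than the volatile heap, the counter survives a process restart, which is the property that lets persistent memory be managed as part of the storage system rather than as ordinary RAM.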
Evangelos Eleftheriou, fellow at IBM: In-memory computing will accelerate various machine learning [and] deep learning workloads by orders of magnitude and will become an integral component of cognitive computing systems.
This new field will see accelerated progress in 2018, in particular, as phase-change memory matures. Because of its low power consumption, computational memory will also be crucial for IoT [internet of things] and mobile platforms for inference at the edge, traversing speech, vision and NLP [natural language processing] workloads.