Flash Memory Summit attendees spent the second day of the virtual conference learning about the problems holding up persistent memory adoption and PM use cases, how flash is revolutionizing the future of cloud and enterprise storage, NVMe, and more.
The 13th annual Flash Memory Summit Best of Show Awards and the SuperWomen in Flash Leadership Award ceremonies also took place.
Whether you made it to the live conference or missed it, in this content library from day two, see what was unveiled and deepen your knowledge.
Looking for more on-demand content? Keynote presentations can be found here, session presentations from day one are located here and you can click here to access the presentations from the final day of the conference.
NVMe and NVMe/oF
Session A-5: Accelerating Applications for a Competitive Edge
Storage developers and IT professionals should know about the many methods that are available for accelerating their storage systems to meet the demands of larger data streams, new applications (such as artificial intelligence and IoT), and extra requirements such as cybersecurity, privacy and scalability. Such methods include Ethernet-attached SSDs to avoid protocol conversions, use of multi-level flash to provide lower-cost archiving, and the implementation of Zoned Namespaces to allow the automatic tailoring of flash storage management to suit particular applications.
How Zoned Namespaces Improve SSD Lifetime, Throughput and Latency
The NVM Express Zoned Namespace (ZNS) Command Set interface is a new command set specification that defines the zoned storage interface for NVMe SSDs, allowing the storage device and host to collaborate on data placement. Compared to a typical SSD, a ZNS SSD can expose more of the physical media and has significantly higher steady write throughput, while improving I/O access latencies. This talk introduces the NVMe ZNS Command Set, the zoned storage model and the software ecosystem, and it provides an update on each. Furthermore, we show the benefits (lifetime, throughput and latency) of ZNS SSDs using real-world workloads and comparing them to typical SSDs. Link to Presentation
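The collaboration on data placement described above comes from the zoned storage model's core rule: writes within a zone must land at the zone's write pointer. As a rough sketch of that model (the zone size, exception type and method names below are illustrative assumptions, not the NVMe interface itself):

```python
# Toy model of the ZNS zoned storage interface. Zone sizes, error
# behavior and method names are simplified assumptions for illustration.

class Zone:
    """A zone accepts only sequential writes at its write pointer."""

    def __init__(self, start_lba, capacity):
        self.start_lba = start_lba
        self.capacity = capacity           # writable blocks in the zone
        self.write_pointer = start_lba     # next LBA that may be written

    def append(self, num_blocks):
        """Write at the pointer and return the LBA that was used."""
        if self.write_pointer + num_blocks > self.start_lba + self.capacity:
            raise IOError("zone boundary error: zone is full")
        lba = self.write_pointer
        self.write_pointer += num_blocks
        return lba

    def reset(self):
        """Invalidate the zone's contents and rewind the write pointer."""
        self.write_pointer = self.start_lba


# A ZNS namespace is an array of equally sized zones; host and device
# agree on placement because writes are strictly sequential per zone.
zones = [Zone(start_lba=i * 1024, capacity=1024) for i in range(4)]
first = zones[0].append(8)
second = zones[0].append(8)
```

Because the device never has to remap random writes within a zone, it can skip much of the internal bookkeeping that drives garbage collection on a conventional SSD, which is where the throughput and lifetime gains come from.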
Software-Enabled Flash for Hyperscale Data Centers
To obtain the greatest efficiency at scale, the hyperscale data center is redefining digital storage. Using flash effectively and efficiently for stable and predictable latency is critical to achieving this goal in ever-changing cloud workloads. Software-enabled flash redefines digital storage by combining software flexibility, host control and flash-native semantics into a flash-native API. Together with purpose-built flash-based modules, software-enabled flash maximizes the value of flash memory for large-scale cloud providers. This combination of technologies fundamentally changes the relationship between the host and solid-state storage in a way that bypasses the overhead of legacy HDD storage to enable the use of flash at its natural speed under host control. This approach provides a pathway to flash-native architectures that include block, ZNS, custom flash-native implementations and much more. Link to Presentation
Making NVMe Drives Handle Everything from Archiving to QoS
Zoned Namespaces (ZNS) represent the first step toward the standardization of open-channel SSD concepts in NVMe. Specifically, ZNS brings the ability to implement data placement policies in the host, thus providing a mechanism to lower the write-amplification factor, lower NAND overprovisioning and tighten tail latencies. Following its counterpart in the high-capacity HDD world -- shingled magnetic recording HDDs -- ZNS is initially targeted at archival workloads. We believe that ZNS can also address the other core concept in the open-channel SSD architecture: I/O predictability. In this talk, we cover how the existing ZNS architecture can be suited for better host data placement and scheduling than traditional block I/O. Moreover, we introduce the concept of Zone Random Write Area and how it helps ZNS target multi-tenant environments. Link to Presentation
Accelerating Flash for a Competitive Edge in the Cloud and Beyond
NVMe's Zoned Namespaces functionality has created a large array of new opportunities for end users. This radical approach to NAND SSD data placement and management enables key technologies such as QLC to become mainstream, while also giving applications and users a better method to optimize and streamline operations, resulting in higher performance and longer life for flash. This talk addresses what these changes mean for real-life applications focusing on the most popular NoSQL databases, which are largely used in the cloud and many modern applications, spanning from Cassandra to MongoDB to Heterogeneous-Memory Storage Engine. The session discusses the results of benchmarks (such as YCSB and Socialite) that most represent real-life applications and uses the benchmarks to highlight the value and new opportunities that these new storage technologies bring to the market. We also cover different topologies and configurations like NVMe-oF to provide a thorough view of what this technology means for the real world. Link to Presentation
Session D-6: Optimizing NVMe-oF Storage with EBOFs and Open Source Software
Data-driven applications such as databases, analytics and AI/ML run much faster on multi-server systems when the available flash storage is networked to maximize usage. However, an obvious problem is how to network standard NVMe drives at reasonable cost while achieving high performance levels. One emerging solution involves creating an NVMe-oF Ethernet Bunch of Flash combined with the open source Heterogeneous-Memory Storage Engine software. Trial runs show good performance with multiple workloads. Link to Presentation
Session B-5: Persistent Memory (PM) Offers Storage at Memory Speeds
This session presents three areas of persistent memory: What is available today and what persistent memory can do for you; use cases showing how persistent memory is changing applications and infrastructure; and perspectives on technology and market directions.
Annual Update on Persistent Memory
This session is a state of the union address for the industry effort underway to deliver this disruptive technology -- persistent memory (PM). We examine industry advances of persistent memory media, the new devices and form factors for persistent memory attachment, remote and direct-attached PM with low latency interfaces such as CXL, and describe the best-fit applications and use cases for persistent memory. Link to Presentation
Speed Dating With PM Application Developers
Dive into a rapid-fire presentation on three examples of how persistent memory is changing the landscape in appliances, infrastructure and applications from the perspective of a memory startup, a social networking company, and a cloud and enterprise software provider. These three cases highlight the motivation for using PM and the delivered results. Link to Presentation
Technical and Market Directions for Persistent Memory
In this session, we look ahead to how persistent memory technology is evolving, including maximizing performance in next-generation applications, and PM market growth projections. Link to Presentation
Session B-6: How Can We Solve the Problems Holding Up Persistent Memory Adoption?
Several factors have kept persistent memory from gaining wider adoption. Key ones include high cost, a lack of high-performance processors and buses, limited system software support, and dependence on relatively expensive non-flash technologies. Breakthroughs are beginning to appear in all these areas, and systems using persistent memory are becoming far more common. Link to Presentation
Session C-5: IDC Enterprise/Cloud Storage: Part 1
Flash memory is revolutionizing cloud and enterprise storage. It is contributing toward the digital transformation that involves the management of an increasingly diverse application and data portfolio extending from edge to core. Organizations must scale their IT environments while supporting a digital infrastructure that delivers strategic marketing advantage. New technologies such as flash, new paradigms and new operational models are all needed to meet digital business challenges. Meanwhile, cloud services are moving into production applications and the resulting cloud workloads require flash memory to provide the required throughput, latency and ROI.
Introduction to Enterprise/Cloud Storage
This presentation explores the evolution of flash in the enterprise over years and provides insights into how emerging technologies have impacted the market. Link to Presentation
Flash in the Era of Digital Infrastructure: What CIOs Need to Know
This presentation discusses the ways in which flash plays a key and enabling role in accelerating the adoption of digital infrastructure. In this session, get a review of the systems, platforms and technologies that CIOs should have on their radar for eventual adoption. See how the future of digital infrastructure is distributed -- a core-edge-endpoint continuum. Additionally, the presentation covers the changing nature of applications and workloads, and addresses infrastructure for time-to-value workloads such as AI and massively parallel computing. Finally, the session looks at the shift from hyperconverged to composable and disaggregated infrastructure, then goes into edge computing, and concludes with essential guidance for CIOs and IT decision makers. Link to Presentation
Software and Services Define Future Infrastructure Consumption
While initial growth of public cloud was driven by startup and developer use cases, public cloud is becoming a consideration for nearly all production applications today. In this session, IDC research director Deepak Mohan discusses these growth trends, the drivers behind this increased acceptance, and implications. Link to Presentation
Cloud Workloads Driving Flash Adoption
The growing adoption of cloud by enterprise IT has been marked by a corresponding increase in flash use in cloud deployments. This presentation discusses the top cloud workloads driving flash use in cloud deployments, both from a volume and a growth perspective. In this session, IDC research director Kuba Stolarski also touches on insights into how and why cloud system builders are using solid-state storage, including results from IDC's cloud infrastructure index research. Link to Presentation
Session C-6: IDC Cloud Storage: Part 2
SSD storage volumes have been a common feature in public clouds for several years. Many service options are now widely available. New developments and issues include hybrid SSD storage options and the tradeoffs to consider when selecting among consumption models.
Evolution of SSD Storage in Public Cloud
SSD storage volumes have been a common feature in public clouds for several years. Many service options are now widely available. New developments and issues include hybrid SSD storage options and the tradeoffs to consider when selecting among consumption models. Link to Presentation
Session C-7: IDC Enterprise Solid State Storage: Strategies and Futures
SSDs have helped transform the enterprise through their use in servers, storage tiers and all-flash arrays. The technology continues to advance rapidly with the introduction of NVMe, persistent memory, storage class memory, QLC media and computational storage. Such advances will surely impact enterprise storage systems and the cloud. Recently, emerging solid-state memory technologies have become available as system options, helping to enable the digital transformation now underway in most enterprises. At the same time, end users will encounter new issues as they transition to NVMe-based technologies. Meanwhile, recovery requirements have become more stringent, and more data protection platforms and processes are leveraging flash media. End users are employing flash to meet new and changing data protection requirements.
Solid-State Storage Developments
Whether it is in the server, used as a tier of storage, or an all-flash array, solid state is a technology that has already transformed the enterprise. Yet the technology is still advancing at a rapid pace with the introduction of NVMe, persistent memory, storage class memory, QLC media and computational storage. This is poised to impact enterprise storage systems and the cloud. In this session, IDC research VP Jeff Janukowicz explores the trends affecting the market and factors influencing adoption of these technologies and reveals IDC's outlook in the context of these ever-changing dynamics. Link to Presentation
Relevant New Storage Technologies for Digitally Transforming Enterprises
Just within the last two years, a number of new capabilities, based on emerging solid-state memory technologies, have become available as systems options. IDC research VP Eric Burgener reviews a number of these (including many of the component-level technologies covered in Jeff Janukowicz's prior session), discussing how vendors are leveraging them, what they mean for customers, and how they are helping to enable the digital transformation that is under way at the majority of enterprises today. He then provides advice for end users to keep in mind as they transition to NVMe-based technologies. Link to Presentation
IDC Closing Remarks
In this presentation, Eric Burgener provides closing comments about solid-state technologies and how you might consider using those in your own data center and as you build your infrastructure. Link to Presentation
The Role of Flash in Enterprise Data Protection
As enterprises undergo digital transformation, data protection has continued to evolve. As recovery requirements become more stringent, more data protection platforms and processes are leveraging flash media. In this session, IDC research director Phil Goodwin discusses how customers are using flash today to meet their evolving data protection requirements, and the business benefits this approach brings to the table. He also discusses impending future developments at the intersection of flash and data protection. Link to Presentation
Session C-8: IDC Real-World Applications and Solutions for Persistent Memory
Persistent memory (storage at memory speeds) seems like an obvious gain for most applications. However, progress in the area has been quite slow because of the lack of standards, suitable interfaces and systems software. Customers have bought into the latest entries from companies such as Intel in search of needed performance boosts at reasonable cost. New hardware and software should lead to even further market penetration in a variety of applications, including AI/ML, real-time analysis, high-performance computing, virtual and augmented reality, IoT and cybersecurity. Link to Presentation
Session D-5: Storage Processors Accelerate Database Operations
Storage processors offload legacy software to high-performance hardware. Legacy storage stacks waste flash capacity and performance, forcing users to deploy expensive, overprovisioned SSDs. For example, high write amplification limits low-cost technologies such as QLC and PLC to low-performance applications. Storage processors add value for many data-intensive applications, including basic block storage benchmarks as well as database applications. An important point is that they work with industry-standard SSDs. Link to Presentation
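The claim that high write amplification limits QLC and PLC is ultimately endurance arithmetic: every host byte written costs the NAND several bytes of program/erase wear. A back-of-the-envelope sketch (the drive capacity, P/E rating and workload figures below are made-up illustrative numbers, not vendor data):

```python
# Illustrative write-amplification math. All drive and workload figures
# here are hypothetical examples, not measurements from the session.

def write_amplification(nand_bytes_written, host_bytes_written):
    """WAF = bytes the controller writes to NAND per host byte written."""
    return nand_bytes_written / host_bytes_written

def drive_lifetime_years(capacity_tb, rated_pe_cycles, host_tb_per_day, waf):
    """Years until the NAND exhausts its rated program/erase budget."""
    total_nand_tb = capacity_tb * rated_pe_cycles   # lifetime NAND writes
    nand_tb_per_day = host_tb_per_day * waf          # actual daily NAND wear
    return total_nand_tb / nand_tb_per_day / 365

# Hypothetical QLC drive: 30 TB, 1,000 P/E cycles, 10 TB of host
# writes per day, comparing a legacy stack with host-managed placement.
baseline = drive_lifetime_years(30, 1000, 10, waf=4.0)
improved = drive_lifetime_years(30, 1000, 10, waf=1.1)
```

With these assumed numbers, cutting the write-amplification factor from 4.0 to near 1.0 stretches the same low-endurance media from roughly two years of life to over seven, which is why placement-aware storage processors make QLC viable for heavier workloads.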
Session A-6: Which SSD Is Best for Your Application?
NVMe SSDs are now mainstream in most data center applications. But how about applications that require higher performance, lower latency, more flexibility, easier management, or more scalability? For them, a variety of choices now exist. There are Ethernet SSDs, NVMe-oF SSDs, Optane SSDs, MRAM SSDs, zoned namespace (ZNS) SSDs, key-value SSDs, computational storage SSDs, and other variations. How do storage designers determine when their applications require something other than the standard device -- and which one would do the job best? Obvious use cases have emerged for some variations, while still others are currently a solution waiting for a really good problem.
Session B-7: Ethernet-Attached SSDs Lead to Higher-Performing Storage
Ethernet-attached SSDs offer a simple way to increase storage performance and reduce cost. They are an upgrade to current designs based on PCIe switches, Ethernet NICs and compute modules used to do protocol conversions such as NVMe-oF on Ethernet to NVMe on PCIe. The new designs replace PCIe switches with cheaper Ethernet switches, and make the compute modules (a common limiting factor in performance and scalability) unnecessary. JBOFs using Ethernet-attached SSDs have been constructed, and the resulting systems show excellent throughput. The same method could be used to provide a simple, inexpensive interface for persistent memory as well.
Evolution of Ethernet-Attached NVMe-oF Devices and Platforms
As the NVMe-oF ecosystem continues to mature, storage systems now have design choices for the type of external and internal fabrics to use, as well as the various attach points. Several Ethernet-based protocols (RoCE, iWARP, TCP) have also emerged as design choices for storage fabrics and interfaces. There are unique properties and trade-offs associated with those choices, as well as different implementations, acceleration paths and management models. An industry discussion has ensued on how close to the storage the Ethernet interface should be carried: data center, rack or even device. This presentation provides a broad view of Ethernet-attached NVMe-oF disaggregated storage systems and what it would take for successful wide deployment of those systems. Link to Presentation
NVMe at Scale: A Radical New Approach to Improve Performance and Utilization
Explore how applications ranging from high-end native cloud to traditional enterprise can transform by making the right infrastructure choices that enable NVMe to scale, with minimum limitations. This session introduces a radical new architecture for disaggregating compute and storage with simple building blocks. The modern Ethernet Bunch of Flash architecture combined with mature TCP and growing NVMe-oF adoption can improve use of compute and storage resources and deliver performance at scale. Learn how new use cases from HPC to next-gen HCI can leverage this new paradigm of accessible performance without completely rearchitecting your data center. Link to Presentation
Benefits of Native NVMe-oF SSDs
NVMe over Fabrics enables large numbers of NVMe-based storage arrays to be networked, helping to improve asset utilization, increase application performance and reduce Capex and Opex. Currently the most common implementation approach being considered is at the storage array level. An alternative and more cost-effective approach is to deploy NVMe-oF natively at the SSD level. This approach can yield greater scalability, performance, reliability, availability and serviceability while helping to contain compute and memory investment costs. This approach also overcomes the limitation of PCIe by leveraging market-proven Ethernet-based networks. Link to Presentation
Session D-7: New Ways to Improve SSD Management and Performance
As NVMe becomes more entrenched in data centers, new methods have emerged for improving its management and performance. Approaches such as I/O determinism allow system software to vary underlying placement algorithms to suit particular approaches. There are also new ways to monitor the health of SSDs and to optimize their performance using AI methods.
Optimizing NVMe Drives for Your Applications
IBM's Zurich Research Laboratory developed SALSA (Software Log Structured Array), a flexible flash translation layer that aggregates and manages a group of inexpensive SSDs. It lives up to its name by adding a bit of spice to the managed SSDs. SALSA has a robust and simple LSA that allows for efficient garbage collection and reclamation, and in doing so can reduce write amplification to ensure higher endurance and performance. This talk explains how SALSA has evolved to work effectively with Zoned Namespaces. Link to Presentation
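The log-structured idea behind SALSA is that updates are written out of place and a garbage collector later reclaims segments holding mostly stale data. A minimal sketch of that pattern with a greedy victim-selection policy (the structures and names below are illustrative assumptions, far simpler than SALSA's actual design):

```python
# Greedy garbage collection in a toy log-structured array. This is a
# simplified illustration of the general technique, not SALSA's design.

class LogStructuredArray:
    def __init__(self, num_segments, blocks_per_segment):
        self.blocks_per_segment = blocks_per_segment
        # valid[i] = set of live logical blocks stored in segment i
        self.valid = [set() for _ in range(num_segments)]
        self.location = {}       # logical block -> segment index
        self.active = 0          # segment currently being filled
        self.relocated = 0       # blocks copied by GC (drives WAF)

    def write(self, lba):
        """Out-of-place update: invalidate the old copy, append the new."""
        old = self.location.get(lba)
        if old is not None:
            self.valid[old].discard(lba)
        if len(self.valid[self.active]) >= self.blocks_per_segment:
            # Sketch only: a real allocator would pick a free segment.
            self.active = (self.active + 1) % len(self.valid)
        self.valid[self.active].add(lba)
        self.location[lba] = self.active

    def gc(self):
        """Greedy policy: reclaim the segment with the fewest valid blocks."""
        victim = min(
            (i for i in range(len(self.valid)) if i != self.active),
            key=lambda i: len(self.valid[i]),
        )
        for lba in list(self.valid[victim]):
            self.relocated += 1
            self.write(lba)   # write() discards the stale copy from victim
        return victim


lsa = LogStructuredArray(num_segments=2, blocks_per_segment=4)
for lba in range(5):          # fill segment 0, spill into segment 1
    lsa.write(lba)
lsa.write(0)                  # overwrites invalidate blocks in segment 0
lsa.write(1)
```

Every block the collector relocates is extra NAND traffic, so `relocated` is the knob that determines write amplification; smarter data placement (as with ZNS) shrinks it by keeping data with similar lifetimes together.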
Monitoring the Health of NVMe SSDs
NVMe technology was built from the ground up for SSDs, and the original NVMe specification included a standard SMART (Self-Monitoring, Analysis and Reporting Technology) log that monitored errors, device health and endurance. Many capabilities have been built into NVMe technology since, including enhanced error reporting, logging, management, debug and telemetry. These capabilities can be built into tools ranging from open source management tools to OEM management consoles to help monitor the status and health of the SSD (such as notifying users when an SSD failure occurs). Link to Presentation
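A management tool built on this log mostly boils down to reading the SMART / Health Information page and applying policy thresholds. A sketch of that evaluation step (the dictionary keys loosely follow nvme-cli's JSON output, and the thresholds are illustrative policy choices, not spec-mandated values):

```python
# Sketch of interpreting an NVMe SMART / Health Information snapshot.
# Field names loosely mirror nvme-cli JSON output; thresholds are
# example policy choices, not values required by the NVMe spec.

def assess_ssd_health(smart):
    """Return human-readable warnings for a SMART log snapshot."""
    warnings = []
    if smart["critical_warning"] != 0:
        warnings.append("controller raised a critical warning bit")
    if smart["avail_spare"] < smart["spare_thresh"]:
        warnings.append("available spare below vendor threshold")
    if smart["percent_used"] >= 90:   # endurance estimate; may exceed 100
        warnings.append("rated endurance nearly exhausted")
    if smart["media_errors"] > 0:
        warnings.append("unrecovered media errors logged")
    return warnings


healthy = {"critical_warning": 0, "avail_spare": 100,
           "spare_thresh": 10, "percent_used": 3, "media_errors": 0}
worn = {"critical_warning": 0, "avail_spare": 8,
        "spare_thresh": 10, "percent_used": 97, "media_errors": 2}
```

A console would poll this periodically (or subscribe to asynchronous event notifications) and alert operators before the drive reaches a failure state.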
Optimizing SSD Performance With AI and Real-World Workloads
The best way to optimize SSD performance is to use real-world applications, but how should designers analyze the effects of attempts to improve wear leveling, task scheduling, pre-fetching and caching and to minimize write amplification and garbage collection? Furthermore, how can they predict I/O stream traffic? The answer is to use AI techniques to develop a model from the results obtained using the workload data. The model learns from the results presented to it during training and then optimizes performance during execution. This talk demonstrates the use of these methods on advanced XL-flash SSDs, in which a recurrent neural network serves as the underlying model and produces substantial performance improvements. Link to Presentation
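The talk's model is a recurrent neural network; as a far simpler stand-in, a first-order Markov predictor illustrates the same train-then-predict idea: learn access patterns from a trace, then guess the next request so the device can prefetch. The trace and region names below are made-up examples:

```python
# Simplified stand-in for workload-driven I/O prediction: a first-order
# Markov model over access regions. The real session uses an RNN; this
# toy version and its trace are illustrative assumptions only.

from collections import Counter, defaultdict

class NextAccessPredictor:
    def __init__(self):
        # region -> Counter of regions observed to follow it
        self.transitions = defaultdict(Counter)
        self.prev = None

    def observe(self, region):
        """Train online on each I/O request's target region."""
        if self.prev is not None:
            self.transitions[self.prev][region] += 1
        self.prev = region

    def predict(self, region):
        """Most frequently observed successor, or None if unseen."""
        followers = self.transitions.get(region)
        if not followers:
            return None
        return followers.most_common(1)[0][0]


trace = ["logA", "dataB", "logA", "dataB", "logA", "dataC"]
p = NextAccessPredictor()
for region in trace:
    p.observe(region)
```

An RNN generalizes this by conditioning on a long history of requests rather than only the previous one, which is what lets it capture the complex patterns of real application workloads.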
Session B-8: Top Five Ways Ethernet SSDs Can Improve Storage Solutions
Ethernet-attached SSDs offer a simple way to increase storage performance and reduce cost. They are an upgrade to current designs based on PCIe switches, Ethernet NICs, and compute modules used to do protocol conversions such as NVMe-oF over Ethernet to NVMe on PCIe. The new designs replace PCIe switches with cheaper Ethernet switches, and make the compute modules (a common limiting factor in performance and scalability) unnecessary. JBOFs using Ethernet-attached SSDs have been constructed, and the resulting systems show excellent throughput. The same method could be used to provide a simple, inexpensive interface for persistent memory as well. Link to Presentation
Session D-8: Using the New EDSFF (E3) SSDs Effectively
The new EDSFF form factors (such as E3.S) allow higher system density and more effective forms for high-performance racks. They also provide a common connector for GPUs and advanced network interface cards. Builders of servers, all-flash arrays and storage appliances can use them to develop systems with unparalleled flexibility, allowing a single configuration to serve a variety of applications such as databases, AI, augmented and virtual reality, video and image processing, IoT and cybersecurity. They can also handle PCIe 5.0, CXL, 100GbE and other emerging interfaces. They offer higher power budgets and better signal integrity than the 2.5-inch form factor built to fit a widely used hard drive size. New development vehicles are also available to help designers implement their systems rapidly. Link to Presentation
Session A-7: New High-Speed Interfaces for Persistent Memory and Coprocessors
CXL and Gen-Z provide high-speed interfaces for connections inside systems and system-to-system, respectively. CXL supports persistent memory, coprocessors (such as GPUs and AI chips), and accelerators (such as FPGAs). It offers a single link for I/O, cache and memory (including coherency); widespread support from major equipment makers, and the ability to meet the needs of emerging applications such as AI/ML and cloud computing. Gen-Z provides high-speed, low-latency access to data and devices via direct-attached, switched or fabric topologies. It uses memory-semantic communications to move data with minimal overhead. It delivers maximum performance without sacrificing flexibility and offers built-in component-level security.
CXL: A Basic Tutorial
Get an introduction to Compute Express Link (CXL), a new breakthrough high-speed CPU interconnect that enables high-speed, efficient communication among the CPU, platform enhancements and workload accelerators. Link to Presentation
The New Face of High-Speed Interfaces
Compute Express Link (CXL) is a high-speed CPU-to-device and CPU-to-memory interconnect designed to accelerate next-generation data center performance. This presentation provides an update on the latest advancements in CXL specification development, its use cases and industry differentiators. Learn how CXL technology allows resource sharing for higher performance; reduces complexity and lowers overall system cost; permits users to focus on target workloads as opposed to redundant memory management; builds upon PCI Express infrastructure and supports new use cases for caching devices and accelerators, accelerators with memory, and memory buffers. The CXL Consortium has released the CXL 1.1 Specification and the next generation of the spec is currently under development. Consortium members can contribute to spec development and help shape the ecosystem. Link to Presentation
Gen-Z: An Ultra-High-Speed Interface for System-to-System Communication
As the memory, storage and processing ecosystem continues to evolve and grow, the open standard specifications developed by the Gen-Z Consortium enable robust, secure and composable memory-semantic fabrics for expansion and disaggregation of memory and processors. The Gen-Z interconnect allows shared and provisioned access to Gen-Z attached memory from any of the compute elements (e.g., CPU, GPU, FPGA). This avoids the time and power consumption associated with current methods that require moving data to the compute memory before applications can be executed. This necessary work is helping to shape the ecosystem and demonstrates the importance of the memory fabric, as Gen-Z attaches to external (and internal) pools of resources such as memory, accelerators and NICs. In this presentation, learn about how Gen-Z will continue to shape the future of the data center and explore Gen-Z's potential impact for memory-centric computing. Link to Presentation
Session A-8: Where the New High-Speed Interfaces Fit and How They Work Together
CXL and Gen-Z are new interfaces that allow for the ultra-high-speed connection of all kinds of new technologies including accelerators, coprocessors, memories and other devices. They support various types of persistent memory, coprocessors (including GPUs and AI chips), and accelerators (including FPGAs and more advanced devices). CXL is intended for connections within a system, whereas Gen-Z serves system-to-system connections. Both offer high performance, flexibility and security. They can meet the needs of such applications as real-time analytics, AI/ML, AR/VR and high-performance computing. Link to Presentation
SuperWomen in Flash Leadership Award
The SuperWomen in Flash Leadership Award, established in 2018, honors a woman who has demonstrated outstanding leadership in the flash memory technology ecosystem. Link to Presentation
Virtual FMS Show Awards
The 13th annual Flash Memory Summit Best of Show Awards represent industry recognition of a company's products and solutions. Link to Presentation