
Storage class memory could benefit VDI storage, remote work

SCM has the potential to meet the IOPS and latency requirements of VDI workloads, but it isn't clear whether the technology is worth the extra cost. We look at the ways SCM could fit into VDI storage.

Storage class memory promises to improve performance for various applications. Given the role flash has played in improving virtual desktop infrastructure's performance, some IT pros may be wondering if VDI could benefit from a move to SCM. This question is particularly significant as enterprises ramp up VDI capabilities to support more remote workers in response to the global pandemic.

SCM is a young technology, and there has been little discussion about its potential use for VDI storage. In addition, SCM can be implemented in multiple ways, and it's not clear which approach might best serve VDI workloads.

Organizations deploy VDI to reduce the costs and complexities that come with managing enterprise desktops. However, VDI has challenges, especially when it comes to storage. A VDI platform requires storage that meets the performance demands of a virtual desktop, regardless of fluctuating workflows and desktop usage patterns. Inadequate storage can negatively affect performance and productivity and lead to poor user experience.

Determining VDI storage requirements

A storage system can't meet VDI performance demands if IOPS are too low or latency too high. The system must be able to accommodate I/O storms, backup operations, software updates, antivirus scans and unpredictable workflow patterns. If it can't meet these demands, the VDI project is destined to fail.

When planning VDI storage, an IT team must consider many factors, such as the number of desktops, amount of data, supported applications and type of end users. Knowledge workers, for example, might generate greater IOPS loads than task or productivity workers, and workers who perform mission-critical operations require extremely low latencies. In addition, the storage system must accommodate variable workloads, while being able to scale when needed.
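As a rough illustration of this sizing exercise, the sketch below aggregates IOPS across a hypothetical user mix. The per-user IOPS figures and the boot-storm multiplier are illustrative assumptions, not vendor guidance; any real deployment should be sized from measured workload profiles.

```python
# Rough VDI storage sizing sketch: aggregate steady-state and peak IOPS
# by user type. All per-user figures below are illustrative assumptions.
USER_PROFILES = {
    "task":      {"count": 300, "iops": 5},   # task/productivity workers
    "knowledge": {"count": 150, "iops": 15},  # knowledge workers
    "power":     {"count": 50,  "iops": 40},  # mission-critical/power users
}
BOOT_STORM_MULTIPLIER = 5  # assumed burst factor for morning boot storms

steady = sum(p["count"] * p["iops"] for p in USER_PROFILES.values())
peak = steady * BOOT_STORM_MULTIPLIER
print(f"Steady-state IOPS: {steady}")     # 300*5 + 150*15 + 50*40 = 5750
print(f"Peak (boot storm) IOPS: {peak}")  # 5750 * 5 = 28750
```

The gap between the steady-state and peak numbers is why burst events such as boot storms, not average load, tend to dictate the storage choice.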

A variety of storage technologies can support VDI workloads, including all-flash storage arrays, hyper-converged infrastructures (HCIs), Fibre Channel SANs and software-defined storage (SDS). But IT teams now have another option -- SCM -- which promises to increase IOPS and reduce latency, exactly what VDI calls for.

SCM devices are byte addressable and nearly as fast as dynamic RAM (DRAM). Unlike DRAM, however, SCM devices are nonvolatile, retaining data when power is lost. At the same time, SCM supports block-level access like a NAND drive, but it's much faster, resulting in more possible use cases. Currently, SCM technologies support three primary use cases: SCM as storage cache, SCM in place of NAND and SCM in the server memory space.

SCM as storage cache

SCM could potentially benefit VDI as a caching layer in an all-flash array, replacing more expensive DRAM. SCM is nearly as fast as DRAM, but cheaper to implement, especially at scale. In this way, the caching layer can support more data and retain data when power is lost.
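The caching idea can be sketched with a toy model: a small, fast tier (standing in for SCM) absorbs repeated reads in front of a slower flash backend. The class, capacity and eviction policy below are illustrative assumptions, not how any vendor's array implements its cache.

```python
from collections import OrderedDict

class SCMReadCache:
    """Toy model of an SCM caching tier in front of a flash backend."""
    def __init__(self, backend, capacity_blocks):
        self.backend = backend          # dict-like: block number -> data (the "flash" tier)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()      # stands in for the SCM layer
        self.hits = self.misses = 0

    def read(self, block):
        if block in self.cache:
            self.hits += 1
            self.cache.move_to_end(block)   # LRU bookkeeping: mark as recently used
            return self.cache[block]
        self.misses += 1
        data = self.backend[block]          # slow path: read from flash
        self.cache[block] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used block
        return data

backend = {n: f"data-{n}" for n in range(1000)}
cache = SCMReadCache(backend, capacity_blocks=100)
for block in [1, 2, 1, 3, 1]:               # repeated reads of block 1 hit the cache
    cache.read(block)
print(cache.hits, cache.misses)             # 2 hits, 3 misses
```

In a boot storm, many desktops read the same golden-image blocks, so the hit rate of such a tier is high, which is what makes caching attractive for VDI.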

Storage caching can help accelerate I/O performance and reduce read latency, making it well suited to handle VDI storage demands, such as morning boot storms. For example, Hewlett Packard Enterprise (HPE) offers Memory-Driven Flash for its 3PAR and Nimble Storage products. HPE Memory-Driven Flash uses Intel Optane DC SSDs as the caching layer. The Optane SSDs are based on 3D XPoint, an SCM technology developed by Intel and Micron.

According to HPE, SCM-based storage arrays can read data 10 times faster than traditional all-flash arrays, delivering ultralow latency for mixed workloads, including databases and other data-intensive applications.

HPE doesn't claim its Memory-Driven Flash arrays can benefit VDI workloads, although the company does recommend its traditional 3PAR and Nimble Storage products for VDI. 3PAR StoreServ, for example, uses DRAM for its caching layer, which can be extended into the flash SSDs for application acceleration. Perhaps Memory-Driven Flash could be used in the same way to improve virtual desktop performance.


Pure Storage has also integrated Optane DC into its FlashArray//X systems to provide an SCM-based cache. Like HPE, Pure Storage makes no specific claims about FlashArray//X and VDI, but the vendor does say that its systems are well-suited for read-intensive and latency-sensitive applications and ones that use DAS, all of which can apply to VDI. While the vendors don't confirm SCM's advantages for VDI, the technology's capabilities suggest it would be a viable approach to improving VDI storage performance.

SCM in place of NAND

Another emerging use case is replacing NAND flash drives with SCM drives. The SCM devices could be used on their own or in conjunction with flash SSDs in a hybrid configuration, similar to how SSDs and HDDs are combined to create a hybrid storage product. Although SCM drives cost more than NAND, they deliver greater IOPS and lower read and write latencies, making SCM worth the investment in some scenarios.

One of the advantages of an SCM device is that data is written in place, unlike NAND flash, which requires cells be erased before data can be written. As a result, SCM drives can deliver faster write operations and lower write latency, while providing greater endurance.
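A back-of-the-envelope model shows why write-in-place matters. The timings and block geometry below are illustrative assumptions, not datasheet values; real NAND controllers mitigate the worst case with over-provisioning and garbage collection, but they cannot eliminate the erase cycle.

```python
# Toy latency model contrasting write-in-place (SCM) with
# erase-before-write (NAND). All timings are illustrative assumptions.
NAND_PAGE_WRITE_US = 200    # assumed time to program one NAND page
NAND_BLOCK_ERASE_US = 2000  # assumed time to erase one NAND block
SCM_WRITE_US = 10           # assumed SCM write-in-place time

def nand_rewrite_cost(pages_in_block=64):
    # Worst case for rewriting one page in a full block:
    # erase the whole block, then reprogram every page in it.
    return NAND_BLOCK_ERASE_US + pages_in_block * NAND_PAGE_WRITE_US

def scm_rewrite_cost():
    return SCM_WRITE_US  # write-in-place: no erase cycle needed

print(nand_rewrite_cost())  # 2000 + 64*200 = 14800 us worst case
print(scm_rewrite_cost())   # 10 us
```

The erase cycle is also what wears NAND cells out, which is why write-in-place translates into the endurance advantage mentioned above.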


Several vendors offer SCM SSDs, including Dell EMC, Intel, Kioxia (formerly Toshiba Memory) and Micron. Intel has been at the forefront of this effort with its Optane DC drives. The drives support PCIe and NVMe and are available in U.2, M.2 and add-in-card form factors. Compared to flash SSDs, the Optane DC drives offer faster and more reliable performance, with latencies typically less than 10 microseconds (μs). In comparison, NAND latency runs between 10 μs and 100 μs. The SCM drives also maintain low latency rates more consistently as load levels increase.

According to Intel, its Optane DC drives are suited to SDS and HCIs, two data center technologies that play key roles in VDI deployments. In fact, Intel states that Optane SSDs can benefit VMware vSAN, an SDS approach for HCI environments. In the early days of hyper-convergence, VDI was commonly cited as one of its primary use cases, and it's still considered a top reason for choosing HCI.

As with storage cache, it will take more than conjecture to conclude SCM is a viable alternative to NAND flash for VDI workloads, but it shows enough promise to warrant consideration.

SCM as server memory

Despite the performance advantages that SCM SSDs can offer, communicating across the PCIe bus still adds overhead. Because of this, some workloads might benefit from implementing SCM directly in the server memory, alongside the DRAM modules. SCM isn't as fast as DRAM, but it can persist data and is byte-addressable and cheaper, making it more affordable to scale out the server's memory.

Using SCM in conjunction with DRAM extends the server's memory to better support memory-intensive applications. In this way, more data can be processed in memory, resulting in performance beyond what's possible on PCIe-connected SSDs. Less data moves between the server's memory and the storage device, and the server performs fewer paging and swapping operations. The result is lower latency and higher IOPS.

As with the Optane DC SSDs, Intel has been at the forefront of the effort to implement SCM as server memory. The vendor has a line of Optane DC persistent memory modules (PMMs), which plug into standard DIMM slots, alongside conventional memory, such as DRAM. The Optane DC PMMs support capacities up to 512 GB, with latency rates as low as 350 nanoseconds (ns). In comparison, DRAM latency runs between 80 ns and 100 ns, and Optane DC SSD latency comes in around 10 μs.
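A simple weighted-average model, using the latency figures above (90 ns as a DRAM midpoint, 350 ns for the PMMs), shows how the DRAM hit rate drives effective memory latency in a tiered configuration. The hit rates themselves are illustrative assumptions.

```python
# Effective memory latency when DRAM fronts SCM modules, using the
# latency figures quoted in the text. Hit rates are illustrative.
DRAM_NS, PMM_NS = 90, 350

def effective_latency_ns(dram_hit_rate):
    # Weighted average: hits served from DRAM, misses from the PMM tier.
    return dram_hit_rate * DRAM_NS + (1 - dram_hit_rate) * PMM_NS

for rate in (0.95, 0.80, 0.50):
    print(f"{rate:.0%} DRAM hits -> {effective_latency_ns(rate):.0f} ns")
# 95% -> 103 ns, 80% -> 142 ns, 50% -> 220 ns
```

Even at a modest 50% DRAM hit rate, the blended latency stays far below the roughly 10 μs of a PCIe-attached SCM drive, which is the case for putting SCM on the memory bus rather than behind the storage stack.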

Intel recommends its Optane DC PMMs for supporting virtualization and VDI, along with other types of applications. The modules make it possible to consolidate servers and support more VMs per server, while accelerating VM storage. However, to take full advantage of the PMMs, applications must be modified.
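To see what "modified" means in practice, the sketch below uses plain file-backed memory mapping to store data with ordinary byte-level writes, which is the access pattern persistent memory programming builds on. In a real deployment the file would live on a DAX-mounted filesystem backed by persistent memory modules; the `/tmp` path here is a stand-in so the sketch runs anywhere, without the persistence guarantees of true PMEM.

```python
import mmap
import os

# Minimal sketch of byte-addressable persistence via a memory-mapped file.
# On real persistent memory, the file would sit on a DAX-mounted filesystem;
# this path is a stand-in for illustration only.
PATH = "/tmp/state.bin"
SIZE = 4096

with open(PATH, "wb") as f:
    f.truncate(SIZE)                 # preallocate the mapped region

fd = os.open(PATH, os.O_RDWR)
mm = mmap.mmap(fd, SIZE)
mm[0:5] = b"hello"                   # store data with plain byte-level writes
mm.flush()                           # flush to the backing medium
mm.close()
os.close(fd)

with open(PATH, "rb") as f:
    print(f.read(5))                 # b'hello' survives the unmap
```

Libraries such as Intel's PMDK wrap this pattern with the cache-flush and fencing primitives needed for crash consistency on real hardware; the point of the sketch is only that persistence becomes a load/store concern rather than a block I/O concern, which is why applications need modification.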


SCM and the VDI workload

It's possible that SCM could be used in place of DRAM to support applications that must be up and running quickly in the event of a server restart. In most cases, SCM as server memory would be used primarily to complement DRAM to extend memory and provide persistence. In this way, you can maximize application performance, while still reducing costs. At the same time, SCM will likely continue its trajectory as storage cache and as a NAND replacement.

It's early days for SCM technology, and the industry is dynamic. Most of the initial focus has been on SCM as storage cache or as a flash replacement, but with the release of the Optane DC PMMs, the picture is changing. IT teams must decide whether SCM is worth the additional investment to support storage for VDI workloads. Much will depend on how the SCM industry evolves and whether SCM performance can justify its cost. SCM's use will also depend on the VDI workloads themselves and whether the COVID-19 pandemic continues to require a large remote workforce.

