
Lustre-based DDN ExaScaler arrays receive NVMe flash

High-performance computing specialist DataDirect Networks spruced up its ExaScaler SFA-based flagship with nonvolatile memory express flash, and it bought Lustre from Intel.

DataDirect Networks has refreshed its Storage Fusion Architecture-based ExaScaler arrays, adding two models designed with nonvolatile memory express flash and a hybrid system with disk and flash.

In a related move, the high-performance computing storage vendor acquired the code repository and support contracts of Intel's open source Lustre parallel file system for an undisclosed sum. The Lustre file system is the foundation for DDN ExaScaler and GridScaler arrays.

The fourth version of DDN ExaScaler combines parallel file storage servers and Nvidia DGX-1 GPU systems with Storage Fusion Architecture (SFA) OS software. The SFA 200NV and SFA 400NV are 2U arrays with 24 slots for dual-ported nonvolatile memory express (NVMe) SSDs. The two differ in compute power: the SFA 200NV has one CPU per controller, while the SFA 400NV has two.

The arrays embed a 192-lane PCIe Gen 3 fabric to maximize NVMe performance. DDN claims the dense ExaScaler flash ingests data at nearly 40 GBps.

DDN also introduced the SFA7990 hybrid system, which allows customers to fill 90 drive slots with enterprise-grade SSDs and HDDs.

[Image: DataDirect Networks SFA 200NV, an NVMe flash system with parallel file system software for high-performance computing.]

AI and analytics performance driver

Adding NVMe is a natural fit for DDN, which provides scalable storage systems to hyperscale data centers that require lots of high-performance storage, said Tim Stammers, a storage analyst at 451 Research.

"NVMe is going to help drive performance on intensive applications, like AI and analytics. It makes storage faster, and in return, AI and analytics will drive the takeup of NVMe flash," Stammers said.

Data centers can buy DDN ExaScaler NVMe arrays as plug-and-play storage for AI projects. The DDN AI200 and AI400 provide as much as 360 TB of dual-ported NVMe storage in 2U, while the 4U AI7990 scales to 5.4 PB in a 20U configuration.

The AI turnkey appliances include performance-tested implementations of Caffe, CNTK, Horovod, PyTorch, TensorFlow and other established AI frameworks.

Customers can combine an SFA cluster with DDN's NVMe-based storage. Lustre presents file storage as a mountable capacity pool of flash and disk sharing a single namespace.
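On the client side, that single namespace is exposed through a standard Lustre mount. A minimal sketch of the client mount command, where the management server address (`mgs-node`), the file system name (`fsname`) and the mount point are placeholders rather than DDN-specific values:

```shell
# Mount a Lustre file system on a client node.
# Substitute your management server (MGS) address and file system name.
mount -t lustre mgs-node@tcp:/fsname /mnt/lustre
```

Once mounted, the pooled flash and disk capacity appears to applications as one ordinary POSIX file tree under /mnt/lustre.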

The DDN ExaScaler upgrade provides dense storage in a compact form factor to keep acquisition costs within reach of most enterprises, said James Coomer, vice president for product management at DDN, based in Chatsworth, Calif.

"At this early stage, customers don't necessarily know where they're going with AI," Coomer said. "They may need more flash for performance. For AI, they need an economical way to hold data that's relatively cold. We give them a choice to expand either the hot flash area or augment it in the second stage with hard-drive tiers and anywhere in between."

Recent AI enhancements to the SFA operating system include declustered RAID and NVMe tuning. Declustered RAID speeds drive rebuilds by spreading data and parity across a large pool of drives, so a failed drive is reconstructed by reading from many surviving drives in parallel rather than from a single small RAID group.
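The rebuild speedup comes from that parallelism: in a fixed RAID group, only the handful of group-mates of a failed drive supply rebuild reads, while declustered placement scatters each stripe over a random subset of a large pool, so nearly every surviving drive contributes. A minimal Python sketch of the placement idea (pool size, stripe width and stripe count are illustrative, not DDN's implementation):

```python
import random

POOL = 40          # drives in the declustered pool
STRIPE_WIDTH = 8   # data + parity drives per stripe
STRIPES = 10_000

random.seed(0)
# Each stripe lands on a random subset of the pool, so data and
# parity are evenly scattered rather than confined to one RAID group.
placement = [random.sample(range(POOL), STRIPE_WIDTH) for _ in range(STRIPES)]

failed = 0
# Rebuilding the failed drive touches every stripe that used it;
# the rebuild reads are spread across the surviving drives.
helpers = {d for stripe in placement if failed in stripe
             for d in stripe if d != failed}
print(len(helpers))  # nearly POOL - 1 helpers, vs STRIPE_WIDTH - 1 in fixed groups
```

With many stripes, every other drive in the pool ends up sharing at least one stripe with the failed drive, so the rebuild fan-out approaches the full pool rather than one group's width.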

Inference and training investments planned

DDN's Lustre acquisition includes the open source code repository, file-tracking system and existing support contracts from Intel. Coomer said DDN plans to make investments to enable Lustre to support inference and training of data for AI workloads. The open source code will remain available for contributions from the community.

DDN is a prominent contributor to Lustre code development, and it has shipped Lustre-based storage systems for nearly two decades.

"DDN says they're going to make Lustre easier to use," Stammers said. "What they're banking on is that it will lead more enterprises to use Lustre for these emerging workloads."
