Pure Storage AIRI has expanded to five models, with three new hyperscale AIRI platforms designed to handle AI projects that encompass up to hundreds of terabytes of storage and petaflops of compute.
Pure Storage said the new AIRI systems pair its multitasking FlashBlade storage with Nvidia DGX supercomputers, connected by Mellanox Ethernet or InfiniBand switches.
A set of 1U network load balancers in Hyperscale AIRI presents multiple physical FlashBlades as a contiguous logical domain.
Pure also teamed with Cisco to launch FlashStack for AI, a rack-scale system that combines eight-way Nvidia Tesla V100 GPUs linked by NVLink interconnect with Cisco Unified Computing System C480 ML servers and Cisco Nexus switches.
The moves are the latest by Pure and other flash array vendors to retool storage gear for growing enterprise interest in AI and deep learning. Hyperscale AIRI follows on the heels of Pure AIRI Mini, a seven-blade building block Pure unveiled last year.
"We want data scientists to focus on the AI research and not on infrastructure, so we built these three new systems [to] have the right balance of compute, network and storage," said Brian Schwarz, a vice president of product management at Pure Storage, based in Mountain View, Calif.
Scalable AI storage
Enterprises buy Pure Storage AIRI as a preconfigured hardware stack. Software for AIRI includes the Nvidia Cloud Deep Learning stack, a scaling toolkit and a container registry of selected machine learning frameworks.
NVMe flash helps AI reach its potential by drastically reducing storage overhead. Riding a PCIe link, the NVMe protocol lets storage communicate directly with a host and avoid network hops.
Hyperscale AIRI includes two models built on Nvidia DGX-2 servers and one based on DGX-1 servers. The smallest is a single chassis that accommodates 15 of Pure's custom 17 TB NAND flash blades. The two larger systems are configured with 30 FlashBlade devices, but customers can add up to three chassis for 45 additional blades.
Chirag Dekate, a Gartner research director for servers and storage, called the larger Pure Storage AIRI systems a "performance-centric" approach to boosting scalability and throughput for AI and machine learning.
"Pure also is creating scale-out capabilities to go along with that performance. What Pure has done is to enable users to see storage as a unified high-performance layer for their workflows. That is quite unique and interesting," Dekate said.
Pure Storage FlashStack for AI is an outgrowth of the vendor's long-standing partnership with Cisco. It enables Cisco customers to build data pipelines with Pure Storage flash and Nvidia, using their existing servers.
"FlashStack for AI is a really good fit with FlashBlade, which is designed for high concurrency. We want lots of different clients and applications hitting FlashBlade at the same time. That's how you get the most out of the architecture," Schwarz said.