
IBM stretches its Elastic Storage line to speed AI, big data

New IBM storage is designed to support AI software that analyzes data and delivers inference results. The disk-based Elastic Storage System 5000 scales to yottabyte capacities.

IBM is expected to unveil several new and updated storage offerings this week to help large businesses build infrastructure that supports AI-optimized software for analyzing and organizing data.

The centerpiece is the new IBM Elastic Storage System (ESS) 5000, a data lake system capable of delivering up to 55 GBps of throughput from a single eight-disk-enclosure node. IBM said the ESS 5000 scales to configurations of up to 8 yottabytes. The new IBM storage system is particularly suited to data collection and longer-term storage, according to IBM product documents viewed by SearchStorage.

The forthcoming products underscore the growing use of object storage as a target for AI and high-density analytics. IBM also built an enhanced version of its Elastic Storage System 3000 that allows access and data movement between IBM Spectrum Scale and object storage, both on premises and in the cloud. The system adds a Data Acceleration for AI feature to IBM Spectrum Scale software. IBM claims the feature lowers costs by eliminating the need for an extra cloud copy of the data, moving data from lower-cost object storage either automatically or under administrator control.

IBM's AI and analytics offerings form a cornerstone of its overall corporate strategy, said Steve McDowell, a senior analyst at Moor Insights & Strategy, based in Austin, Texas.

"The ESS 5000 is a product that is only designed to solve big data problems for big data customers," McDowell said. "There are only a handful of IT shops in the world today who need the combination of 55 GBps performance that is scalable to yottabyte capacities. Those that do need it are almost all IBM customers."

IBM Elastic Storage: Mainframe to cloud

IBM ESS 5000 will compete with the Dell EMC Isilon A2000 and NetApp FAS 6000 big data storage systems.

The underpinnings of the 2U ESS 3000 building block stem from IBM's long expertise in mainframes and traditional high-end storage, McDowell said. ESS 3000 systems are based on the IBM FlashSystem NVMe flash platform.


"The ESS 3000 building block addresses the kind of performance required for enterprise AI workloads, and the data lakes that emerge around those workloads, for organizations building out those capabilities."

IBM also upgraded its Cloud Object Storage system, adding support for shingled magnetic recording hard disk drives and expanding its capacity to 1.9 petabytes (PB) in a 4U enclosure.
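
To put that density in perspective, here is a back-of-the-envelope sketch in Python. The 1.9 PB-per-4U figure comes from IBM; the assumption that ten 4U enclosures fit in a standard 42U rack is illustrative only.

    # Rough rack-level density estimate for the upgraded IBM Cloud Object Storage.
    # The 1.9 PB per 4U enclosure figure is IBM's; the rack layout (ten 4U
    # enclosures in a 42U rack) is an assumption for illustration.

    PB_PER_ENCLOSURE = 1.9
    ENCLOSURES_PER_RACK = 10          # assumed: 10 x 4U = 40U of a 42U rack

    rack_capacity_pb = PB_PER_ENCLOSURE * ENCLOSURES_PER_RACK
    print(f"Approximate raw capacity per rack: {rack_capacity_pb:.0f} PB")

    target_pb = 100                   # hypothetical 100 PB object store
    racks_needed = -(-target_pb // rack_capacity_pb)   # ceiling division
    print(f"Racks needed for {target_pb} PB: {racks_needed:.0f}")

Under those assumptions, a single rack holds roughly 19 PB of raw capacity.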

Aside from the storage hardware, McDowell said the Spectrum Scale enhancements include some compelling features, among them the new Data Acceleration for AI capability to help balance data between storage tiers.

"One of the biggest challenges of hybrid cloud is keeping data where you need it, when you need it. It's also a costly challenge, as the egress charges encountered when moving data out of public cloud can become very expensive," McDowell said.

Greater flexibility to move data across different storage tiers should appeal to corporate IT shops that need to keep sensitive data on premises or perhaps in a hybrid cloud.

"No one wants to leave all their sensitive or strategic data in the cloud," said one analyst familiar with the company's plans. "If you are coming up with the next vaccine for the coronavirus that could end up being worth $3 billion, you are not going to put that up in anyone's public cloud. Especially massive data sets that can be hard to manage across multiple environments."

IBM has supported data movement to other vendors' storage for years, and recently added support for Dell EMC PowerScale and NetApp filers to its IBM Spectrum Discover metadata management software.

The AI software IBM has added makes it easier to locate and manage data spread across multiple vendors' clouds, and makes a difference in the way large enterprises build object storage and discover information, one analyst said.

IBM also upgraded its Spectrum Discover Policy Engine to optimize data migration to less expensive archive tiers.
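
Policy engines of this kind generally evaluate file metadata, such as last-access time, against administrator-defined rules. The following is a minimal, hypothetical sketch of such a rule in Python, not IBM's implementation; the 180-day threshold and file paths are invented for illustration.

    # Hypothetical age-based rule of the kind a migration policy engine evaluates.
    # The threshold and paths are illustrative, not IBM's implementation.

    from datetime import datetime, timedelta

    ARCHIVE_AFTER = timedelta(days=180)   # assumed: archive files untouched for 6 months

    def archive_candidates(files, now=None):
        """Yield paths whose last access is older than the archive threshold."""
        now = now or datetime.utcnow()
        for path, last_access in files:
            if now - last_access > ARCHIVE_AFTER:
                yield path

    inventory = [
        ("/lake/logs/2019/app.log", datetime(2019, 11, 2)),
        ("/lake/models/current.ckpt", datetime.utcnow()),
    ]
    print(list(archive_candidates(inventory)))   # only the stale log qualifies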

IBM enhances Red Hat storage

Along with the IBM Elastic Storage hardware, IBM also debuted the Storage Suite for IBM Cloud Paks, which combines open source Red Hat storage with IBM Spectrum storage software.

Red Hat is a key part of IBM's cloud strategy. IBM acquired Red Hat in a $34 billion deal in 2019, vowing to run it as an independent engineering arm.

Offerings in the new bundle include Red Hat Ceph Storage, Red Hat OpenShift Container Platform, IBM Spectrum Virtualize, IBM Spectrum Scale, IBM Cloud Object Storage and IBM Spectrum Discover.

IBM claims Spectrum Discover can search billions of files or objects in 0.5 seconds and automatically deploy that data on Red Hat OpenShift. The product is intended to improve users' insight into their data and eliminate the need to rescan it. A storage data catalog can be integrated with IBM Cloud Pak for Data with one click.
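
The speed claim rests on querying a pre-built metadata catalog rather than rescanning the storage itself. The sketch below illustrates that general idea with a hypothetical in-memory index; it is not the Spectrum Discover API, and the record fields and tags are invented for illustration.

    # Generic illustration of a metadata catalog: files are scanned once at ingest,
    # and later searches hit an index instead of walking the storage again.
    # This is a hypothetical sketch, not the IBM Spectrum Discover API.

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class FileRecord:
        path: str
        size_bytes: int
        tags: tuple

    class MetadataCatalog:
        def __init__(self):
            self._by_tag = defaultdict(list)   # inverted index: tag -> records

        def ingest(self, record: FileRecord):
            """Index a file's metadata once, at scan time."""
            for tag in record.tags:
                self._by_tag[tag].append(record)

        def search(self, tag: str):
            """Look up the index; no storage rescan is needed."""
            return self._by_tag.get(tag, [])

    catalog = MetadataCatalog()
    catalog.ingest(FileRecord("/lake/genomics/run42.bam", 2_000_000_000, ("genomics", "raw")))
    catalog.ingest(FileRecord("/lake/imaging/scan7.dcm", 500_000_000, ("imaging",)))

    for rec in catalog.search("genomics"):
        print(rec.path, rec.size_bytes)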

Some of the AI-driven capabilities built into the new or enhanced offerings ease installation and maintenance. Integration with existing infrastructure will be a factor in convincing users to adopt the products in what figures to be challenging economic times this year and likely into next.

"Adoptability will be key with this," said another analyst familiar with the company's plans. "But the Fortune 50 to Fortune 100 companies are watching pennies these days and could be reluctant to spend money until they have a better idea of what the returns are going to be. With this virus, no one knows what they will need, or need over the long term."
