Enterprise storage market to focus on data use in 2018

The focus of enterprise storage will shift to data management and analysis in 2018, spurring IT departments to rethink architecture that must stretch to edge and cloud.

Experts predict a shift in focus for the enterprise storage market in 2018 as IT departments turn their attention to managing and analyzing their explosively growing data.

Storage executives, technologists and analysts said data from industrial internet of things (IoT) applications, automobile systems, video surveillance and other programs with limited connectivity will drive a rethinking of storage and data management architectures that extend from the core to edge devices and the cloud.

Other predictions for the enterprise storage market in 2018 include increased use of artificial intelligence and machine learning in storage systems, container-based virtualization with persistent storage, and more scale-out data protection and consolidated backup appliance options.

Enterprise storage market: Fewer companies, more focus on data

Mark Bregman, CTO, NetApp: Storage is important, but it's not the center of the conversation. Increasingly, the focus is shifting to data. Over the next several years, we'll see the emergence of what I call self-aware data -- data together with its very rich metadata -- which begins to control the processing, as opposed to today, where we have the processing really control the data.
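
As a rough illustration of that idea, consider a record whose attached metadata, rather than the application, selects how it is processed. This is a minimal, hypothetical sketch; the DataObject class, policy names and handlers are invented for illustration only:

```python
from dataclasses import dataclass, field

@dataclass
class DataObject:
    payload: bytes
    metadata: dict = field(default_factory=dict)

# Hypothetical policy handlers; a real system would plug in tiering,
# analytics and governance services here.
HANDLERS = {
    "archive": lambda obj: print("moving to cold tier"),
    "analyze": lambda obj: print("routing to analytics pipeline"),
    "restrict": lambda obj: print("enforcing access controls"),
}

def process(obj: DataObject) -> None:
    # The data's own metadata drives the processing, not the application.
    for action in obj.metadata.get("policies", []):
        HANDLERS[action](obj)

record = DataObject(b"...", {"policies": ["analyze", "archive"]})
process(record)
```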

We're going to see the emergence of blockchain-like technology used more and more to help manage data as it moves around the different storage domains. The reason is that people are increasingly concerned about data provenance, data governance, access control and audit trails. All of those things can be managed much more easily, without requiring a trusted central authority, if we figure out how to adapt blockchain for that purpose. We'll see a lot of discussion in that area in 2018.
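
One way to picture a blockchain-like audit trail without a central authority is a hash-chained log, where each entry commits to the hash of the previous one, so tampering with any record breaks the chain. The sketch below is a simplified illustration of the principle, not any vendor's implementation:

```python
import hashlib
import json
import time

def add_entry(chain: list, event: dict) -> None:
    # Each entry embeds the previous entry's hash, linking the log.
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"event": event, "time": time.time(), "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append({**body, "hash": digest})

def verify(chain: list) -> bool:
    # Recompute every hash; any edited or reordered entry fails the check.
    for i, entry in enumerate(chain):
        body = {k: entry[k] for k in ("event", "time", "prev")}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["hash"] != digest:
            return False
        if i and entry["prev"] != chain[i - 1]["hash"]:
            return False
    return True

log: list = []
add_entry(log, {"action": "moved", "dataset": "sales", "to": "cloud-archive"})
add_entry(log, {"action": "read", "dataset": "sales", "by": "analytics-svc"})
assert verify(log)
```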

Jeff Kato, senior analyst, Taneja Group: There will be fewer storage-only companies at the end of 2018 than at the start. They will either be bought or disappear. There are multiple companies with the same architecture that are on the outside looking in, and if they haven't already been bought, they're not going to hit critical mass.

Also, Cisco will purchase one or two storage companies in 2018 because the rest of its IT competitors all have significant storage offerings. It needs a full on-premises infrastructure portfolio to compete, because customers are looking for simplification.

Marc Staimer, president, Dragon Slayer Consulting: The hottest storage technology this year will be unstructured data management, also known as content data management or cognitive data management. It used to be known as copy data management. It's the management of data in relation to the storage, and the movement of data back and forth between storage systems. It also covers data protection, archiving and a variety of other tasks. The biggest problem in the data center isn't high-performance data. It's unstructured data. How are you going to manage petabytes and petabytes of unstructured data? It's growing at three times the rate of structured data.

More data on the edge

Bregman, NetApp: During 2018, we'll see an increasing number of enterprises rethinking their data and storage management architecture to span uniformly from the edge to the cloud. That's opposed to today, where they're typically siloed between three different regions -- edge, core and cloud. It's being driven by two factors that are closely related. More and more of the data that's relevant to the business is being generated at the edge, and the connectivity at the edge isn't adequate to bring all that data back into the core. In the past, people would fret and think, 'Oh my God, we're going to have to find bigger bandwidth.' But increasingly, we're realizing that it's just a new architecture where there's going to be much more processing of the data at the edge, and then a subset of that data will get shipped back to the core or even to the cloud where analytics or archiving may take place.
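
A rough sketch of that pattern: the edge reduces raw readings to a compact summary plus any anomalous samples, and only that subset travels back to the core or cloud. The field names and the outlier threshold below are assumptions chosen for illustration:

```python
import statistics

def summarize_at_edge(readings: list[float]) -> dict:
    """Reduce a raw sensor stream to a small payload for the core."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        # Ship only the anomalous raw samples, not the full stream.
        "outliers": [r for r in readings if r > 100.0],
    }

raw = [42.1, 43.0, 41.8, 120.4, 42.5]   # e.g., one interval of sensor data
payload = summarize_at_edge(raw)        # a few bytes instead of the full stream
print(payload)
```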

Phil Bullinger, senior vice president and general manager of data center systems, Western Digital: Data architectures are moving to edge-core and away from data center-centric architectures, driving increased focus on specialized storage solutions at both the edge and the core. In 2018, fast data will be fueled by the rapid rise in real-time data sources at the edge, requiring storage solutions that can capture and deliver access to time-sensitive data sets to yield actionable insights. Organizations will also be collecting increasing amounts of long-term data for machine learning and AI-based analytics, storing it at the core in durable architectures that support fast access from a multitude of applications. Scalable, multi-petabyte hybrid-cloud data lakes backed by object storage will house this big data.

Hu Yoshida, CTO, Hitachi Vantara: The explosion of cloud and IoT data will drive acceptance of lower cost storage nodes that are simpler to manage, more reliable and much denser. And we must be more selective in what we decide to store. Storage systems will become part of a storage ecosystem with prescriptive analytics, data engineering, cognitive technologies, machine learning and integration of new data streams. The storage itself could be a simple node in this ecosystem, with webscale interfaces, erasure coding and lots of high-density storage. The other functions that we normally associate with enterprise storage will be handled by other nodes or by microservices in the storage ecosystem.
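
Erasure coding is part of what lets such simple, dense nodes stay reliable. The sketch below shows the idea with single-parity XOR across fragments spread over nodes; production systems use Reed-Solomon-style codes that survive multiple simultaneous failures, but the rebuild principle is the same:

```python
from functools import reduce

def encode(fragments: list[bytes]) -> bytes:
    """XOR equal-size fragments together to produce a parity fragment."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), fragments)

def rebuild(survivors: list[bytes], parity: bytes) -> bytes:
    """Recover a single lost fragment from the survivors plus parity."""
    return encode(survivors + [parity])

data = [b"AAAA", b"BBBB", b"CCCC"]   # fragments stored on three nodes
parity = encode(data)                # parity stored on a fourth node

# If the node holding fragment 1 is lost, rebuild it from the rest.
assert rebuild([data[0], data[2]], parity) == data[1]
```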

AI/machine learning changes game for data

Milan Shetti, GM of storage and big data, HPE: Machine learning/artificial intelligence is poised to unleash the next wave of disruption in the data center. There are three areas where storage stands to benefit by integrating machine learning-based predictive analytics. First, machine and deep learning techniques will solve cross-stack issues to deliver a radically different support experience that was unimaginable only a few years back. Second, there will be no more manual fine-tuning of storage arrays to provide best-in-class performance for workloads. Big data analytics will automate 90% of performance fine-tuning, freeing up time and resources. Third, over the last decade, public cloud vendors have enjoyed the benefits of resource optimization. With the help of heuristics driven by global learning, enterprises will start to achieve real impact toward optimizing their existing infrastructure resources.
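
As a toy illustration of that kind of predictive fine-tuning, one could fit a trend to recent latency telemetry and act before a service-level threshold is crossed. The SLO value, window and forecast horizon below are assumptions for the sketch, not figures from any vendor's product:

```python
import numpy as np

def predict_breach(latency_ms: list[float], slo_ms: float = 5.0,
                   horizon: int = 12) -> bool:
    """Fit a linear trend to hourly latency and forecast `horizon` hours out."""
    hours = np.arange(len(latency_ms))
    slope, intercept = np.polyfit(hours, latency_ms, 1)  # linear trend
    forecast = slope * (len(latency_ms) + horizon) + intercept
    return forecast > slo_ms

telemetry = [2.1, 2.3, 2.2, 2.6, 2.9, 3.1, 3.4]   # hourly p99 latency (ms)
if predict_breach(telemetry):
    print("rebalance workloads before the SLO is breached")
```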

Danny Cobb, corporate fellow; vice president of global technology strategy, Dell Technologies: We will see more use of artificial intelligence and training to optimize data placement in data center storage, to optimize tiering implementations, for example, in a hybrid platform, to optimize and balance some of the internals of storage arrays for flash wear-out and things like that, and for proactive customer service. And as we go further into the year, that'll start to translate into some real projects that start to extract more value from the infrastructure and deliver more on digital transformation.
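
A hedged sketch of tiering-style data placement: score each object by recent access heat and assign it to a flash, hybrid or archive tier. A learning system would derive the weights and thresholds from telemetry; here they are hand-picked assumptions that only show the shape of the decision:

```python
def choose_tier(accesses_per_day: float, days_since_access: int) -> str:
    """Map an object's access pattern to a storage tier (illustrative only)."""
    heat = accesses_per_day / (1 + days_since_access)
    if heat > 10:
        return "flash"       # hot: low-latency tier
    if heat > 0.5:
        return "hybrid"      # warm: mixed tier
    return "object-archive"  # cold: capacity tier

print(choose_tier(accesses_per_day=50, days_since_access=0))    # flash
print(choose_tier(accesses_per_day=0.1, days_since_access=90))  # object-archive
```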

Ritu Jyoti, research director, IDC: About 8% to 10% of enterprise storage systems will employ some form of artificial intelligence -- such as machine learning algorithms to support self-healing/self-configurable storage -- to improve enterprise productivity, manage risks and drive overall cost reduction.

Container use with persistent storage

Yoshida, Hitachi Vantara: Container-based virtualization with persistent storage will gain wide acceptance in 2018. Up to now, the only drawback to containers has been the lack of persistent storage. Traditional storage can be exposed to a container or group of containers from an external mount point over the network, such as SAN or NAS, using standard interfaces. However, that storage may not have external APIs that can be leveraged by an orchestrator like Docker Swarm or Kubernetes for additional services, such as dynamic provisioning. Those services would have to be set up manually, where APIs are available at all. Orchestrators like Docker Swarm, Kubernetes and Mesosphere manage application containers across clusters. Through storage plug-ins, these orchestrators can now help orchestrate the storage in containers. Storage clusters can provide persistent storage with high availability and enterprise storage services, residing either outside or inside the container.
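
With a storage plug-in in place, dynamic provisioning reduces to an API call against the orchestrator. A minimal sketch using the official Kubernetes Python client follows; the StorageClass name fast-ssd is a placeholder, and the example assumes a cluster with a suitable provisioner already installed:

```python
from kubernetes import client, config

config.load_kube_config()  # assumes a local kubeconfig pointing at a cluster

# Claim 10Gi of storage; the named StorageClass (placeholder) tells the
# cluster's storage plug-in to provision the volume dynamically.
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="fast-ssd",  # hypothetical StorageClass name
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```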

Robert Haas, CTO of storage, Europe, IBM Research: We expect ubiquitous use of containerized environments, such as Kubernetes and Docker, for most traditional and new types of applications, including those that use unstructured NoSQL databases, because we now have the capability to seamlessly integrate persistent storage into these containerized environments. The work started over a year ago to integrate storage so that it can be provisioned in a completely automated fashion from the containerized environments. We will see more and more support for extending the capabilities of the APIs so that they can [provide] additional services you expect from an enterprise storage system, such as encryption or quality of service.

Jyoti, IDC: In 2018, enterprises will move beyond the early phases of evaluating and deploying containers for stateful applications. About 8% to 10% of container instances will be stateful, deploying both legacy and cloud-native, microservices-based applications. This will fuel growth of enterprise-grade, container-native persistent storage and drive standardization of container schedulers and orchestration frameworks, along with enhanced security and quality of service (QoS) support at the container level.

Primary/secondary storage consolidation

Randy Kerns, senior strategist and analyst, Evaluator Group: Vendors like Rubrik and Cohesity offer scale-out data protection, but you'll probably see other larger, better-known vendors with offerings coming out this year. Those scale-out data protection systems really are scale-out NAS systems. You're effectively putting software on each node that can serve what we would previously have thought of as a media server or data mover function. That gives you a high degree of parallelism, where you can scale capacity and the data movement functions in parallel. There will be an opportunity to address a big customer concern: as capacity grows, customers can't afford the capacity-based licensing charged by some backup software. But it is a different approach, with different software running for the most part. It is a heavy lift for many customers already invested in their current data protection environments. As more vendors enter this space, there will be more capabilities to merge with current environments, so it will be less of a disruption.
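
The parallelism Kerns describes can be pictured as one data mover per node, fanning backup streams out concurrently so throughput scales with node count. In this sketch, backup_to_node is a hypothetical stand-in for a node's media-server call:

```python
from concurrent.futures import ThreadPoolExecutor

def backup_to_node(node: str, dataset: str) -> str:
    # Stand-in for streaming the dataset to this node's local data mover.
    return f"{dataset} -> {node}: ok"

NODES = ["node-1", "node-2", "node-3", "node-4"]
DATASETS = ["vm-cluster-a", "nas-share-b", "db-logs-c", "mail-d"]

# Each dataset is handled by a different node's mover concurrently, so
# adding nodes adds both capacity and data-movement bandwidth.
with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
    for result in pool.map(backup_to_node, NODES, DATASETS):
        print(result)
```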

Mohit Aron, CEO and founder, Cohesity: Consolidated backup appliances will become mainstream. The traditional practice of putting together enterprise backup solutions by using multiple siloed products from potentially multiple vendors will be frowned upon. Cohesity started this trend, but now incumbents such as Commvault and Dell EMC are responding with similar approaches.

Brian Biles, CEO, Datrium: 2018 will be a breakout year for the consolidation of primary and backup data. Many companies have attempted to combine primary and backup storage into a single cost-optimized system. To date, none have succeeded, but leading hyper-converged infrastructure providers are still trying.
