Hot data storage technology trends for 2017

Learn what's hot and what's not-quite-so-hot on our list of data storage technology trends for the forthcoming year.

Rodney Brown, Rich Castagna, Paul Crocetti, Garry Kranz, Sonia Lelii, James Alan Miller and Carol Sliwa also contributed to this story.

It's our favorite season. For the 14th year running, we get to compile the data storage technology trends we believe will have the largest impact on the world of storage in the coming year. Welcome to Hot Techs 2017!

As in past years, there's nothing bleeding-edge or impractical here, only newer storage tech that's been proven practical. Hence, while our list of storage technology trends represents the best and brightest the storage industry has to offer, it only includes technologies you can buy and deploy today.

Climb aboard, fasten your seatbelts and get ready to discover our take on what technologies will have the most profound effect on storage shops in 2017.

Cloud-to-cloud backup

Just because data's in the cloud doesn't mean it's adequately protected. Cloud-to-cloud backup fills that gap by allowing enterprises to copy data stored on one cloud service to another cloud. It's among the storage technology trends poised to have a big year in 2017, as vendors continue to add capabilities and user interest in the services grows.

Storage expert Brien Posey thinks cloud-to-cloud backup will likely become the norm by 2018, and he sees two reasons for its growing popularity. "First, backup technology is finally starting to catch up to the public cloud, making it more practical to do cloud-to-cloud backups," Posey wrote in an email. "Second, and this is the big one, is the economic factor."

For organizations moving data to the public cloud because it's cheaper, backing up to another cloud provider makes economic sense and offers the advantages of off-site backup.

Posey still sees a role for local backups and storage, but thinks local storage requirements will likely decrease over the next few years.

"We may see cloud backup moving to be the de facto standard, with snapshots retained on-prem for user error type restores," storage expert and consultant Chris Evans wrote in an email. "Backup software vendors need [to] and have started to adapt. The biggest losers could be backup appliances in this instance."

Specifically, with private-to-public cloud backups, there are tools to back up and restore applications into the cloud, saving money and improving operations, Evans added. And, he noted, software-as-a-service (SaaS) applications and data also need backing up, which is most easily done through the cloud.

As an internet-based software delivery model, SaaS has become a major option for businesses looking to remove the overhead of providing a range of IT services (e.g., email, collaboration, CRM and so on) themselves. So, as SaaS grows and serious work moves to the cloud, more organizations are recognizing the value of cloud-to-cloud backup, said storage consultant Jim O'Reilly.

"Rather than returning data to the in-house data center, economics and operational efficiency suggest that a formal backup of cloud data into another cloud namespace is the best mechanism for totally protecting data in the cloud, whether from SaaS efforts or from owned applications," O'Reilly explained. "Increasing comfort with the cloud as a site for running serious apps and the increasing use of SaaS will make this a must-have approach for larger IT operations through 2017."

Major players in SaaS-oriented cloud-to-cloud backup include Asigra Cloud Backup, Barracuda Cloud-to-Cloud Backup, Datto Backupify and Dell EMC Spanning. Barracuda, among those that enhanced their cloud-to-cloud backup platforms in 2016, reduced how long it takes to complete incremental backups of hosted versions of Microsoft applications.

Cloud-to-cloud backup is essential for protecting data created by SaaS applications. In May, for instance, a Salesforce outage prevented customers from accessing their data for several hours. And while SaaS vendors perform their own backups, those backups are only for the vendor's protection. If, for example, one of your SaaS users accidentally deletes something, without your own backup copy, you would have to pay Salesforce to restore the data -- and the cost of that starts at $10,000.

When considering a cloud-to-cloud or SaaS backup plan, Evans advised administrators to apply the same standards to backup and restore processes as they would to on-premises deployments. You can do this by, for example, testing how well a cloud-to-cloud backup service meets recovery time and recovery point objectives.
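
One way to make that kind of testing routine is to script it. The sketch below checks whether the newest backup copy for each protected SaaS dataset falls within a target recovery point objective. The list_backups helper and the RPO targets are hypothetical placeholders; a real check would query whatever reporting API your cloud-to-cloud backup service actually exposes.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical RPO targets per protected dataset (illustrative values only).
RPO_TARGETS = {
    "salesforce-prod": timedelta(hours=4),
    "office365-mail": timedelta(hours=24),
}

def list_backups(dataset):
    """Placeholder: return completion timestamps of recent backups for a dataset.

    In practice this would call the backup provider's reporting API;
    hard-coded sample data keeps the sketch self-contained.
    """
    sample = {
        "salesforce-prod": [datetime.now(timezone.utc) - timedelta(hours=2)],
        "office365-mail": [datetime.now(timezone.utc) - timedelta(hours=30)],
    }
    return sample.get(dataset, [])

def check_rpo(dataset, target):
    """Return True if the newest backup is younger than the RPO target."""
    backups = list_backups(dataset)
    if not backups:
        print(f"{dataset}: no backups found -- RPO violated")
        return False
    age = datetime.now(timezone.utc) - max(backups)
    ok = age <= target
    print(f"{dataset}: newest backup is {age} old (target {target}) -> {'OK' if ok else 'VIOLATED'}")
    return ok

if __name__ == "__main__":
    for name, target in RPO_TARGETS.items():
        check_rpo(name, target)
```

Recovery time objectives are harder to automate, but periodic test restores into a sandbox account serve the same purpose.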

Containers

Propelled by technology advances that allow application microservices to directly consume persistent storage, container virtualization made significant inroads in enterprise storage in 2016. We fully expect open source containerization to remain one of the hot storage technology trends in 2017 as well, as container technology has advanced in key areas such as data protection, persistent storage consumption and portability.

Development and testing remain the dominant container uses, but experts say storage admins are also learning to selectively manage Docker instances. "The biggest change, in one word, is persistence," said Henry Baltazar, a research director for storage at IT advisory firm 451 Research. "Containers have moved beyond ephemeral storage to persistent storage to hold data and protect your applications."
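
Here is what that persistence looks like in practice -- a minimal sketch assuming the Docker SDK for Python (the docker package) is installed and a Docker daemon is running; the image and volume names are arbitrary examples, not anything from a specific product. Data written under the mount point survives container removal because it lives in a named volume rather than in the container's writable layer.

```python
import docker  # Docker SDK for Python: pip install docker

client = docker.from_env()

# Create (or reuse) a named volume -- this is the persistent piece.
client.volumes.create(name="appdata")

# Run a throwaway container that writes into the volume.
client.containers.run(
    "alpine",
    command=["sh", "-c", "echo hello > /data/greeting.txt"],
    volumes={"appdata": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

# A second, unrelated container can read the same data later,
# because the volume outlives any container that mounts it.
output = client.containers.run(
    "alpine",
    command=["cat", "/data/greeting.txt"],
    volumes={"appdata": {"bind": "/data", "mode": "ro"}},
    remove=True,
)
print(output.decode().strip())  # -> "hello"
```

Storage vendors plug into the same mechanism through Docker volume plugins, which is where the array integrations discussed below come in.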

Although open source Docker rival CoreOS released the first commercial version of its Rocket runtime for Linux containers this year, Docker remains the container kingpin -- claiming more than 5 million software downloads and 650,000 registered users. Application teams use Docker to rapidly develop, ship and spawn applications inside containers. The closer those "Dockerized" applications move to real-time deployment, the greater the need to manage and provision Docker storage in stateless containers.

The Docker runtime engine was originally geared for Linux-based storage, but with Windows Server 2016 (released in September), Microsoft now allows admins to manage Docker virtualization on Windows servers. Microsoft also added its own container runtime to its latest server OS, enabling Microsoft shops to launch Windows-based containers on Windows Server 2016 server hardware or inside Hyper-V virtual machines.

While hardware virtualization is here to stay, containerization extends the concept further by virtualizing the operating system itself, thereby allowing workloads to share the underlying code and dependent libraries. Enterprises with highly virtualized storage could therefore deploy hundreds or perhaps thousands of containers on a single node, all running as lightweight instances.

Enabling persistent storage in containers is a top priority for storage vendors, said Greg Schulz, founder and senior advisor at IT infrastructure firm Server StorageIO. "Support for stateless and stateful containers will be the norm within 18 months. Containers for Docker, Linux and Windows will become a more viable unit of compute, joining physical bare metal along with other software-defined virtual machines and cloud machine instances."

Legacy storage vendors Dell EMC, Hewlett Packard Enterprise (HPE), Hitachi Data Systems, IBM and NetApp are differentiating their storage arrays to deploy and manage Docker environments at a large scale. Portworx, Rancher Labs and StorageOS are among container software startups tackling data management and secure migration of container data between server nodes.

Red Hat added Red Hat Gluster software-defined storage as a persistent storage back end for Linux-based application containers. Even virtualization giant VMware has joined the fray. VMware Integrated Containers permit customers to run containers in vSphere.

As with any new technology, the hype cycle for containers has outpaced actual deployment. That means enterprises need to move cautiously as they connect the dots between containers and storage management, Baltazar said.

"Enterprises aren't going to start running a bunch of Oracle database apps in a container, but there are areas where containers are important," Baltazar noted. "Mobile apps and analytics are ideal for containers. You get really powerful resource allocation and the ability to do provisioning very rapidly."

High-capacity flash

Samsung introduced a 15 TB 2.5-inch SAS solid-state drive (15.36 TB actual capacity) in 2015. That drive, which started shipping last spring, is currently the largest-capacity enterprise SSD available and is now beginning to show up in all-flash arrays from HPE and NetApp. Not to be outdone, Seagate unveiled a 60 TB SAS SSD in a 3.5-inch form factor at this year's Flash Memory Summit and is now partnering with HPE to move it into mass production.

Increasing drive capacity to previously unimagined heights is the latest of the hot storage technology trends in the flash industry. If that sounds familiar, it should. Just like the early days of hard disk drives, SSD vendors are now competing on high-density capacity levels, or who can cram the most and highest-density flash into a standard-sized drive.

Samsung bases its large drives on its 512 Gb V-NAND chip. The vendor stacks 16 of the 512 Gb V-NAND chips to forge a 1 TB package, 32 of which combine in a 32 TB SSD. Samsung pointed out its 32 TB SSD will enable greater density than Seagate's 60 TB SSD because 24 2.5-inch drives can fit into the same space as 12 3.5-inch SSDs. Both Samsung's 32 TB and Seagate's 60 TB SSDs will ship sometime in 2017. So it looks like Seagate will be number one in per-drive capacity for a while, at least until Samsung packs its higher-density flash technology into a 60 TB drive of its own.
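
For readers keeping score, the capacity math above works out as follows. This is a quick sketch that uses 1,024 GB per TB for simplicity; all of the figures come from the descriptions above.

```python
# Rough capacity math for Samsung's high-capacity SSD building blocks.
GIGABITS_PER_DIE = 512          # 512 Gb V-NAND chip
DIES_PER_PACKAGE = 16           # 16 chips stacked per package

gb_per_die = GIGABITS_PER_DIE / 8                        # 64 GB per chip
tb_per_package = gb_per_die * DIES_PER_PACKAGE / 1024    # 1 TB per package
drive_tb = tb_per_package * 32                           # 32 packages -> 32 TB drive

# Density comparison cited above: 2.5-inch vs. 3.5-inch drives in the same space.
samsung_shelf = 24 * 32          # 24 x 32 TB 2.5-inch SSDs = 768 TB
seagate_shelf = 12 * 60          # 12 x 60 TB 3.5-inch SSDs = 720 TB

print(f"{gb_per_die:.0f} GB/chip, {tb_per_package:.0f} TB/package, {drive_tb:.0f} TB/drive")
print(f"Samsung: {samsung_shelf} TB vs. Seagate: {seagate_shelf} TB in equivalent space")
```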

According to Russ Fellows, senior partner and analyst at Evaluator Group, it will eventually get to the point where spinning disk becomes secondary to SSDs. "I think when density starts going [up] so fast and the dollars per gig go down, it's going to be even cheaper than fast SATA drives," he said. "Pretty soon, [SSDs] will be cheaper than spinning disk. By 2020, spinning disk probably will be dead."

For Mark Bregman, senior vice president and CTO at NetApp, high-capacity SSDs offer huge reductions in space and savings in power and cooling.

"NetApp's all-flash arrays that use high-capacity SSDs can now address customer use cases where such significant relief has historically been impractical," Bregman wrote in a post on the NetApp Community blog. "From a space efficiency standpoint, you can't beat the new high-capacity all-flash arrays, which give you up to 321.3 TB of raw storage in a single 2U form factor. That means a single 2U system using 15.3 TB drives can provide more than 1 petabyte of effective capacity."

"To achieve the same [capacity] with even the highest density SFF hard disk drives," he continued, "would require 52U of rack space and 18 times as much power."

NVMe

Solid-state drives have been the largest market for nonvolatile memory express (NVMe) specified storage to date. Latency-lowering, performance-boosting NVMe technology is one of the storage technology trends now starting to heat up in enterprise storage systems, however.

Shipments of server models as well as hybrid and all-flash storage arrays that leverage NVMe technology should more than double in 2017 due to more affordable price points and an expanding ecosystem, according to Jeff Janukowicz, a research vice president at IDC.

"NVMe adoption is still in its infancy," Janukowicz wrote in an email. "However, we are at an inflection point, and we are beginning to see many more models that are starting to become available to the broader market."

NVMe is an alternative to the age-old SCSI for transferring data between hosts and peripheral storage. SCSI became a standard in 1986 when HDDs and tape were the data center's main storage media. The industry designed NVMe to support faster storage technology such as PCI Express (PCIe) SSDs. The NVMe specification, released in 2011, provides a streamlined register interface and command set to reduce the I/O stack's CPU overhead.
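
One small, practical consequence is that NVMe devices surface through their own driver stack rather than the SCSI stack. On a Linux host, for example, the kernel's nvme driver registers controllers under /sys/class/nvme, which the short sketch below simply enumerates; it assumes a Linux machine, prints a notice if no NVMe hardware is present, and falls back gracefully where sysfs attributes differ by kernel version.

```python
from pathlib import Path

# NVMe controllers registered by the Linux nvme driver appear here,
# separately from SCSI devices (which live under /sys/class/scsi_device).
NVME_SYSFS = Path("/sys/class/nvme")

def read_attr(ctrl: Path, name: str) -> str:
    """Read a sysfs attribute if present; attributes vary by kernel version."""
    attr = ctrl / name
    return attr.read_text().strip() if attr.exists() else "n/a"

if not NVME_SYSFS.exists():
    print("No NVMe controllers found (or not a Linux host).")
else:
    for ctrl in sorted(NVME_SYSFS.iterdir()):
        model = read_attr(ctrl, "model")
        firmware = read_attr(ctrl, "firmware_rev")
        print(f"{ctrl.name}: model={model} firmware={firmware}")
```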

Eric Burgener, a research director at IDC, singled out real-time, big-data analytics as one type of application workload that would need the level of performance that NVMe can deliver. Vendors targeting those high-performance workloads with NVMe-based storage products include Dell EMC (with DSSD all-flash storage systems), E8 Storage and Mangstor, he said.

Burgener predicted that NVMe in storage systems, also known as "rack-scale flash," would remain a relatively small but growing market over the next several years. He said the array market would grow faster once off-the-shelf NVMe devices support enterprise capabilities such as hot plug and dual port.

Also on the horizon is NVMe over Fabrics (NVMe-oF), enabling the use of alternate transports to PCIe to extend the distance over which NVMe hosts and NVMe storage devices can connect. NVM Express Inc., a nonprofit organization of more than 100 vendors, finalized the NVMe-oF specification in June 2016.

The long-term growth potential for NVMe is significant. Market research firm G2M Inc. forecast the NVMe market would hit $57 billion by 2020, with a 95% compound annual growth rate.

G2M also predicted that 60% of enterprise storage appliances and more than 50% of enterprise servers would have NVMe bays by the end of the decade. And it projected that, by 2020, nearly 40% of all-flash arrays would be NVMe-based with shipments of NVMe-based SSDs growing to 25 million units.

Software-defined storage

The technology world in general, and storage in particular, is full of loosely defined terms. The least clear storage term may well be one of the most bandied about nowadays: software-defined storage (SDS).

WhatIs.com defined SDS as "an approach to data storage in which the programming that controls storage-related tasks is decoupled from the physical storage hardware." That definition is flexible enough to cover all sorts of technologies. Those technologies do share some features, however -- mainly a focus on storage services rather than hardware, and the use of policy-based management to increase efficiency and reduce complexity when managing storage.
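
To illustrate what policy-based management means in practice, here is a highly simplified, hypothetical sketch: storage requests carry a policy (capacity, performance tier, protection level), and a software controller picks whatever backing hardware satisfies it. Every name and value below is made up for illustration; real SDS products expose far richer policy engines, and nothing here corresponds to a specific vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """A storage request expressed as desired outcomes, not hardware."""
    capacity_gb: int
    tier: str          # e.g. "flash" or "hybrid"
    replicas: int      # protection level

@dataclass
class Pool:
    name: str
    tier: str
    free_gb: int
    max_replicas: int

# Commodity pools the controller knows about (illustrative values).
POOLS = [
    Pool("flash-pool-1", "flash", 2_000, 3),
    Pool("hybrid-pool-1", "hybrid", 20_000, 2),
]

def provision(policy: Policy) -> Pool:
    """Pick any pool that satisfies the policy -- the caller never names hardware."""
    for pool in POOLS:
        if (pool.tier == policy.tier
                and pool.free_gb >= policy.capacity_gb * policy.replicas
                and pool.max_replicas >= policy.replicas):
            pool.free_gb -= policy.capacity_gb * policy.replicas
            return pool
    raise RuntimeError("no pool satisfies the requested policy")

volume_pool = provision(Policy(capacity_gb=500, tier="flash", replicas=2))
print(f"provisioned on {volume_pool.name}")
```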

One of the main areas of confusion comes from how SDS is most often used in relation to virtualized environments. That isn't a requirement, though. Thankfully, the market is slowly coming around to an agreement on exactly what SDS is. In order to be called software-defined storage, a product has to allow users to allocate and share storage resources across any workload, even if the storage isn't virtualized.

In the March 2016 issue of Storage magazine, storage analyst Marc Staimer of Dragon Slayer Consulting established four basic categories of SDS: hypervisor-based, hyper-converged infrastructure (HCI), storage virtualization, and scale-out object or file.

VMware practically owns the hypervisor-based category with vSphere Virtual SAN. As the second-oldest SDS category with products on the market, it is well-established among storage technology trends. However, it is also restricted to hardware that VMware has deemed compatible.

The bulk of the SDS market resides with HCI SDS, through products offered by giants like Cisco, Dell EMC and IBM, as well as startups like Nutanix and SimpliVity. The positive aspect of HCI SDS is that everything you need for your storage infrastructure is included and designed to work together. The negative is that only resources within the HCI can take advantage of the benefits of SDS.

Storage virtualization SDS is the grandfather of all varieties of software-defined storage, and DataCore, Dell EMC, IBM and even Microsoft offer products in this category. But just because it is the oldest doesn't mean there are no younger players (including NetApp, Nexenta Systems and StarWind Software) in the game.

Newest to market is scale-out object or file SDS. Even in this area, in addition to newer companies like Scality, you have giants competing, such as IBM with its Spectrum Storage line and Red Hat with its OpenStack- and Ceph-based products.

The continuing drop in price for commodity hardware is spurring greater adoption of SDS. With SDS, there's less of a need for specific hardware to get high levels of performance, particularly when a performance-focused type of software-defined storage is used in the enterprise. In addition, because object storage has become less of a cloud-only platform for storing data, that particular version of SDS has quickly gained ground in the data center.

In a recent interview, Avinash Lakshman, CEO and founder of SDS startup Hedvig Inc., explained why he thinks scale-out SDS is a hot technology that will continue to grow rapidly.

"The ROI is pretty simple because hardware costs are going nowhere but down. People like Amazon, Google and all these large internet-scale companies are obviously going that route. It's forced the enterprise to take a look at them and ask the question, 'If they can do a lot more with a lot less, why can't we?'"

32-gig FC

Most of today's storage technology trends work against Fibre Channel (FC). Hot new architectures such as hyper-convergence and the cloud use Ethernet with little need for FC. Ethernet also dominates storage for file-based unstructured data, which is growing much faster than the block-based structured data that often requires Fibre Channel SANs. There are barely a handful of FC networking companies left, and they all now support Ethernet as well.

On the flipside, there's flash. FC vendors and fans are counting on the rapid emergence of all-flash storage SANs to keep FC relevant, especially while the protocol transitions from 16 Gbps equipment to 32 Gbps switching and adapters. That transformation will likely make big inroads in late 2017, providing a new wave for FC to ride.

"Storage performance bottlenecks are moving out of arrays and into the storage network, so Fibre Channel will remain the data center storage protocol of choice for the next decade," Gartner research director Valdis Filks and research vice president Stanley Zaffos wrote in a recent report called "The Future of Storage Protocols."

Solid-state storage -- today mostly flash, with NVMe arriving and 3D XPoint on the horizon -- provides greater throughput and lower latency than hard disk media. Storage networks need more bandwidth, though, if storage media is to reach those performance peaks.

Filks and Zaffos added that 16 Gbps -- and even 40-gigabit Ethernet -- will be too slow to keep up with the next generation of solid-state storage. So they recommend moving to 32-gig FC and 100-gig Ethernet within five years.
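
Some rough, back-of-the-envelope arithmetic shows why. Fibre Channel is commonly rated at about 1,600 MBps of usable throughput per direction for a 16-gig link and about 3,200 MBps for a 32-gig link, so a busy all-flash array quickly eats 16-gig ports. The sketch below just runs that arithmetic; the array throughput figure is a hypothetical assumption for illustration, not a measurement.

```python
# Back-of-the-envelope: how many FC links does a flash array need?
# Usable per-link, per-direction throughput commonly cited for each FC generation.
LINK_MBPS = {"16GFC": 1_600, "32GFC": 3_200}

# Hypothetical array throughput for illustration only (~10 GBps of sustained reads).
array_mbps = 10_000

for gen, mbps in LINK_MBPS.items():
    links = -(-array_mbps // mbps)   # ceiling division
    print(f"{gen}: {links} links to carry {array_mbps} MBps")
# -> 7 links at 16-gig vs. 4 links at 32-gig for this hypothetical workload.
```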

Last year, we saw early 32-gig FC products hit the market, including switches from Brocade and Cisco and adapters from Broadcom and QLogic. Adoption is expected to pick up when storage array vendors support 32-gig, which should happen in a meaningful way next year.

Broadcom further consolidated the FC networking industry with a $5.9-billion acquisition of Brocade that is expected to close in early 2017.

Broadcom CEO Hock Tan said his company will continue to invest in FC. "If you believe in all-flash, you have to believe in Fibre Channel," he said on a conference call for the Brocade acquisition. "Even today, iSCSI and Ethernet does not offer that. We expect this market to remain relatively stable as it supports private data centers with a large installed base of Fibre Channel SANs that are constantly upgraded."

Flash and the move from 16 Gbps to 32 Gbps are expected to drive many of those upgrades.

Gartner recommended waiting a year after general availability before moving to the latest protocol. That would provide time for prices to come down, for early kinks to be worked out and for storage and servers to achieve full compatibility with the new switching. Early 32-gig FC buyers can still use their 16-gig and 8-gig storage and servers with the new switching products, however. They also can check out roadmaps to 64-gig and 128-gig FC equipment.

Adarsh Viswanathan, senior manager of product management for Cisco's storage group, expects 32-gig FC to take off late in 2017, after storage array vendors fully embrace it. "A lot of the big customers we talk to are in production with all-flash arrays and attaching mission-critical workloads on flash through Fibre Channel. Flash vendors can fill the 32-gig pipe. We expect it will get big traction in the second half of 2017."

This was last published in December 2016
