
Hot data storage technology trends for 2016

Find out what's hot and what's not-so-hot on our list of data storage technology trends for the coming year.


Here we go again -- our list of Hot Techs for 2016! For the past 13 years, we've honored the best and brightest technologies of the upcoming year. And, as always, we are proud to present a batch of technologies we believe will make a big impact.

As in years past, our list leans toward practicality -- most of our hot techs are "newish" rather than futuristic, because we want to focus on technologies that are mature enough to be proven and generally available.

Buckle up and get ready for a ride through our picks for what's hot in storage for 2016.

Copy data management

Managing numerous physical copies of the same data from multiple tools remains expensive, continues to be a management headache and even poses a security threat. That's why copy data management (CDM), which uses a single live clone for backup, archiving, replication and other data services, is one of the storage technology trends poised for stronger adoption in 2016.

The market has grown to include startups Cohesity Inc. and Rubrik, which have recently unveiled products, along with traditional vendors such as Catalogic Software, Commvault, Hitachi Data Systems and NetApp. Research firm IDC estimates copy data will cost IT organizations nearly $51 billion by 2018.

Actifio is the pioneer in this space with its copy data virtualization platform that decouples data from infrastructure and consolidates siloed data protection processes.

Cohesity launched its Cohesity Data Platform designed to converge all secondary storage workloads with an Intel-based 2U appliance that serves as a building block for its scale-out architecture. Its Cohesity Open Architecture for Scalable, Intelligent Storage (OASIS) software includes quality of service management to converge analytics, archiving and data protection on a single platform.

Rubrik came out with its data management product in 2015, selling a 2U appliance with built-in software that performs backup, deduplication, compression and version management. Hitachi leverages its Hitachi Data Instance Director (HDID) and the Hitachi Virtual Storage Platform to help reduce copies.

CDM differs greatly from traditional storage management because it streamlines a siloed process in which customers use multiple tools from multiple vendors, particularly for data protection.

"Today, there is a bunch of fragmentation in secondary storage," said Cohesity CEO Mohit Aron. "A customer goes and buys a bunch of different products from multiple vendors and somehow has to interface them together manually, managing them through multiple UIs. That becomes a major manageability headache."

Aron acknowledged the evolution of CDM products with varying capabilities.

"Cohesity Data Platform converges all your data protection workflows on one appliance," Aron said. "We have a single pane of glass that can be used to manage all these workloads. The analogy I use is that our infrastructure is similar to what Apple did with the iPhone. We are building the infrastructure and the platform that can deploy some native applications to solve these customer use cases. In the future, we want to expand and have other vendors and even third parties write software on our platform."

"There are three kinds of companies who say they do copy data management," said Ash Ashutosh, founder and CEO of Actifio. "First, there are backup guys. They take snapshot management and put lipstick on it and call it copy data management. Then you have guys who say ‘if you have 14 storage devices, buy ours as the fifteenth.' What we do is different. We're completely independent of infrastructure. We want to manage data from the time it's created across its entire lifecycle. We provide instant access, and manage data to scale, regardless of where it is."

The goal for all these products is to maintain a balance between safe and accessible data by reining in the number of rogue copies of sensitive data created via conventional data protection platforms.


Erasure coding

Growing adoption of object storage and cloud-based backup storage, along with the emergence of high-capacity hard disk drives (HDDs), has turned up the temperature on erasure coding over the past few years, and it is projected to be one of the hot storage technology trends in 2016. Petabyte- and exabyte-scale data sets make the use of RAID untenable, said George Crump, president of IT analyst firm Storage Switzerland.

"As we move into (using) 6 TB and 8 TB drives, erasure coding is the only technology that can provide data protection feasible for larger volumes of data. If you put high-capacity drives in an array, you're looking at weeks of recovery with RAID. With erasure coding, you're looking at hours," Crump said.

Erasure coding uses a mathematical formula to break data into multiple fragments, and then places each fragment in a different location within a storage array. Redundant data components are added during the process, and a subset of the components is used to reproduce original data should it get corrupted or lost.

The goal of erasure coding is to enable faster drive rebuilds. The process of copying data and scattering it across multiple drives is similar to RAID. However, erasure coding differs from RAID in scale and data longevity. If data gets corrupted or lost, only some of the "erased" fragments are needed to reconstruct the drive. The technique also preserves data integrity by tolerating multiple drive failures without performance degradation.

Today, the use of erasure coding is considered table stakes for object storage providers, including leading vendors such as Amplidata (acquired by HGST), Caringo, IBM Cleversafe and Scality. But, block and file storage vendors are getting in on the action as well. Hyper-converged array vendor Nutanix in July integrated proprietary EC-X erasure coding in a version upgrade to its Nutanix Operating System. Nexenta Systems added erasure coding support for block and object storage in a version upgrade to its NexentaEdge software in May.

Erasure coding is the core data protection mechanism for cloud-based object storage because it scales to protect vast amounts of data. Thus far, users are moving data to the cloud mostly for specific use cases such as backup and active archiving, a trend that is expected to keep rising.

"Erasure coding is the type of design that's ideal for an object storage system: a scale-out, multi-node storage infrastructure. It is a way to provide RAID-like protection across nodes, instead of contained within a single storage system," Crump said.

Next-generation storage networking

Flash and virtualization are key drivers fueling the rise of next-generation storage networking as a storage technology trend, whether it's Fibre Channel (FC), Ethernet or InfiniBand.

Shipments of 16 Gigabit per second (Gbps) FC switches and adapters should remain hot next year, while 32 Gbps gear starts to warm up. Brocade and Cisco will focus their roadmaps on 32 Gig switches. QLogic got the ball rolling this fall with 16 Gbps/Gen 5 FC adapters that customers can upgrade to 32 Gbps/Gen 6 in 2016.

Vikram Karvat, vice president of products, marketing and planning at QLogic, said flash storage vendors were "banging down the door" for 16 Gbps quad-port FC adapters, capable of delivering 16 lanes of PCI Express 3.0, to address the demands of virtualization, analytics and transaction-heavy workloads.

"This level of performance isn't for everybody, but when you need it, you need it," said Karvat. "Ethernet is very good at certain things. I haven't got a bias one way or the other. But, there are certain workloads that Fibre Channel has been tuned for. It just works."

Casey Quillin, director of SAN, network security and data center appliance market research at Dell'Oro Group, said 16 Gbps FC has largely been a switch story to date because there weren't many 16 Gbps ports on servers or storage arrays. He expects 16 Gbps FC adapters to play "catch up" next year and reach nearly 50% of total FC port shipments by the end of 2016.

Quillin said Brocade is working with FC adapter companies to "make sure the ecosystem is better rounded out" with 32 Gbps than it was with 16 Gbps. But, he still expects the ramp to 32 Gbps to be slower than the migration to 16 Gbps.


The main trend in Ethernet-based storage networking will be 25 Gigabit switch and adapter chips with ports that enable companies to use the same class of cables they deployed with 10 Gigabit Ethernet (10 GbE). The original Ethernet roadmap called for a jump from 10 GbE to 40 GbE, but 40 GbE technology required an upgrade to thicker, more expensive cables.

Networking vendors rallied around standards for new single-lane 25 GbE switch and adapter chips in response to the needs of hyperscale cloud service providers. The ports on the new 25 GbE chips use the same number of pins and lanes on the server PCIe bus as 10 GbE ports do. The roadmap extends to 50 GbE and 100 GbE, with the latter using four lanes of 25 GbE.

"The big advantage of 25 (GbE) to 50 (GbE) is you don't have to replace what you've got to get to 100. It's a much simpler progression of getting higher performance without adding a lot of cost. That's why it's going to take off," said Marc Staimer, president of Dragon Slayer Consulting. "The next gen is going to be 25 (GbE) to 50 (GbE); 40 Gig's going to end up dying on the vine."

Networking options are already available for both speeds. Dan Conde, an enterprise networking analyst at Enterprise Strategy Group, said users are deciding whether to go to 25 GbE or 40 GbE based on vendor support and cost savings.

Meanwhile, InfiniBand continues to focus on high-performance computing (HPC). The current dominant speed is 56 Gbps, but the transition to 100 Gbps should heat up in 2016 fueled by HPC, big data and Web 2.0 applications, according to Kevin Deierling, vice president of marketing at Mellanox Technologies.

Sergis Mushell, a research director at Gartner Inc., said flash will give users reason to upgrade to next-generation storage networking. "Because flash is going to drive higher IOPS, bandwidth and latency are becoming more and more important. If you really want to get the value out of the flash, you need lower latency and higher bandwidth," he said.

Yet, more than higher bandwidth, the most prominent storage networking trend in 2016 could be the emergence of products supporting non-volatile memory express (NVMe) over FC, Ethernet or InfiniBand fabrics, according to Mushell. He said the lighter NVMe protocol layer reduces the command set to address the array and improves performance.

Deierling said the ever-increasing amount of data that must be available in real time will start to drive software-defined flash storage utilizing remote direct memory access (RDMA). He said flash storage needs fast RDMA-capable interconnects, where the higher-speed networking comes into play.

Object storage

We first pronounced object storage a hot technology in 2012, and it's even hotter now. With more complete offerings from vendors and concrete use cases defined, the technology is poised to make a bigger splash among storage technology trends in 2016.

Unlike file systems, object storage systems store data in a flat namespace with unique identifiers that allow data to be retrieved without a server knowing where that data is located. The flat namespace also allows a far greater amount of metadata to be stored than a typical file system can hold, making tasks like automation and management simpler for the administrator. These days, the technology is being used for long-term data retention, backup and file sharing.

Until recently, object storage system options were limited -- most were systems that used a REST-based protocol on proprietary hardware. "Now, object vendors are packaging systems in such a way that traditional IT can take advantage of them," said Crump. "They're providing more protocol access like NFS, CIFS and iSCSI, and they're also providing more cost-effective back ends."

Some of today's vendors are focusing more on the software so that users can select their own hardware for a lower cost and easier integration into the main data center. Object storage software vendor Caringo, for example, in September launched FileFly software, which allows users to move their data between object storage and file systems.

"Broad adoption has to be in the legacy data center, and the legacy data center is seeing what cloud providers are doing and adopting that capability into that use case," Crump said.

This is also demonstrated by HGST's March acquisition of object vendor Amplidata, and IBM's October acquisition of Cleversafe -- signs that legacy vendors realize how important object technology is for backup and archiving strategies.

One of the main drawbacks to object technology is latency introduced due to the amount of metadata. But the most obvious use cases are ones where performance is not a primary concern. In-house file sync and share, for example, is becoming more popular as a means to reduce shadow IT and increase business productivity.

We also saw increased interest in big data lakes over the past year. The addition of multi-protocol support from many vendors means object storage, with its low-cost, scalable nature, is now well suited to housing this data.

"The biggest problem that was holding it back was nobody was going to buy object storage just because it was object storage. It had to solve a problem and now we've better identified what those problems are," Crump said.

Software-defined storage appliances

After two years of non-stop talk about software-defined storage, vendors are realizing even the best storage software still requires good hardware to work.

The pendulum began swinging back to hardware in 2015. We saw startup Savage IO release a hardware array built to run somebody else's storage software. Software-defined storage products such as EMC's ScaleIO and Cloudian HyperStore came out on appliances. Dell came out with its Blue Thunder project that makes its hardware available for other vendors' storage software, and lined up VMware, Microsoft, Nutanix, Nexenta and Red Hat as partners.

SanDisk launched the InfiniFlash IF100, a flash-only array that runs other vendors' software and signed up software-defined storage vendor Nexenta as one of its first partners.

The hardware doesn't even have to be new to be a part of this trend. Curvature Solutions will even sell used storage bundled with DataCore SANsymphony-V, which was software-defined storage before it became cool.

With more hardware options available, vendors' unabashed claims of being software-defined began subsiding. "We're definitely not software-defined storage, since we include a rack-mounted appliance," said Brian Biles, founder and CEO of Datrium, when the startup launched with its DVX Server Flash Storage System in July. When was the last time you heard a storage vendor say that? Datrium does have DVX software, but it only runs on its storage. Still, vendors in the past few years might have tried to position that type of setup as software-defined storage.

Savage IO took the notion of a storage appliance up a few notches. The SavageStor 4800 is a 4U, 48-drive system with 12-core processors that supports Fibre Channel, InfiniBand and solid-state drives. It is designed for high-performance computing, big data analytics and cloud storage. However, Savage IO doesn't develop software -- SavageStor must run either commercial storage management software or open source applications, such as Lustre, OpenStack or CentOS. "This is a Ferrari powertrain you can match up to your software if you need that type of performance," John Fithian, Savage IO's director of business development, said of SavageStor.

The EMC ScaleIO Node and Cloudian HyperStore FL3000 appliances package software originally designed as software-defined storage onto hardware for customers who don't want to build their own storage. And that's apparently most customers.

"The mainstream storage buyer still wants an integrated appliance," said Ashish Nadkarni, IDC program director for enterprise storage and servers. "They want to benefit from software-defined storage, but aren't ready to trade that for the comfort of having it all on one box."

We certainly haven't heard the last of software-defined storage, or software-defined technology in general. But we expect the hardware that actually stores the data to receive its fair share of attention now.

Next Steps

Hot storage tech predictions for 2013

Our picks for hot data storage techs in 2014

Hot storage tech picks for 2015

This was last published in December 2015


Join the conversation


Which storage technology trends do you see emerging in 2016?
I think we will start to see SSDs take a bigger role in storage and data centers. Just look at 3D NAND -- more organizations will definitely move toward it.
You should also look for more products and practices that link on-premises storage systems and operations to cloud-based resources. The hybrid model is perfect for tiering both secondary and backup data. As links to cloud services get built into storage arrays and hyper-converged systems, more companies will tap into the cloud as a scale-out resource for less active data, including archive.
I think the growing interest in flash storage, which has already become a phenomenon, will continue, but hybrid storage solutions aimed at winning larger accounts will find more traction in 2016 (Nimble Storage is a very good example).

Hybrid storage is based on mixing flash and hard drives, and it aims to win a place in large accounts with Fibre Channel connectivity, a management console, capacity and so on. The general trend is tilting toward flash, but not every application needs it; those that do require performance, such as databases and transactional workloads, can get performance close to all-flash from hybrid storage, with latency of roughly a millisecond.

For example, Nimble Storage claims around 15K IOPS. To achieve this, it relies on its Adaptive Flash approach, copying some data onto SSDs rather than tiering it. It also relies on NVRAM to reduce writes to the SSDs. Add to this triple parity for data protection. Overall, the solution looks promising and seems one step ahead of the competition.
Data reduction and data management will gain more momentum as companies deal with larger application footprints. Look for renewed interest in data classification, i.e., smart storage systems, as well.
I see companies adopting SSDs more broadly and offering this option on all their plans.
Well, most erasure codes are based on Reed-Solomon Error Correcting Codes, which takes us back to the days of X.25 packet switching networks. Describing erasure coding as similar to RAID can be confusing. Erasure coding operates on data objects, and RAID operates on disk blocks. Also, erasure coded objects are not "striped" across an array of disks residing in a single storage server. Erasure coding "fragments" data objects and creates "parity" fragments, then "disperses" the data and parity fragments among a certain number of disk drives located on different storage server nodes residing in the cluster. Erasure coding data objects does not rely on RAID controllers to protect against disk drive failure. Individual disk drives are never "rebuilt" when they fail; the missing data or parity "fragments" are re-created on different disk drives in the cluster nodes. Using JBOD storage in cluster nodes is all that is needed to erasure code data.
