
Vendors target storage for containers with DevOps in mind

Enterprises have new tools to manage storage for Docker containers in production. Legacy vendors and startups are adding techniques for data protection, mobility and persistent storage.

Data protection and persistence issues have kept Docker containers on the fringes of enterprise storage, but that is changing. Legacy storage vendors and startups are launching products to make it easier to create and manage storage for containers.

Traditional array vendors are staking out territory in this new aspect of software-defined storage, driven by enterprises' desire for a more agile infrastructure to power data center operations.

Dell, EMC, Hitachi Data Systems, Hewlett Packard Enterprise and NetApp have rolled out support for containerized applications on their storage arrays. Hyper-converged pioneer Nutanix also got in on the act, adding Acropolis Container Services as part of its Acropolis data fabric.

Startups including Coho Data Inc., Datera Inc., HyperGrid Inc. (formerly Gridstore) and Zadara Storage have integrated orchestrated deployment of their storage for containers. And open source stealth entrants are emerging with claims that their tools will enable a transition for containers from ephemeral applications to persistent storage.

The flurry of vendor activity corresponds to a desire to expand container use cases, said Henry Baltazar, a research director of storage at IT analyst firm 451 Research.

"Containers are in the middle of a transition period," Baltazar said. "A lot of the deployment so far has been using containers as ephemeral infrastructure. The biggest change has been the focus on persistent storage for holding the data. With persistence, if something in a container dies, you can bring it back and have it be in the right place."

Primary storage for containers needs help for persistence

Docker was conceived as a lightweight alternative to virtualization. A container is short-lived, spawned to execute a specific function and then deleted once the task is completed. Development teams use containers to build and deploy distributed applications, sometimes running containers and virtual machines (VMs) side by side.

Generally, IT admins who have coped with a sprawling VM farm will find it much easier to manage Docker containers. Managing storage for containers has been another issue, though.

"People like the fact that containers leave no trace afterward," Baltazar said. "They provide a good way to kick up processes almost instantaneously. You could have hundreds of containers on a server, and as soon as the work is done, the container goes away and resources are freed up."

However, moving containerized data between nodes presents challenges. Whereas a VM can be moved in its entirety, a migrated container natively retains only an application's logic, not its data. That is a huge challenge for providing storage for containers.
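The gap shows up in Docker's own CLI: a container's writable layer disappears with the container, while a named volume outlives it. A minimal single-host sketch (the volume and image names are illustrative, and the commands assume a running Docker daemon):

```shell
# Create a named volume; it lives outside any one container's lifecycle
docker volume create appdata

# A throwaway container writes into the volume, then removes itself (--rm)
docker run --rm -v appdata:/data alpine sh -c 'echo "app state" > /data/state.txt'

# A brand-new container sees the data the first one left behind
docker run --rm -v appdata:/data alpine cat /data/state.txt
```

On a single host this works out of the box; carrying that volume along when the container is rescheduled onto another node is precisely the gap the vendors in this article are targeting.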

As the technology matures, enterprises are experimenting with containers in production. Companies in financial services and related industries spawn log-in containers to help verify online users, for example. Once user authentication is complete, the container gets zapped and consumed resources return to the grid.

Scott Sinclair, a storage analyst at Enterprise Strategy Group, said 2016 has ushered in a "significant adoption cycle" for Docker.

"We are starting to see more and more organizations taking a look at how they can use containers," he said. "There's tremendous excitement by vendors, too. It's rare for me to talk with a storage vendor that doesn't have, or is planning to have, an offering for Docker."

Storage vendors take varied approaches to manage Docker containers

EMC, via its Project REX-Ray, was the first legacy storage vendor to make available persistent storage for containers. REX-Ray is an open source abstraction layer developed by EMC {code}, part of the vendor's emerging technologies division. An updated version of REX-Ray was included with the release of Docker 1.7 in June.

Here's how it works: A container runtime engine needs a certain file or folder. The request gets passed through to REX-Ray, which searches available storage and delivers the volume to its destined container host. REX-Ray was designed with EMC ScaleIO and XtremIO storage in mind, but it's not limited to those platforms. EMC competitors could access the interface by writing their own REX-Ray-compatible RESTful APIs, said Josh Bernstein, an EMC vice president of technology.
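In practice, once the REX-Ray service is running on a host, the plug-in surfaces through Docker's standard volume-driver interface. A hedged sketch of the Docker 1.x-era CLI usage, assuming a REX-Ray agent already configured against a backend such as ScaleIO or AWS EBS (the volume name and size are illustrative):

```shell
# Ask the rexray driver to provision a volume on the backing storage
docker volume create --driver rexray --name pgdata --opt size=20

# Mount it into a container; REX-Ray attaches the volume on whichever
# host the container runs on, so the data can follow the application
docker run -d --name db -v pgdata:/var/lib/postgresql/data postgres
```

The point of the abstraction is that the second command looks identical no matter which backend REX-Ray is fronting.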

"The biggest use case for REX-Ray isn't with our storage. It's on top of Amazon Web Services," Bernstein said. "We even had one of our customers contribute a REX-Ray plug-in for Google Compute Engine."

Two REX-Ray-related components introduced this year further EMC's container focus. The first is libStorage, billed as a common storage library for use by Docker and other container runtimes. The other is Polly -- short for polymorphic volume scheduling -- which automates container storage policies for allocation, chargebacks and data security.

"Our goal is to treat container storage as a first-class citizen and a finite resource," Bernstein said.

Others have also tried to provide friendlier storage for containers. HyperGrid has de-emphasized hyper-converged hardware in favor of a container-based utility software model. HyperGrid in July merged with open source container startup DCHQ, whose product development includes a plug-in for EMC REX-Ray.

HyperGrid plans to continue selling grid-based storage hardware, but the vendor's future growth increasingly will be tied to Docker, CEO Nariman Teymourian said.

"We consider the appliance to be commoditized," Teymourian said. "We see the industry moving to completely isolated persistent storage that is mapped to containers running on machines, whether physical or virtual."

Hewlett Packard Enterprise (HPE) said it will bundle Docker Engine across its new line of servers and hyper-converged gear later this year. That includes HPE Converged Architecture 700 and Hyper Converged 380 appliances. HPE also introduced a Docker-integrated Native Volume Plugin to launch persistent container storage on 3PAR StoreServ all-flash storage arrays.

Hitachi Data Systems is supporting Docker and vSphere Integrated Containers in its all-flash Unified Compute Platform HC V240 hyper-converged system. IBM added Bluemix Container Service to its Bluemix platform as a service, which runs on IBM SoftLayer cloud storage.

End users are driving storage vendors to accelerate Docker support, said Val Bercovici, the CTO at NetApp's SolidFire all-flash platform. And those users are developers rather than traditional storage administrators.

"When it comes to actual data persistence or state -- no one who uses containers calls it storage -- vendors have to consider a whole different set of semantics," Bercovici said during a session at the Flash Memory Summit in August. "Once data gets to a highly containerized environment, there are different vocabularies and very different expectations on how to interact with the data."

NetApp Docker Volume Plugin (nDVP) lets users choose their preferred Docker orchestration framework on NetApp All-Flash FAS, EF-Series and SolidFire SF Series all-flash arrays.

"We designed nDVP to be very flexible with our storage. The idea is to give you one workflow interface for managing persistent storage in any container orchestration environment," Bercovici told SearchStorage in a separate interview.
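Mechanically, nDVP follows the same Docker volume-plugin pattern as the other vendors' drivers. A sketch assuming the plug-in is installed and pointed at a SolidFire or FAS backend (the driver name and size option shown here are assumptions about typical defaults, not confirmed by the article):

```shell
# Provision a volume through the NetApp driver; the size option is backend-specific
docker volume create -d netapp --name projdata -o size=10g

# Any orchestrator that speaks the Docker volume API can then mount it
docker run --rm -v projdata:/mnt/data alpine ls /mnt/data
```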

Where do container startups fit?

Storage-focused startups are coming up with imaginative techniques to manage Docker containers. Container and storage requirements are becoming more closely linked, with an emphasis on data services and self-service provisioning.

Newcomers such as Portworx, Rancher Labs, Robin Systems and StorageOS take distinct, though partly overlapping, approaches to handling storage for containers.

  • Portworx PWX is file system software that runs inside a Docker container. PWX mounts elastic block storage that can be shared with other containers.
  • Rancher Labs container software runs in a VM to provide persistent storage services directly on a host.
  • Robin Systems Containerization Platform includes "container-aware block storage" and a fabric controller to orchestrate application manifests, application lifecycles, clones and snapshots.
  • StorageOS uses traditional storage protocols to present persistent storage to its data plane. It also permits users to attach external storage. The StorageOS container platform features enterprise tools for data reduction, quality of service, snapshots and replication across multiple tiers.

A developer's perspective

These emerging container products are promising, but developers may still need to create a workaround to manage Docker containers. "It's mostly just trying different things to see what works," said Laurie Kepford, a cloud DevOps engineer at Irvine, Calif.-based Panoramic Software Inc.

Kepford's setup uses the Scalr enterprise management platform to provide high availability to three Rancher pool instances. About a dozen containers spread across three servers handle continuous log monitoring for an open source Graylog system, backed by mounted GlusterFS file storage for failover.

Finding the best configuration was a matter of trial and error, Kepford said. She switched to Rancher Labs after initially using a REX-Ray plug-in from DCHQ.

"The way I have it now with Rancher, my containers can move from server to server, but my data is always there when I need it," Kepford said.

Docker containers so far have revolved primarily around Linux distributions, but containers could soon get a boost within Windows shops. Enterprise-ready Docker containers are among the enhancements in Windows Server 2016, due for release in late September.

Windows Server 2016 will give users two options to run containers: Windows Server containers and Hyper-V containers. Windows Server containers share the host operating system kernel rather than running atop an abstraction layer, while Hyper-V containers are hosted inside lightweight VMs for isolation at the hypervisor level.
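On Windows Server 2016, the choice between the two modes is expected to surface as a per-container flag in the Docker CLI; a sketch assuming the syntax Microsoft has previewed (the image name is illustrative):

```shell
# Windows Server container: shares the host kernel, lowest overhead
docker run --isolation=process microsoft/windowsservercore cmd /c echo shared-kernel

# Hyper-V container: the same image, wrapped in a minimal utility VM
docker run --isolation=hyperv microsoft/windowsservercore cmd /c echo hypervisor-isolated
```

The same image runs in either mode; the flag only changes the isolation boundary around it.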

When Microsoft throws its weight behind new technologies, it tends to signal the start of broader adoption among data centers, Baltazar said.

"I think we'll start seeing more containers serve general-purpose storage over the next couple years," he said. "Docker also is creating its own public marketplace where users can register their own container applications. That will make getting Docker applications as easy as downloading an app to your iPhone."

Next Steps

Ways to protect your data in Docker from disasters

Docker usage grows, but few take full advantage of its features

Preparing your IT system for Docker on Windows


Join the conversation

How do you manage Docker containers to provide persistent storage?

Seems like none too soon to add storage and security functionality to containers. I have to wonder, though, how much people are counting on EMC, what with the acquisition and all. Even if the project stays and the people aren't laid off, how many of them are going to hang around?

Since I was interviewed for this article, I have successfully switched to using AWS Elastic File System (EFS) for my persistent storage. I would also like to correct the comment about my Graylog system: it uses about a dozen containers, spread across three servers. I am also using Scalr instead of AutoScaling to manage my Rancher pool instances. -- Laurie Kepford
