During the Red Hat Summit last week, the vendor provided roadmaps for its Ceph and Gluster storage software products, including unified management technology and expanded protocol support for Ceph.
Red Hat demonstrated the new unified capabilities that will allow users to install, manage and monitor Red Hat's Gluster and Ceph storage. Additional capabilities targeted next year for Red Hat Ceph Storage include support for iSCSI and NFS and improved multi-site capabilities, according to Neil Levine, a Red Hat director of product management.
"Ceph is well known for its robustness," Levine said, "but translating that such that people don't have to become Ceph gurus to experience all that benefit is definitely the main challenge."
Themes in Red Hat Gluster Storage for the upcoming 3.2 and 4.0 releases, due next year and beyond, include "NAS on demand," or "file as a service," with dynamic provisioning capabilities, as well as compression and inline deduplication, according to Sayan Saha, head of product management for Gluster.
Because Red Hat's software products are based on open source code from the Ceph and Gluster project communities, the exact timing of some features may depend on work in the upstream community, Saha noted.
"They have a terrific roadmap," said Ryan Nix, a senior IT consultant for the School of Education and Social Policy at Northwestern University, which uses open source Gluster for basic file storage. Nix said the school is working with Red Hat on a bundled package now that developers are moving into areas such as Red Hat's OpenShift cloud computing platform as a service.
Several Red Hat customers who do not use its subscription-based storage software said they could foresee at least checking out Ceph and Gluster, especially because most already use Red Hat Enterprise Linux (RHEL), if not more of the company's open source-based software products.
Amy Brown, a lead Linux engineer in the open systems division at American International Group, said the insurance company primarily uses EMC storage. But she said she could envision evaluating Ceph or Gluster to store rarely accessed data to save on hardware, licensing and maintenance costs, because Red Hat's storage software can run on commodity hardware.
"Those licensing and maintenance fees are killers," especially for storing data that "you're literally just parking," said Brown. "There's been a huge shift in the last two years. You notice your storage when you start keeping big data. It never shrinks. It only grows."
Red Hat's software-defined storage that can scale out to accommodate containers is of interest to Dennis Avondet, a senior manager of technology at an independent software vendor in the health care industry, which he asked not to identify.
"We want to go down that path," Avondet said. "It's just that we have a large monolithic application, and it's trying to figure out how to adjust those types of containers to fit in that type of model without rewriting the whole application."
Saha said Red Hat's RHEL Atomic Host supports NFS, and customers can use NFS to mount Gluster from inside a container and start storing data. He said Red Hat is also looking into a "kind of hyper-converged layer where you can actually run Gluster inside a container and then serve out storage that is in there to other containers that are running in the same set of servers.
"That way, you don't need two separate layers," Saha said. "It's very similar to how it's done for VMs. We are seeing some customers pushing the envelope and trying to bring it to that level."
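Saha's NFS approach amounts to a standard volume mount from the container's point of view. A minimal sketch, assuming a Kubernetes environment and a Gluster volume exported over NFS — the server address, export path, image and pod names are all hypothetical, not from the roadmap:

```yaml
# Hypothetical pod spec: mount a Gluster volume, exported over NFS,
# into a container. All names and addresses here are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-gluster-nfs
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: gluster-data
      mountPath: /data              # where the application sees the Gluster volume
  volumes:
  - name: gluster-data
    nfs:
      server: gluster.example.com   # NFS endpoint exported by the Gluster cluster
      path: /myvolume               # Gluster volume exported over NFS
```

The hyper-converged model Saha describes would go a step further, running the Gluster daemons themselves in containers on the same hosts rather than on a separate storage tier.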
Levine addressed the container "buzzword" at the end of his Ceph roadmap session, saying it's "the only way to make storage truly sexy." He cited the Ceph RADOS Block Device driver for Kubernetes (an open source orchestration system for Docker containers), an Amazon Simple Storage Service back end for OpenShift, and a community project to get Ceph to run inside containers.
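The RBD driver Levine cited lets a Kubernetes pod consume a Ceph block device directly as a volume. A hedged sketch of what such a pod spec might look like — monitor address, pool, image name and secret name are all assumptions for illustration:

```yaml
# Hypothetical pod spec using the Kubernetes RBD volume plugin.
# Monitor address, pool, image and secret names are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-rbd
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
    volumeMounts:
    - name: ceph-block
      mountPath: /var/lib/appdata
  volumes:
  - name: ceph-block
    rbd:
      monitors:                      # Ceph monitor address (assumed)
      - 10.0.0.1:6789
      pool: rbd                      # RADOS pool holding the image
      image: app-disk                # pre-created RBD image (hypothetical)
      user: admin
      secretRef:
        name: ceph-secret            # Kubernetes secret holding the Ceph key
      fsType: ext4
```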
Another hot topic is ongoing work on the Ceph file system (CephFS). Levine said one project is making good progress on fixing serious bugs, and another is focused on file system repair tools to prevent data corruption. He said once the open source CephFS is ready in the upstream community later this year or early next year, Red Hat will consider product options.
Other possibilities for the next release of Ceph, code named "Tufnell," that Levine listed include:
Red Hat Ceph Storage 2.0 (probable version number)
- Mirroring capabilities for managing virtual block devices in multiple regions
- Support for deploying the Ceph Object Gateway in an active/active configuration across multiple sites
- Performance consistency, through intelligent scrubbing policies and improved peering logic to reduce the impact of common operations on the overall cluster
- New backing store for Ceph object storage daemons (OSDs) to boost performance on existing and modern drives, such as solid-state drives and Seagate's Kinetic key-value drives
- Guided repair to help administrators fix corrupted data
- Alerting to notify administrators of critical issues via email or SMS
Levine added that ongoing work on the RHEL OpenStack Platform (OSP) with Ceph includes quality-of-service capabilities, volume migration between slow and fast pools of storage, and disaster recovery improvements.
The RHEL-OSP 7 roadmap calls for support of a tech preview of OpenStack Manila file share services this summer, and full Manila support to follow in RHEL-OSP 8 later this year or early next year, according to Saha. He said Red Hat plans to ship GlusterFS drivers this summer to facilitate Gluster's use as a back end for Manila.
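Once those drivers ship, pointing Manila at a Gluster volume would come down to back-end configuration. A hedged sketch of a manila.conf fragment, assuming the upstream Manila GlusterFS driver — the host and volume names are hypothetical:

```ini
# Hypothetical manila.conf fragment enabling a GlusterFS back end.
# Driver path and option names follow the upstream Manila project;
# the Gluster host and volume are assumptions.
[DEFAULT]
enabled_share_backends = gluster1

[gluster1]
share_backend_name = GLUSTER1
share_driver = manila.share.drivers.glusterfs.GlusterfsShareDriver
glusterfs_target = root@gluster.example.com:/manila-vol
```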
The Red Hat Gluster Storage roadmap that Saha laid out during his session at last week's conference included:
Red Hat Gluster Storage 3.2 (code named "Fundy")
Planned: First half of 2016
- Support for open source GlusterFS 3.8, RHEL 6, RHEL 7
- Dynamic provisioning of volumes
- SMB 3.0 support
- At-rest data encryption
Red Hat Gluster Storage 4 (code named "Gir")
- Support for GlusterFS 4, RHEL 7
- Compression, inline deduplication
- Next-generation replication
- Quality of service
- Client-side caching
- Parallel NFS
- New UI, Gluster REST API