Although object-based storage refers to a class of storage with specific characteristics, when evaluating products from the leading object storage vendors, it quickly becomes clear that there is more to these products than meets the eye.
Competing object storage platforms have vastly different capabilities, so certain systems will be more suitable than others for particular functions. To help you sort through the different options, this roundup of object storage offerings examines how these products differ by access method, ability to integrate with the cloud, deployment options and data security.
Caringo Inc.
Caringo Swarm is unified object storage software that runs on standard server hardware and creates a scalable, highly available pool of storage resources. A Swarm cluster consists of at least three standard storage servers that enable data protection, as well as a management server.
Customers can purchase servers separately and run them with the Caringo Swarm software, or they can buy appliances that bundle the software and hardware. The management server can run as a virtual machine (VM). It handles cluster-level tasks, such as overall administration, network image booting and rapid metadata searches.
Data stored in Caringo Swarm can be encrypted at rest via an administration setting. Data is encrypted via a private/public key mechanism that is managed by the storage administrator. Caringo has no knowledge of the key or a back door into the data.
Data is also encrypted in transit via Caringo's FileFly SMB access product. Swarm supports both replication and erasure coding, which can be selected on an object-by-object basis.
Swarm supports SMB and NFS file sharing protocols. Understanding that the public cloud is a critical part of the emerging hybrid IT status quo, Swarm's feeds capability can integrate directly with Azure block blobs and Amazon Web Services (AWS). Caringo plans to support other cloud services.
The software is priced as a perpetual or annual subscription on a per terabyte basis.
DataDirect Networks Inc.
DDN Web Object Scaler (WOS) is a formidable choice among the leading object storage vendors, but it comes with tradeoffs. Its object storage platform enables organizations to build large scale-out storage clouds.
WOS offers broad native object protocol and access support, including Amazon Simple Storage Service (S3), Swift and REST, as well as Java, Python and C++ APIs. WOS also provides deep application integration with tools such as Commvault, and it supports Lustre and IBM's Spectrum Scale -- formerly General Parallel File System. WOS storage nodes can be distributed geographically to build a global storage cloud that is latency-aware for optimal storage and retrieval.
WOS nodes are self-contained appliances that feature a shared-nothing architecture. A cluster is a networked grouping of WOS nodes. Nodes are deployed in zones, which can be a logical organization within a data center or dispersed over geographical locations. These zones enable various data protection policies to be applied across nodes. The Amazon S3 interface and other interfaces can be deployed as part of a WOS node or as discrete gateways, depending on customer scaling and performance requirements.
WOS is available in an array of appliance choices, as well as in a software-only, bring your own hardware version that can run on physical or virtual machines. All WOS installations support customizable k + m erasure coding within the system itself, and they include the ability to distribute shards across systems. Additionally, DDN has a multilayered approach to multisite, which first distributes shards across sites, and then further protects them within a WOS node via internal erasure coding.
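The capacity cost of a k + m scheme follows directly from the shard counts: usable data is spread across k shards, m parity shards are added, and any m shards can be lost. A minimal sketch of the arithmetic (the specific schemes shown are illustrative examples, not DDN defaults):

```python
# Storage overhead and fault tolerance of a k + m erasure coding scheme
def ec_profile(k: int, m: int) -> dict:
    """k data shards plus m parity shards."""
    return {
        "raw_per_usable_tb": (k + m) / k,  # raw capacity consumed per usable TB
        "tolerated_failures": m,           # shards that can be lost without data loss
    }

print(ec_profile(8, 2))  # {'raw_per_usable_tb': 1.25, 'tolerated_failures': 2}
print(ec_profile(4, 2))  # {'raw_per_usable_tb': 1.5, 'tolerated_failures': 2}
# Compare triple replication: 3.0x raw per usable TB for the same two-failure tolerance.
```

Wider stripes (larger k) cut the overhead for the same failure tolerance, at the cost of touching more nodes on every read and rebuild.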
Unlike other, more general-purpose products from other object storage vendors in this roundup, DDN targets WOS as a complement to high-performance computing efforts.
With performance as the driving factor, WOS omits some data security capabilities that could degrade performance. For example, encryption at rest is not available, although the product can encrypt data in flight between nodes.
NFS and SMB/CIFS capabilities can be added via the optional WOS Access feature. The DataMigrator WOS add-on enables customers to migrate data back and forth between an on-premises environment and the public cloud.
For the software-only version, WOS is priced on a per terabyte basis.
Dell EMC
Dell EMC Elastic Cloud Storage (ECS) is a general-purpose product that natively supports Swift, Amazon S3, NFS and Hadoop Distributed File System (HDFS), as well as Dell EMC's Atmos and Centera protocols. Sites that require SMB or CIFS support will have to use a gateway -- which Dell EMC can provide -- running on a Windows server and communicating with ECS via Amazon S3.
HDFS enables in-place analytics without requiring the compute host to copy and replicate the data on direct-attached storage. This can improve the overall efficiency of the analytics process, making ECS suitable for Hadoop environments.
ECS supports both data at rest and data in transit encryption, with the latter requiring an additional license to enable. Encryption at rest uses a standard AES-256 algorithm, and it can be implemented on the namespace level or the bucket level.
ECS is available as software, via an appliance purchase from Dell EMC; through public cloud providers, such as Virtustream Inc., Iron Mountain, Danube IT Services and Vodafone Cloud and Hosting; or through a hybrid cloud managed service called ECS Dedicated Cloud. ECS Dedicated Cloud is on-demand ECS storage managed by Dell EMC that runs on dedicated, single tenant servers hosted in a Virtustream data center. It is available in hybrid and fully hosted multisite configurations.
Erasure coding protects against disk or node failures. ECS also supports replication, with three copies of data being written to distinct locations. ECS includes a mechanism that increases storage efficiency as you add more sites.
In a georeplicated environment with multiple sites, ECS replicates chunks from the primary site to a remote site to provide high availability. Keeping full remote copies, however, can consume a large amount of additional disk space. ECS therefore applies a technique that reduces this overhead while preserving high availability, and the algorithm becomes more efficient as the number of linked sites increases. ECS also includes geoprotection across multiple sites, enabling recovery from a full-site disaster as well as from temporary network outages.
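The space savings behind this kind of multisite protection can be sketched generically. The article doesn't detail ECS's algorithm, so the XOR-combining scheme below is a common illustrative technique, not a description of ECS internals:

```python
# Generic sketch of XOR-based geoprotection: instead of keeping a full
# replica of every chunk, a third site stores one parity chunk computed
# from the chunks at two other sites.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

chunk_site1 = b"AAAAAAAA"               # chunk stored at site 1
chunk_site2 = b"BBBBBBBB"               # chunk stored at site 2
parity_site3 = xor(chunk_site1, chunk_site2)  # single parity chunk at site 3

# If site 1 is lost, its chunk is rebuilt from the surviving two sites:
recovered = xor(parity_site3, chunk_site2)
assert recovered == chunk_site1
```

Here three stored chunks protect two chunks of data (1.5x raw capacity per usable chunk) instead of the 2x or more that full remote replicas would require; folding more sites into each parity group pushes the ratio closer to 1x, which matches the article's point that efficiency improves as sites are added.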
ECS capacities can range from 360 TB to 7.8 petabytes (PB) in a single-rack configuration. This capacity is offered via hard disk drives only. ECS does not currently support flash media, which may make it undesirable for IT organizations that require high sustained IOPS and throughput, although those kinds of workloads are generally not a good fit for object storage systems.
ECS software pricing is based on capacity.
Using extensive research into the object storage market, TechTarget editors focused on well-established storage vendors that sell both object storage software and appliances. Our research included data from TechTarget surveys, as well as reports from other respected research firms, including Gartner.
Hitachi Data Systems Corp.
The HDS Hitachi Content Platform (HCP) is a general-purpose object storage system that enables IT organizations and cloud service providers to store, share, sync, protect, preserve, analyze and retrieve file data from a single system. It natively supports access protocols, including NFS, CIFS, Amazon S3, REST, HTTP, HTTPS, WebDAV, Simple Mail Transfer Protocol and the Network Data Management Protocol.
Hitachi Data Instance Director is tightly integrated with HCP and provides advanced email archive capabilities for Microsoft Exchange environments. This is a unique capability among offerings from the leading object storage vendors.
HCP is available as a fully integrated appliance with nodes connected to arrays via commodity-based hardware, or as a software-only option that is delivered as a managed service via a public cloud or as a VM. In an HCP-VM system, each node runs in a VM on a VMware vSphere host. Depending on the capabilities of the underlying hardware, more than one HCP-VM node can run on a single vSphere host. HCP relies on the VMware Infrastructure to provide highly available and fault-tolerant storage.
The HCP software offers both data in flight and data at rest encryption, protecting objects and metadata for all HCP configurations, including cloud destinations.
However, there are subtle capability differences that depend on the type of storage used. Compression and deduplication of encrypted objects only work with bulk storage supplied by local disks or SAN disks, not when sending content to HCP storage nodes or to public cloud services. For data in flight encryption, HCP nodes use Secure Sockets Layer/Transport Layer Security (SSL/TLS) to establish a secure transport between the client end user or application and HCP. HCP ships with a default, self-signed certificate.
HCP's data protection features include RAID 6, erasure coding and replication. RAID 6 enables a node to operate even when two concurrent disk failures have occurred. Erasure coding protection means that at least six concurrent disk failures can be sustained. Replication enables one to four copies of objects -- depending on the number of storage nodes -- to be made to ensure integrity and availability.
HCP supports up to 800 PB on-premises and virtually unlimited off-premises capacity in the public cloud. A six-site, geodistributed erasure coding cluster supports up to 4.8 exabytes. At the low end, the smallest production deployments are four virtual nodes with 4 TB total capacity.
HCP's software licensing costs are based on a combination of storage type -- direct-attached, erasure-coded storage or cloud storage -- and capacity.
IBM
IBM Cloud Object Storage (COS) is a flexible, scalable and simple cloud storage technology. IBM claims COS can provide a low total cost of ownership and that configurations can achieve greater than 99.99999% availability when properly deployed.
COS provides encryption in flight and at rest, and it supports object storage use cases ranging from new cloud applications to active archives, content repositories, backup and storage as a service. IBM COS supports deployment into on-premises environments, the cloud and hybrid scenarios that span both.
COS uses IBM's patented SecureSlice technology to encrypt each object using standard AES-128 or 256-bit encryption and a SHA-256 hash as it disperses the data to the storage devices.
On the protocol side, COS supports Amazon S3, REST, NFS and CIFS, although CIFS is currently enabled via a third party. IBM plans to add native CIFS support in 2017.
COS can be deployed as software only, licensed by raw terabyte; as an appliance with combined hardware and software; or as a cloud license integrated with IBM Bluemix. COS management nodes can also run as VMs or in Docker containers, but data storage nodes and NFS access nodes must run on physical servers in a production environment.
COS uses a distributed erasure coding mechanism that transforms data on ingest, slices that data into pieces and distributes those pieces across the set of available nodes. No single node holds all the data, which reduces the exposure of any one node to a data breach, while a user needs access to only a subset of the slices to fully retrieve the stored data.
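A toy sketch of the slice-and-verify idea follows. This is not IBM's SecureSlice implementation -- SecureSlice also encrypts each slice -- the sketch shows only the slicing and per-slice SHA-256 integrity checking:

```python
import hashlib

def slice_with_digests(data: bytes, n_slices: int):
    """Cut data into n_slices pieces, pairing each with its SHA-256 digest."""
    size = -(-len(data) // n_slices)  # ceiling division
    slices = [data[i:i + size] for i in range(0, len(data), size)]
    return [(s, hashlib.sha256(s).hexdigest()) for s in slices]

def reassemble(slices_with_digests):
    """Verify each slice against its digest, then concatenate."""
    out = b""
    for s, digest in slices_with_digests:
        if hashlib.sha256(s).hexdigest() != digest:
            raise ValueError("slice failed integrity check")
        out += s
    return out

stored = slice_with_digests(b"object payload example", 4)
assert reassemble(stored) == b"object payload example"
```

In a real dispersal system, each (slice, digest) pair would land on a different storage node, so a tampered or corrupted slice is detected at read time.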
COS cloud implementations start at 1 GB and expand in 1 GB increments. On premises, the smallest deployment is 300 TB; expansion increments depend on the configuration, but capacity is typically added in volumes of 12 disks.
On-premises software is priced by terabyte.
Scality Inc.
The last of the object storage vendors on this list, Scality focuses on enterprise IT and cloud service providers.
Scality has created its RING distributed object storage software for deployment on commodity hardware. RING provides Amazon S3-compatible object storage interfaces for new cloud applications, and standard NFS and SMB file storage interfaces for legacy applications. It offers scale-out capacity and performance to store and protect data in a single data center or across distributed data center deployments. It includes a REST-based API in addition to the Storage Network Industry Association-standard Cloud Data Management Interface (CDMI) API, which is intended to help streamline cross-vendor object storage integration.
The product provides an Amazon S3-compatible API with comprehensive support for the AWS Identity and Access Management API for secure multitenancy. It also includes a CDMI REST API, a native key/value REST interface, and a native NFS and SMB interface to object storage.
Scality RING's Amazon S3 interface provides optionally enabled AES-256 bit encryption, with integration of external key management systems. It also provides encryption in flight of secure connections of external requests over HTTP/TLS, with support for official certificate authority-provided SSL certificates and internal SSL secure connections between components.
Scality RING is pure software that runs on general-purpose physical x86 hardware and VMs, and it can be purchased on appliances offered by partners Cisco and Hewlett Packard Enterprise. It provides replication-based data protection with one to six stored copies and erasure coding with a variable number of stripes and parities.
The system can use replication and erasure coding simultaneously, and either or both methods can be deployed to optimize different applications, object storage use cases or data types. It can also be deployed across multiple data centers with geographic replication and geodistributed erasure coding.
Licensing is based on usable capacity, so customers pay only for the actual data stored, regardless of the data protection policies in use. For example, the data can be triple replicated, but that is not counted against the capacity license. Licenses are perpetual or for the hardware's lifetime.
The minimum production deployment is six servers, although fewer can be used for proof-of-concept and testing purposes. Six servers enable the minimal erasure coding schema of 4 + 2 -- four data chunks plus two parity chunks -- which tolerates at least two disk or server failures while preserving data availability. Servers can have as few as six disk drives or, in some new form factors, as many as 90 drives in a 4U appliance.
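The 4 + 2 property -- any four of the six stored chunks are enough to rebuild the data -- can be demonstrated with a toy Reed-Solomon-style code over a small prime field. This illustrates the general technique, not Scality's implementation:

```python
# Toy 4 + 2 erasure code over GF(257): four data symbols become six
# stored symbols, and any four of the six reconstruct the original data.
P = 257  # prime field size; byte values 0-255 fit as field elements

def interpolate(points, x, p=P):
    """Lagrange interpolation mod p: value at x of the unique degree-3
    polynomial through the four (xi, yi) pairs in `points`."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, p - 2, p)) % p
    return total

def encode(data4):
    """Four data symbols (positions 1-4) -> six symbols (parity at 5-6)."""
    points = list(enumerate(data4, start=1))
    return list(data4) + [interpolate(points, x) for x in (5, 6)]

def decode(symbols):
    """`symbols`: dict of position -> value with at least four entries."""
    points = list(symbols.items())[:4]
    return [interpolate(points, x) for x in range(1, 5)]

coded = encode([10, 20, 30, 40])
# Lose any two symbols, e.g. the ones at positions 2 and 5:
survivors = {x: v for x, v in enumerate(coded, start=1) if x not in (2, 5)}
print(decode(survivors))  # [10, 20, 30, 40]
```

Production systems use the same mathematics over GF(2^8) with optimized matrix arithmetic, but the recovery property is identical: m lost chunks out of k + m are rebuilt from any k survivors.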