Object storage use cases coming to a data service near you

Object storage, unlike traditional file and block storage, is well-suited to manage vast amounts of unstructured data.

Don't think you're using object storage? Think again. Object storage adoption and the growth of object storage use cases have rapidly expanded over the past five years, so there's a good chance you're already using it. And if not, trust me, given the advantages object storage brings to our data-intensive world, it will no doubt be coming to an application near you soon.

You probably first used object storage when you signed up for an online service, such as Facebook, Twitter or Spotify. These companies developed object storage architectures for their own use as the back end for storing massive amounts of unstructured data, such as photos, videos and songs. Unfortunately, access to these object storage architectures isn't available to the rest of us.

As companies started building their own internet applications, they quickly discovered that object storage was well-suited to their unstructured data storage requirements. That's largely because you can easily access object storage via a REST-based programming interface that lets applications manipulate data with simple HTTP calls. Object storage also gives you a rich, fully customizable and essentially unlimited metadata index.
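
To make that concrete, here's a minimal sketch of what those HTTP calls look like, using Python's requests library against a hypothetical S3-compatible endpoint. The host, bucket and object names are made up for illustration, and a real service would also require authenticated, signed requests.

```python
import requests

# Hypothetical S3-compatible endpoint and bucket, for illustration only.
# A real service would also require signed (authenticated) requests.
BASE = "https://objects.example.internal/media-bucket"

# PUT creates or overwrites an object; user-defined metadata travels
# alongside the data as x-amz-meta-* headers.
with open("vacation-001.jpg", "rb") as f:
    requests.put(
        f"{BASE}/photos/vacation-001.jpg",
        data=f,
        headers={
            "Content-Type": "image/jpeg",
            "x-amz-meta-camera": "phone",
            "x-amz-meta-location": "lisbon",
        },
    )

# GET returns the object itself; HEAD returns just its headers and metadata.
response = requests.head(f"{BASE}/photos/vacation-001.jpg")
print(response.headers.get("x-amz-meta-location"))
```

There's no directory tree or LUN to manage here; each object is addressed by a flat URL, and the metadata is whatever the application chooses to attach.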

With object storage, Web 2.0 application developers avoid the complexities of traditional file and block storage systems and can organize huge unstructured data sets using auto tagging, auto categorization and data analytics. Traditional storage, by contrast, isn't well-suited to manage vast amounts of unstructured data. A recent Taneja Group survey found the top three challenges of traditional storage to be the following:

  • Lack of flexibility (42% of respondents): Traditional file storage appliances require dedicated hardware and often don't tightly integrate with collaborative cloud storage environments.
  • Poor utilization (39%): Traditional file storage requires too much storage capacity for system fault tolerance, which reduces usable storage.
  • Inability to scale (38%): Traditional storage products such as RAID-based arrays are gated by controllers and simply aren't designed to easily expand to petabyte storage levels.

An easy, on-demand, pay-as-you-go approach

Developers rapidly expanded their use of object storage with the introduction of Amazon Simple Storage Service (S3) in 2006. Amazon S3 lets them use object storage as the back end for their applications, providing an easy way to get started with an on-demand, pay-as-you-go environment that scales as needed. While Amazon remains popular, not every organization wants its data in a public cloud, so many use third-party object storage platforms or work with service providers that build their applications on object storage.
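
To show how little setup that involves, here's a minimal sketch using the boto3 SDK. The bucket name is hypothetical, credentials are assumed to come from the environment, and the same code can point at a private S3-compatible store by passing an endpoint_url.

```python
import boto3

# Credentials and region come from the environment or ~/.aws config.
# For a private S3-compatible store, add endpoint_url="https://..." here.
s3 = boto3.client("s3")

BUCKET = "example-backup-bucket"  # hypothetical bucket name

# Store an object with custom metadata; capacity is allocated on demand
# and billed per gigabyte stored and per request or transfer.
with open("q1-report.pdf", "rb") as f:
    s3.put_object(
        Bucket=BUCKET,
        Key="reports/2017/q1-report.pdf",
        Body=f,
        Metadata={"department": "finance", "retention": "7-years"},
    )

# Retrieve it later by key.
obj = s3.get_object(Bucket=BUCKET, Key="reports/2017/q1-report.pdf")
data = obj["Body"].read()
```

There's no capacity to provision up front; the first put_object call simply starts the meter.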

Object storage from companies such as Cleversafe (now IBM Cloud Object Storage), Scality and Western Digital offers multiple deployment options, giving customers the flexibility to store data in private, public or hybrid clouds. These offerings have also opened the door to use cases that take object storage beyond unstructured data storage for cloud-native applications. Taneja Group has identified the top object storage use cases and their associated capabilities:

  • File backup and archival storage (57% of respondents): This use case likely tops the survey's list because demand is so strong for scalable file backup, online document archival and cost-effective long-term data retention for compliance purposes. Object storage is ideal for large-scale unstructured data storage because it easily scales to petabytes and beyond by simply adding storage nodes, eliminating the performance bottleneck of the single- and dual-controller designs used in traditional file storage. And because object storage is largely hardware-independent, companies can increase capacity by adding commodity hardware.
  • Storage as a service (44%): Object storage gives service providers a cost-effective way to manage backups using a secure, highly scalable and multi-tenant architecture. And because this is "a service," companies save on personnel, hardware and data center facility costs. An IT administrator simply rents storage space on a cost-per-gigabyte-stored and cost-per-data-transfer basis.
  • Big data analytics (35%): Object storage is designed for large data sets, making it ideal for big data analytics. In a recent survey, we found that about 30% of respondents aggregate 100 TB or more for big data use cases, with the amount of data growing substantially every month. However, you must tightly integrate object storage with low-latency storage and high-performance compute to support big data analytics and artificial intelligence for data correlation and interpretation.
  • Secure file sharing (35%): Object storage's distributed architecture inherently facilitates file sharing. Data slicing, erasure coding and geo-dispersal technologies enable automatic, efficient and secure data replication and access across remote sites; the quick calculation after this list shows why erasure coding protects data so much more efficiently than full replication. Most object storage products also offer remarkable fault tolerance that protects against site, node and multiple disk failures.
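
As a rough illustration of why erasure coding features so prominently in these use cases, here's a back-of-the-envelope comparison of its raw-capacity overhead against simple three-way replication. The 10+4 layout is just a common example, not a figure from any particular product.

```python
# Raw capacity needed to protect 100 TB of usable data.
usable_tb = 100

# Three-way replication: every object is stored in full, three times.
replication_raw = usable_tb * 3

# k+m erasure coding: each object is sliced into k data fragments plus
# m parity fragments; any k of the k+m fragments can rebuild the object,
# so the system tolerates the loss of up to m fragments (disks, nodes
# or even sites when fragments are geo-dispersed).
k, m = 10, 4
erasure_raw = usable_tb * (k + m) / k

print(f"3x replication: {replication_raw:.0f} TB raw")      # 300 TB
print(f"{k}+{m} erasure coding: {erasure_raw:.0f} TB raw")  # 140 TB
```

Protection against multiple simultaneous failures that replication buys with a 200% capacity overhead costs roughly 40% here, which is a big part of why object storage scales so economically.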

Object and the secondary world

This powerful combination of use cases and capabilities has created demand for more object storage adoption within secondary storage environments. As a result, object storage is now the dominant storage foundation for next-gen data protection, archiving and analytics apps, especially those running in the cloud. Cohesity DataProtect is a good example of a next-gen data protection offering. Object storage has also permeated all secondary data use cases. Here's why.

Companies usually have several secondary storage products or services. These include backup and recovery applications, tape archival systems, data deduplication appliances, replication software for multisite compliance and disaster recovery, file services and cloud-based archival. This multiplicity leads to high licensing costs as well as storage silos, and creates storage complexity and higher operational costs because administrators must manage, maintain and refresh multiple systems. Also, if you want to move data from one storage system to another, manual migration takes time and puts a burden on data center resources.

To reduce secondary storage complexity, dramatically simplify data protection and improve overall data management, companies must consolidate secondary storage use cases and workloads. That consolidation demands the kind of scalability that is a core strength of object storage, which also delivers the flexibility needed to support multiple use cases.

To provide all these capabilities, data protection and secondary storage vendors are rapidly moving to object-based architectures with distributed file systems. They're also adding multiprotocol and seamless multicloud support, global data deduplication and comprehensive data analytics. As these data protection and secondary data management products mature, the result will be simpler secondary data management, with no compromise in functionality, built on these object storage use cases.
