

Rethink the use of an object storage gateway for data migration

Software options that allow data to be written natively to object storage may be better for your organization than gateways that migrate data to object stores.

IT professionals often assume an object storage gateway simplifies the process of migrating data to object storage. In reality, a gateway does not migrate anything; it makes object storage accessible via standard file or block interfaces instead of a RESTful application programming interface. Today, the majority of object storage offerings have file and block gateways built in.

An object storage gateway is a target storage system that acts as a file or block storage cache in front of object storage, converting NFS, SMB or iSCSI block data into RESTful API objects. Data must be pushed to the gateway. Gateways emerged as a way for IT managers to drop object storage in as a storage target without modifying their applications or file systems to write to a RESTful API; they were not designed as a data migration tool. A gateway does add value, however, often by deduplicating and compressing data to minimize the storage consumed on the target object store and to provide the same or similar performance as traditional block or file storage. Data reduction is quite useful in easing the cost of public cloud object storage and, to a lesser extent, private cloud object storage.
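The translation a gateway performs can be sketched in a few lines. This is a minimal illustration only, not any vendor's implementation; the bucket name, key scheme and metadata fields below are assumptions:

```python
import hashlib
import json

def file_write_to_object_put(nfs_path: str, data: bytes, bucket: str = "gateway-cache") -> dict:
    """Sketch of how a gateway might translate an NFS/SMB file write into a
    RESTful object PUT. Hypothetical scheme: the file path becomes the object
    key, and file attributes become S3-style user metadata headers."""
    key = nfs_path.lstrip("/")  # /exports/docs/report.txt -> exports/docs/report.txt
    metadata = {
        "original-path": nfs_path,
        "content-length": str(len(data)),
        "content-md5": hashlib.md5(data).hexdigest(),
    }
    # The gateway would then issue: PUT /<bucket>/<key> with these headers.
    return {
        "method": "PUT",
        "url": f"/{bucket}/{key}",
        "headers": {f"x-amz-meta-{k}": v for k, v in metadata.items()},
        "body": data,
    }

req = file_write_to_object_put("/exports/docs/report.txt", b"quarterly numbers")
print(json.dumps({k: v for k, v in req.items() if k != "body"}, indent=2))
```

Every read must reverse the same mapping, which is why the gateway's metadata must remain available for the data behind it to stay accessible.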

However, using gateways to transfer data to object storage requires the same process as any other data migration project: a minimum of 32 manual, labor-intensive steps to migrate files and 34 to migrate iSCSI blocks. There is a reason data migration is considered one of the worst jobs in the data center: There is no automation, and projects require significant planning, scripting and qualified experts to implement.

Gateways also become object storage choke points, because all reads and writes must pass through them; throughput and IOPS are limited to what the gateway can handle. If the gateway goes down, data is inaccessible unless there is a second or standby gateway in the environment. This complicates data protection, business continuity and disaster recovery, because a gateway holding all of the metadata must be available at all times to access the data.


Software that writes and reads natively to and from object storage comes in several flavors:

  • Data protection in the form of backup and recovery or data copy management software.
  • Archiving software.
  • Automated data migration software.
  • General business applications that also write natively to a RESTful API (typically, Amazon Simple Storage Service).
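In code, "writing natively" simply means the application calls the object API itself, rather than writing to a mounted file system that a gateway later converts. The sketch below uses an in-memory stand-in for an S3-style store; the class and its method names are illustrative assumptions, not a real SDK:

```python
class ObjectStore:
    """In-memory stand-in for an S3-style object store: a flat namespace
    of (bucket, key) -> bytes, normally reached over a RESTful API."""

    def __init__(self):
        self._objects = {}

    def put_object(self, bucket: str, key: str, body: bytes) -> dict:
        # A real store returns an ETag derived from the content.
        self._objects[(bucket, key)] = body
        return {"ETag": str(hash(body))}

    def get_object(self, bucket: str, key: str) -> bytes:
        return self._objects[(bucket, key)]

# A backup application writing natively: no gateway, no file system in between.
store = ObjectStore()
store.put_object("backups", "daily/full-backup.tar", b"...archive bytes...")
print(store.get_object("backups", "daily/full-backup.tar"))
```

Because the application addresses the store directly by bucket and key, there is no gateway in the data path to become a choke point or a single point of failure.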

Software that writes natively to object storage has the following advantages:

  • Significantly reduced hardware, infrastructure, operations and management complexity.
  • Lower costs, as there is no gateway hardware to purchase and no supporting infrastructure to operate, manage or maintain.
  • Direct access to data from the application.
  • Better overall performance for data stored on object storage, with no gateway latency in the data path.
  • No gateways to include in a technology refresh.

Migrating data to an object storage system is accomplished by software (from Caringo, Data Dynamics and NTP Software, for example) that vacuums up data from where it originally resides and moves it to object storage based on preset policies. In this case, an object storage gateway is unnecessary. The same data migration software can also move data to a gateway, if that is the preference, as well as directly to an object store.

Next Steps

Scale-out NAS takes on object storage vendors

Why object storage technology is a better choice than file storage for cloud apps

Object storage data protection is faster, stronger
