- Jeff Kato, Taneja Group
Amazon Web Services celebrated its 10th anniversary in March of this year. As it closes in on becoming a $10 billion-a-year enterprise IT juggernaut, the cloud computing platform has changed the IT landscape forever. Not only does Amazon Web Services (AWS) remain the largest infrastructure as a service/platform as a service (IaaS/PaaS) cloud platform on the market, its growth rate has been more than double that of some of the biggest IT companies in history.
In response to this success, major system vendors such as Cisco, Dell/EMC, HP and IBM are rapidly developing private cloud infrastructures as on-premises alternatives to AWS. The goal is to make infrastructure easier to utilize and manage for customers demanding cloud-like ease of use at public cloud-like prices.
Likewise, storage vendors must do their part to improve ease of use and lower costs. That's because much of the success of AWS has been due to Amazon Simple Storage Service (S3) leading the way. S3 consists of a set of object storage API calls available via the public cloud that enable any application to store and retrieve data objects from AWS.
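What makes the S3 API so portable is its simplicity. The following is a conceptual sketch, not an AWS client: it models the flat bucket/key object semantics behind S3 with an in-memory Python class, where each method mirrors the HTTP verb a real S3 client would issue.

```python
# Conceptual sketch of the object model behind the S3 API: a flat namespace
# of buckets, each mapping keys to opaque byte objects, manipulated with
# simple PUT/GET/DELETE-style calls. This is an illustration, not an AWS SDK.

class ObjectStore:
    def __init__(self):
        self._buckets = {}

    def create_bucket(self, bucket):
        self._buckets.setdefault(bucket, {})

    def put_object(self, bucket, key, body):
        # Mirrors "PUT /{bucket}/{key}" in the S3 REST API.
        self._buckets[bucket][key] = body

    def get_object(self, bucket, key):
        # Mirrors "GET /{bucket}/{key}".
        return self._buckets[bucket][key]

    def delete_object(self, bucket, key):
        # Mirrors "DELETE /{bucket}/{key}".
        del self._buckets[bucket][key]

store = ObjectStore()
store.create_bucket("backups")
store.put_object("backups", "2016/03/db.dump", b"nightly dump")
print(store.get_object("backups", "2016/03/db.dump"))  # b'nightly dump'
```

Because the whole interface reduces to a handful of stateless calls against a flat namespace, any storage vendor can implement the same contract, which is precisely why S3 compatibility has spread so widely.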
The third pillar of storage protocols
Hundreds of commercially available applications natively support Amazon Simple Storage Service as a storage repository today, while many on-premises storage products support migrating data to S3 as a secondary storage tier or for backup and disaster recovery purposes. The enormous success of S3 ensures that Amazon maintains the API such that incremental features do not break original capabilities. This API stability means other storage vendors can move beyond just supporting S3 as a back-end tier and actually support the S3 protocol as an alternative front-end ingest service.
I believe this trend will make Amazon Simple Storage Service a de facto storage standard, and it'll become the most popular protocol for object storage on the market. S3 will, when all is said and done, take its rightful place as the third pillar of storage protocols alongside block storage and file storage.
We've seen this happen before.
In 1984, Sun Microsystems introduced NFS, a proprietary file sharing protocol that allowed Sun client machines to access files across servers. This file sharing protocol rapidly became the industry standard due to its popularity, and it is still in use today. NFS was the first protocol supported on modern NAS systems, and most modern object storage devices typically support NFS as an alternative access method.
I believe every network-based storage device will also support some version of the Amazon Simple Storage Service protocol before AWS S3 reaches its 20th anniversary.
The key to S3's on-premises future
In order for Amazon Simple Storage Service to take off as an on-premises storage protocol, storage hardware and software vendors must continue to drive down the economic crossover point where the capital and operational costs of deploying S3-compatible on-premises storage are less expensive than AWS alternatives. That is, there will be a point, similar to the decision of whether to rent or own your home, at which it is less expensive to own your storage system.
The three critical factors that drive AWS S3 costs are retention time, frequency of access and quantity of data. S3 is very cost-effective for relatively static data, with prices ranging from $0.03-$0.07 per GB per month depending on the frequency of access and amount of data to be transferred. Where it gets expensive is when a customer exceeds limits on either the frequency of access or the amount of data transferred out of the cloud. On-premises S3-compatible vendors are claiming they can provide storage in the range of $0.01 per GB per month, but that is based on buying 75 TB upfront and amortizing that cost over three years. So it'll be critical that these on-premises vendors continue to drive down the economic crossover point and also provide analytical tools so customers can easily evaluate when it is better to own versus rent storage.
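The rent-versus-own arithmetic above can be sketched in a few lines. This is a rough model built only from the figures cited in this article (all assumptions): on-premises storage at $0.01 per GB per month, amortizing a 75 TB upfront purchase over three years, versus a flat S3-style rate of $0.03 per GB per month, with egress and request charges ignored for simplicity.

```python
# Rough rent-vs-own model using the figures cited in the article.
# Assumptions: on-prem = 75 TB bought upfront, amortized at $0.01/GB/month
# over 3 years; cloud = $0.03/GB/month; egress/request fees ignored.

GB_PER_TB = 1024

def on_prem_monthly_cost(capacity_tb=75, rate_per_gb=0.01):
    # Fixed cost: you pay for the full capacity whether you fill it or not.
    return capacity_tb * GB_PER_TB * rate_per_gb

def cloud_monthly_cost(stored_gb, rate_per_gb=0.03):
    # Variable cost: you pay only for what you actually store.
    return stored_gb * rate_per_gb

def breakeven_gb(capacity_tb=75, on_prem_rate=0.01, cloud_rate=0.03):
    # Stored volume at which owning becomes cheaper than renting.
    return on_prem_monthly_cost(capacity_tb, on_prem_rate) / cloud_rate

print(on_prem_monthly_cost())  # 768.0 -> $768/month for the 75 TB array
print(breakeven_gb())          # 25600.0 -> owning wins above ~25 TB stored
```

In this simplified model, the crossover point is utilization: the 75 TB array costs $768 a month whether it holds data or not, so owning only beats renting once roughly 25 TB is actually stored, and the gap widens further when cloud egress charges enter the picture.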
On-premises S3-compatible storage won't fully supplant cloud storage, of course, as there will always be use cases where cloud products are more attractive. These would include startup businesses without on-premises data centers, for example, or those organizations that need, but have not yet invested in, globally distributed data centers.
S3 support today
In addition to hundreds of commercially available applications that support Amazon Simple Storage Service natively, I'm seeing customers rewriting in-house applications to support S3. As a result, they are seeking more S3-compatible devices from which to choose for on-premises storage. They most likely will not completely abandon AWS, but rather desire to build automatic workflow into their applications such that they can keep sensitive data in-house or eventually offload aging data to AWS. These customers could simplify their operations immensely by supporting only one storage protocol, S3, and simplifying the choice of whether to store this data locally or in the cloud.
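The workflow described above can be sketched with a hypothetical placement policy. The endpoint names, thresholds and function names here are illustrative assumptions, not any product's API; the point is that once both tiers speak the same S3 protocol, the only per-object decision left is which endpoint receives the request.

```python
from datetime import date

# Hypothetical data-placement policy (names and thresholds are assumptions):
# with one S3-style protocol on both ends, routing reduces to picking an
# endpoint per object based on sensitivity and age.

LOCAL_ENDPOINT = "https://s3.internal.example.com"  # on-prem S3-compatible store
CLOUD_ENDPOINT = "https://s3.amazonaws.com"         # AWS S3

def choose_endpoint(sensitive, last_accessed, today, max_age_days=180):
    if sensitive:
        return LOCAL_ENDPOINT   # sensitive data stays in-house
    if (today - last_accessed).days > max_age_days:
        return CLOUD_ENDPOINT   # aging data offloads to AWS
    return LOCAL_ENDPOINT       # hot data stays on the local tier

today = date(2016, 6, 1)
print(choose_endpoint(True, date(2016, 5, 30), today))   # local endpoint
print(choose_endpoint(False, date(2015, 1, 1), today))   # cloud endpoint
```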
Many storage vendors, in the meantime, are starting to support Amazon Simple Storage Service as an ingest protocol at some level. A few of them are even making S3 compatibility a major strategic push.
Cloudian, for instance, is betting big on S3 by touting its HyperStore products as the most S3-compatible of any object storage provider's. In fact, the company offers a money-back guarantee that S3 applications will work with its products as seamlessly as they work with AWS.
Spectra Logic, meanwhile, offers both tape- and disk-based archive products behind an S3-compatible gateway. Wouldn't it be great if S3 Glacier, a portion of the Amazon Simple Storage Service protocol, could actually breathe new life into the much maligned tape industry?
A tricky part of building a tape-based archive system is getting applications to handle the complex integration a tape library requires. The S3 Glacier protocol is a perfect fit for tape-based products, because it gives tape archive products access to many more commercial applications than were ever available to tape libraries in the past.
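The reason this fit is so natural is that the S3 API expresses archiving declaratively. A bucket lifecycle rule transitions objects to the GLACIER storage class after a set number of days, so an S3-compatible tape or disk archive behind a gateway can honor the same rule and move data to tape without the application ever speaking a tape-library protocol. Below is the general dict shape a client such as boto3 would send via `put_bucket_lifecycle_configuration`; the prefix and day count are illustrative assumptions.

```python
# Sketch of an S3 bucket lifecycle configuration (prefix and days are
# assumptions): objects under "archive/" transition to the GLACIER storage
# class after 90 days. An S3-compatible archive product can honor the same
# declarative rule, hiding the tape library entirely from the application.

lifecycle_config = {
    "Rules": [
        {
            "ID": "archive-old-objects",
            "Filter": {"Prefix": "archive/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
        }
    ]
}

rule = lifecycle_config["Rules"][0]
print(rule["Transitions"][0]["StorageClass"])  # GLACIER
```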
Mainstream Tier 1 primary storage providers should consider getting in the Amazon Simple Storage Service game as well by adding native S3 protocol support to their products. Even if the performance characteristics seem like a mismatch, remember that not long ago it was said that block devices should not have file services and vice versa. Now it is rare to see any Tier 1 storage provider that does not support both block and file access to the same device. To participate in the new era of unified storage, vendors will have to add S3 compatibility to their primary storage products.
Happy 10th anniversary, Amazon AWS! I'm sure your S3 protocol will be around to see not just a 20th anniversary, but a 30th. However, don't be surprised if in the future, more S3 storage is consumed on-premises than in the cloud, as a wide variety of innovative storage companies capitalize on your initial success.
About the author:
Jeff Kato is a senior storage analyst at Taneja Group with a focus on converged and hyper-converged infrastructure and primary storage.