
Amazon Simple Storage Service spurs on-premises storage

Amazon S3 has been such a smashing success that it will likely become the third pillar -- after block and file -- of storage protocols.


Amazon Web Services celebrated its 10th anniversary in March of this year. As it closes in on a $10 billion annual run rate, the cloud computing platform has changed the IT landscape forever. Not only does Amazon Web Services (AWS) remain the largest infrastructure as a service/platform as a service (IaaS/PaaS) cloud platform on the market, its growth rate has been more than double that of some of the biggest IT companies in history.

In response to this success, major system vendors such as Cisco, Dell/EMC, HP and IBM are rapidly developing private cloud infrastructures as on-premises alternatives to AWS. The goal is to make infrastructure easier to use and manage for customers demanding cloud-like ease of use at public cloud-like prices.

Likewise, storage vendors must do their part to improve ease of use and lower costs. That's because much of the success of AWS has been due to Amazon Simple Storage Service (S3) leading the way. S3 consists of a set of object storage API calls available via the public cloud that enable any application to store and retrieve data objects from AWS.

The third pillar of storage protocols

Hundreds of commercially available applications natively support Amazon Simple Storage Service as a storage repository today, while many on-premises storage products support migrating data to S3 as a secondary storage tier or for backup and disaster recovery purposes. The enormous success of S3 ensures that Amazon maintains the API so that incremental features do not break original capabilities. This API stability means other storage vendors can move beyond just supporting S3 as a back-end tier and actually support the S3 protocol as an alternative front-end ingest service.

I believe this trend will make Amazon Simple Storage Service a de facto storage standard, and it'll become the most popular protocol for object storage on the market. S3 will, when all is said and done, take its rightful place as the third pillar of storage protocols alongside block storage and file storage.

We've seen this happen before.

NFS began in 1984 as a proprietary file-sharing protocol implemented by Sun Microsystems to let Sun client machines share file access across servers. The protocol rapidly became an industry standard due to its popularity, and it is still in use today. NFS remains a primary protocol on modern NAS systems, and most modern object storage devices also support NFS as an alternative access method.


I believe every network-based storage device will also support some version of the Amazon Simple Storage Service protocol before AWS S3 reaches its 20th anniversary.

The key to S3's on-premises future

For Amazon Simple Storage Service to take off as an on-premises storage protocol, storage hardware and software vendors must continue to drive down the economic crossover point, where the capital and operational costs of deploying S3-compatible on-premises storage become less expensive than AWS alternatives. That is, as with the decision to rent or own a home, there will be a point at which it is less expensive to own your storage system.

The three critical factors that drive AWS S3 costs are retention time, frequency of access and quantity of data. S3 is very cost-effective for relatively static data, with prices ranging from $0.03-$0.07 per GB per month depending on the frequency of access and amount of data to be transferred. Where it gets expensive is when a customer exceeds limits on either the frequency of access or the amount of data transferred out of the cloud. On-premises S3-compatible vendors are claiming they can provide storage in the range of $0.01 per GB per month, but that is based on buying 75 TB upfront and amortizing that cost over three years. So it'll be critical that these on-premises vendors continue to drive down the economic crossover point and also provide analytical tools so customers can easily evaluate when it is better to own versus rent storage.
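To make that crossover concrete, here is a minimal rent-versus-own sketch in Python. The storage rates come from the ranges cited above, while the egress charge and amortization figures are illustrative placeholders rather than published price lists, so plug in your own numbers before drawing conclusions.

```python
# Rough rent-vs-own comparison for S3-style object storage.
# Rates are illustrative placeholders based on the ranges cited in the
# article: $0.03-$0.07 per GB per month in the cloud, and roughly
# $0.01 per GB per month on premises when 75 TB is bought upfront and
# amortized over three years. The egress rate is an assumption.

def cloud_monthly_cost(stored_gb, egress_gb, storage_rate=0.03, egress_rate=0.09):
    """Monthly cloud cost: capacity stored plus data transferred out."""
    return stored_gb * storage_rate + egress_gb * egress_rate

def on_prem_monthly_cost(capacity_gb, total_cost_per_gb=0.36, months=36):
    """Amortized monthly cost of capacity bought upfront (~$0.01/GB-month)."""
    return capacity_gb * total_cost_per_gb / months

if __name__ == "__main__":
    capacity_gb = 75 * 1024  # the 75 TB upfront purchase cited above
    for egress_gb in (0, 5_000, 20_000):  # vary how much data is pulled back out monthly
        cloud = cloud_monthly_cost(capacity_gb, egress_gb)
        owned = on_prem_monthly_cost(capacity_gb)
        winner = "own" if owned < cloud else "rent"
        print(f"egress {egress_gb:6d} GB/month: cloud ${cloud:,.0f}, on-prem ${owned:,.0f} -> {winner}")
```

Even this crude model shows why frequency of access and egress volume, not raw capacity, tend to decide where the crossover falls.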

On-premises S3-compatible storage won't fully supplant cloud storage, of course, as there will always be use cases where cloud products are more attractive. These include startup businesses without on-premises data centers, for example, or organizations that need, but have not yet invested in, globally distributed data centers.

S3 support today

In addition to the hundreds of commercially available applications that support Amazon Simple Storage Service natively, I'm seeing customers rewrite in-house applications to support S3. As a result, they are seeking more S3-compatible devices to choose from for on-premises storage. They most likely will not completely abandon AWS, but rather want to build automated workflows into their applications so they can keep sensitive data in-house or eventually offload aging data to AWS. These customers could simplify their operations immensely by supporting only one storage protocol, S3, and simplifying the choice of whether to store data locally or in the cloud.
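This is where standardizing on one protocol pays off in practice: with an S3-compatible target, where an object lands becomes a configuration detail rather than a separate code path. The sketch below assumes a hypothetical on-premises endpoint (storage.internal.example) and placeholder credentials; it relies on the standard boto3 client, which accepts an alternate endpoint_url for S3-compatible systems.

```python
import boto3

def make_s3_client(use_on_prem: bool):
    """Return an S3 client aimed at either AWS or an S3-compatible
    on-premises system; the application logic above it never changes."""
    if use_on_prem:
        # Hypothetical internal endpoint and credentials for an
        # S3-compatible on-premises object store.
        return boto3.client(
            "s3",
            endpoint_url="https://storage.internal.example",
            aws_access_key_id="LOCAL_KEY",
            aws_secret_access_key="LOCAL_SECRET",
        )
    return boto3.client("s3")  # falls back to normal AWS credential resolution

def store_record(sensitive: bool, bucket: str, key: str, body: bytes):
    """Keep sensitive objects in-house; let everything else go to AWS."""
    s3 = make_s3_client(use_on_prem=sensitive)
    s3.put_object(Bucket=bucket, Key=key, Body=body)
```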


Many storage vendors, in the meantime, are starting to support Amazon Simple Storage Service as an ingest protocol at some level. A few of them are even making S3 compatibility a major strategic push.

Cloudian, for instance, is betting big on S3 by touting its HyperStore products as the most S3-compatible of any object storage provider. In fact, the company offers a money-back guarantee that S3 applications will work with its products as seamlessly as they work with AWS.

Spectra Logic, meanwhile, offers both tape- and disk-based archive products behind an S3-compatible gateway. Wouldn't it be great if S3 Glacier, a portion of the Amazon Simple Storage Service protocol, could actually breathe new life into the much maligned tape industry?

A tricky part of a tape-based archive system is getting applications to handle the complex integration needed to support a tape library. The S3 Glacier protocol is a perfect fit for tape-based products: it gives tape archive products access to many more commercial applications than were ever available to tape libraries in the past.
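Part of what makes the fit so natural is that the Glacier retrieval model is already asynchronous: an application requests a restore, polls until the object is staged, then reads it, which mirrors the recall pattern a tape library imposes anyway. Here is a minimal sketch of that request-and-poll flow using standard boto3 calls; the bucket, key and polling interval are placeholders.

```python
import time
import boto3

s3 = boto3.client("s3")

def recall_from_archive(bucket: str, key: str, days_online: int = 7) -> bytes:
    """Request an asynchronous restore of an archived object, then poll
    until it is staged -- the same pattern a tape recall follows."""
    s3.restore_object(
        Bucket=bucket,
        Key=key,
        RestoreRequest={
            "Days": days_online,  # how long the staged copy stays readable
            "GlacierJobParameters": {"Tier": "Standard"},
        },
    )
    while True:
        head = s3.head_object(Bucket=bucket, Key=key)
        # The Restore header reads ongoing-request="true" while staging,
        # then ongoing-request="false" once the object can be read.
        if 'ongoing-request="false"' in head.get("Restore", ""):
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        time.sleep(300)  # staging (or a tape mount) can take minutes to hours
```

An application written against this model never needs to know whether the archive behind the gateway is disk, cloud or a robotic tape library.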


Mainstream Tier 1 primary storage providers should consider getting in the Amazon Simple Storage Service game as well by adding native S3 protocol support to their products. Even if the performance characteristics seem like a mismatch, remember that not long ago it was said that block devices should not have file services and vice versa. Now it is rare to see any Tier 1 storage provider that does not support both block and file access to the same device. To participate in the new era of unified storage, vendors will have to add S3 compatibility to their primary storage products.

Happy 10th anniversary, Amazon AWS! I'm sure your S3 protocol will be around to see not just a 20th anniversary, but a 30th. However, don't be surprised if, in the future, more S3 storage is consumed on-premises than in the cloud, as a wide variety of innovative storage companies capitalize on your initial success.

About the author: 
Jeff Kato is a senior storage analyst at Taneja Group with a focus on converged and hyper-converged infrastructure and primary storage.



Join the conversation


What does the future hold for Amazon Simple Storage Service?
One of the biggest things we’ve struggled with was getting our data into S3 quickly and without breaking the bank. I suspect that many other organizations face the same problem, which is something that Amazon is going to need to deal with sooner rather than later.
My issue with on-site storage is taking disaster recovery into consideration. How quick is the turnaround to get your system back up and running versus cloud-based services? For example, with all the flooding going on in Texas, how many businesses have lost their systems?
Hi Todd,

Thanks for your comment! Here's what our analyst Jeff Kato, the author of this piece, has in reply to your question:

"I’m not sure how many companies in Texas lost their data. But, Just because you have data on-site does not exempt you from having a sound DR strategy. Typically modern storage devices can replicate themselves to another device at another data center of yours. If you wanted you could replicate your on-premises data to the public cloud as well. On-site data centers are going to look and act very much like public clouds in the future. There are even companies that can do full DR services to the cloud– not just backing up the data to the cloud. Each company should perform their own cost-benefit analysis as to whether it better to host their own data on-site (because of performance, cost or regulatory concerns) or go to the cloud."
-Jeff Kato, Sr. Analyst & Consultant, Taneja Group
I guess a lot depends on your company's stability and financial position. Some companies have looked at cloud services and found they are not cost-effective. They have also looked into secondary locations and said the same: too costly. Granted, I would look more seriously into some of these options because I would not want to lose almost 20 years of business data. It comes down to who holds the purse strings and whether they really know the ramifications of losing everything on-site. They could say they have a tape backup, but in case of a fire or flood, the tapes may be damaged or destroyed as well.
Well, Mr. Kato is right about the usefulness of building your own private, S3-compliant storage cloud, because keeping all of your data stored in a public cloud is a mistake. AWS S3 is already the de facto RESTful API for storing data in the cloud (public or private). Cloudian has the most complete implementation of the AWS S3 API, which includes all 51 operations. Cloudian also uses its S3-compliant API as its native API and not as a "connector" to its storage clusters.

Mr. Kato plays fast and loose with his use of protocol and API. NFS and SMB/CIFS are protocols. S3 is an API and not a protocol. All modern object-based storage vendors expose access to their storage using APIs. Everything Cloudian does is exposed through an API. In fact, you can use the AWS S3 SDK to write an app that will work without modification against a Cloudian HyperStore cluster. The future of storage is private-to-public hybrid storage using an S3-compliant API for your applications.
For the people saying disaster recovery is "too costly," I wonder how "costly" losing all their company data would be?
S3 addresses cost concerns by offering multiple tiers of storage, from S3 Standard with 99.999999999% durability, to S3 Infrequent Access (still with 99.999999999% durability), to Reduced Redundancy Storage with 99.99% durability, as well as Glacier with the same 99.999999999% durability for archival storage. Still, the old adage of not putting all of your eggs into one basket holds true here as well.
