
AWS storage tackles more traditional enterprise capabilities

Amazon's latest wave of enterprise storage enhancements includes a replication SLA for S3, faster snapshot restores for EBS and data deduplication for FSx for Windows File Server.

AWS is working to give cloud customers capabilities they expect from on-premises enterprise storage.

AWS this week rolled out a series of enhancements designed to improve performance and reduce costs. While traditional IT vendors often copy features from public cloud storage these days, AWS is using on-premises storage for inspiration.


Henry Baltazar, research director of storage at 451 Research, said the AWS storage improvements ahead of the company's December re:Invent conference are an attempt to "make cloud a little bit more like enterprise storage going forward."

For instance, the new Replication Time Control for Amazon's Simple Storage Service (S3) and Fast Snapshot Restore for Elastic Block Store (EBS) should help bring AWS in line with traditional storage systems and modern backup appliances, according to Andrew Smith, a research manager with IDC's enterprise infrastructure practice.

"For single files/object, there may not be much difference in terms of performance of on-prem versus cloud. But as data volumes grow, restoring and replicating become increasingly complex and time-consuming tasks," Smith said.

Amazon S3's new Replication Time Control gives customers a service-level agreement (SLA), metrics to monitor replication time, and event tracking to keep tabs on object replications that deviate from the SLA. The feature is designed to replicate most objects within seconds, 99% within five minutes, and 99.99% within 15 minutes. The associated SLA provides billing credits based on the percentage of objects replicated within 15 minutes: a 100% credit if 95% or fewer of objects are replicated on time, a 25% credit for 95% to 98%, and a 10% credit for 98% to 99.9%. Pricing is 1.5 cents per GB replicated.
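Replication Time Control is turned on per replication rule in a bucket's replication configuration. A minimal boto3 sketch, using hypothetical bucket names and a hypothetical IAM role, might look like this (versioning must already be enabled on both buckets):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names and replication role for illustration only.
s3.put_bucket_replication(
    Bucket="example-source-bucket",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "rtc-rule",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": ""},  # empty prefix = all objects
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::example-destination-bucket",
                    # Replication Time Control: commit to the 15-minute window.
                    "ReplicationTime": {"Status": "Enabled", "Time": {"Minutes": 15}},
                    # Replication metrics and events for objects that miss
                    # the threshold.
                    "Metrics": {"Status": "Enabled", "EventThreshold": {"Minutes": 15}},
                },
            }
        ],
    },
)
```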

Bill Vass, vice president of engineering at AWS, said financial services customers, which often replicate data between coasts, asked for the guarantee. S3 by default stores copies of the data in three locations 10 to 60 kilometers apart, but customers in regulated industries can require separation of hundreds or thousands of kilometers, Vass noted.

AWS storage customer requests

Vass said Amazon set a goal of making EBS snapshot restores 10 times faster. Users can enable Fast Snapshot Restore (FSR) on a per-availability zone (AZ) basis for new and existing snapshots, to guard against volume failures, data corruption and other system problems. They pay 75 cents for each hour that FSR is enabled for a snapshot in a particular AZ.

EBS provides persistent storage for databases and other applications running on Amazon's Elastic Compute Cloud (EC2). Ordinary EBS snapshots use "lazy loading" to restore data to volumes, and if a volume is accessed where the data is not loaded, the application encounters higher than normal latency, according to an AWS spokesperson. Customers with latency-sensitive applications often pre-warm data from a snapshot into an EBS volume, but the process can be costly and time consuming, the spokesperson said. The new FSR feature removes the need to pre-warm data.
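Enabling FSR amounts to one API call per snapshot, scoped to the availability zones where fast restores are needed. A brief boto3 sketch, with a hypothetical snapshot ID and zones:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Hypothetical snapshot ID and availability zones for illustration only.
response = ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    SourceSnapshotIds=["snap-0123456789abcdef0"],
)

# FSR passes through 'enabling' and 'optimizing' states before volumes
# created from the snapshot deliver full performance on first access.
for item in response["Successful"]:
    print(item["SnapshotId"], item["AvailabilityZone"], item["State"])
```

Note that the hourly FSR charge accrues per snapshot, per AZ, for as long as the feature stays enabled.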

"Block enhancements are important, as AWS has not been successful in moving mission-critical workloads that typically run on block storage into the cloud, and this is a step," said Dave Vellante, chief analyst at Wikibon. "But it still has a ways to go."

Filling AWS storage holes

Vellante said AWS "has had some huge holes in its portfolio" and trails on-premises stalwarts such as NetApp and Dell EMC in hardened storage functionality. "But for cloud, it's best in class," he said.

Newly added data deduplication for Amazon FSx for Windows File Server will give customers access to technology that has long been a staple of enterprise storage and backup products. AWS claims users can expect space savings of about 50% for typical workloads, although actual reduction rates will vary by use case.

"Deduplication is a proven mechanism for optimizing backup performance and lowering costs," said Christophe Bertrand, a senior analyst at Enterprise Strategy Group. "It is typically found on appliances or on-premises environments, and while it can be deployed in a hyper-scale environment, having this native option is a plus for customers."

AWS also added enterprise features to FSx for Windows File Server, the NTFS/SMB-compatible file system that launched in 2018. FSx users will be able to create file systems that span multiple AWS availability zones, so they won't need to set up or manage replication across AZs.

Users with more limited needs could set up a single-AZ file system as small as 32 GiB (34 GB), in comparison to the prior minimum of 300 GiB (322 GB). The updated FSx for Windows File Server also supports continuously available file shares for Microsoft SQL Server.
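Multi-AZ deployment is chosen when the file system is created. A hedged boto3 sketch, with hypothetical subnet, security group and directory IDs, might look like this:

```python
import boto3

fsx = boto3.client("fsx")

# Hypothetical subnet, security group and directory IDs for illustration only.
fsx.create_file_system(
    FileSystemType="WINDOWS",
    StorageCapacity=300,  # GiB; single-AZ file systems can now go as low as 32
    SubnetIds=["subnet-aaaa1111", "subnet-bbbb2222"],  # one subnet per AZ
    SecurityGroupIds=["sg-0123456789abcdef0"],
    WindowsConfiguration={
        "DeploymentType": "MULTI_AZ_1",  # standby file server in the second AZ
        "PreferredSubnetId": "subnet-aaaa1111",
        "ThroughputCapacity": 8,  # MBps
        "ActiveDirectoryId": "d-1234567890",
    },
)
```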

File storage demand

"The FSx for Windows is interesting [because] there are probably more Windows workloads running on AWS than even in the Microsoft cloud," Vellante said. He said Amazon's file storage improvements are especially important "because that's where most customer data lives."

Vass claimed that AWS has seen triple-digit growth of its file system products, with many customers undertaking lift-and-shift projects. He said about 70% of their file data is cold, so AWS added options to move data to colder storage.

FSx for Windows File Server initially launched on solid-state storage, but Vass said AWS plans to add a cheaper option on hard disk drives for customers that don't require flash performance.

Amazon recently made available a spinning-disk option for its NFS-based Elastic File System (EFS) that launched in June 2016 for EC2 workloads. EFS became available in four new AWS regions this week, with the addition of Europe (Stockholm), Middle East (Bahrain), South America (São Paulo) and Asia Pacific (Hong Kong).

One of Amazon's older products, the AWS Storage Gateway, added a high availability (HA) option for VMware environments. Customers can run the AWS gateway software in virtual machines on their own on-premises hardware or buy a preloaded Dell gateway appliance to create a local cache for high-speed access to data they have stored, backed up or archived in the cloud. The new VMware HA option enables automatic failover if one of the local hardware devices fails.

"When they first started with the gateway, it was more for secondary type of use cases, like backup and other things that weren't as valuable," said Henry Baltazar, research director of storage at 451 Research. "But now that the on-prem access is more crucial, the availability is important to have."

Amazon is giving AWS Storage Gateway customers control over the scheduling of software updates, rather than having AWS install them automatically. That's another common enterprise capability.
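Update scheduling maps to the Storage Gateway maintenance window API. A minimal boto3 sketch, with a hypothetical gateway ARN, that pins updates to Sundays at 2 a.m. gateway-local time:

```python
import boto3

sgw = boto3.client("storagegateway")

# Hypothetical gateway ARN for illustration only.
sgw.update_maintenance_start_time(
    GatewayARN="arn:aws:storagegateway:us-east-1:123456789012:gateway/sgw-12A3456B",
    DayOfWeek=0,  # 0 = Sunday
    HourOfDay=2,
    MinuteOfHour=0,
)
```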

Storage Gateway users who enable Amazon CloudWatch integration will be able to monitor cache utilization, access patterns, throughput and I/O metrics. AWS also boosted read performance when the product is used as a file gateway or a virtual tape library (VTL).
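With the integration enabled, gateway metrics appear in the AWS/StorageGateway CloudWatch namespace and can be queried like any other metric. A sketch, assuming hypothetical gateway ID and name dimension values:

```python
from datetime import datetime, timedelta, timezone

import boto3

cw = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Hypothetical gateway ID/name for illustration only; gateway-level metrics
# are published with these dimensions.
stats = cw.get_metric_statistics(
    Namespace="AWS/StorageGateway",
    MetricName="CachePercentUsed",
    Dimensions=[
        {"Name": "GatewayId", "Value": "sgw-12A3456B"},
        {"Name": "GatewayName", "Value": "example-gateway"},
    ],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,  # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```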

AWS also lowered the price of its DataSync service, which launched in 2018 for high-volume data transfers, cutting the per-GB price from 4 cents to 1.25 cents.

"Announcements like this are going to continue to ramp up the pressure for everybody to add more capabilities," Baltazar said. "And I think that's how the market's going to continue evolve."

