Devising a disaster recovery blueprint
Just as with traditional DR, there isn’t a single blueprint for cloud-based disaster recovery. Every company is unique in the applications it runs, the importance of those applications to its business, and the industry it operates in. Therefore, a cloud disaster recovery plan (aka cloud DR blueprint) is specific and distinctive to each organization.
Triage is the overarching principle used to derive traditional as well as cloud-based DR plans. The process of devising a DR plan starts with identifying and prioritizing applications, services and data, and determining for each one the amount of downtime that’s acceptable before there’s a significant business impact. Priority and required recovery time objectives (RTOs) will then determine the disaster recovery approach.
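The triage process above can be sketched in code. This is an illustrative example only: the application names, RTO values and tier thresholds are assumptions invented for the sketch, not part of any standard DR framework.

```python
# Hypothetical application inventory, prioritized by acceptable
# downtime (RTO). All names and numbers are illustrative.
APPS = [
    {"name": "order-processing", "rto_hours": 1},
    {"name": "email", "rto_hours": 4},
    {"name": "internal-wiki", "rto_hours": 72},
]

def assign_tier(rto_hours):
    """Map an RTO to a recovery approach; thresholds are illustrative."""
    if rto_hours <= 4:
        return "tier-1: hot standby / cloud failover"
    if rto_hours <= 24:
        return "tier-2: restore from cloud backup"
    return "tier-3: best-effort rebuild"

def triage(apps):
    """Return apps sorted by urgency, each with its recovery tier."""
    ranked = sorted(apps, key=lambda a: a["rto_hours"])
    return [(a["name"], a["rto_hours"], assign_tier(a["rto_hours"]))
            for a in ranked]

for name, rto, tier in triage(APPS):
    print(f"{name}: RTO {rto}h -> {tier}")
```

The point of the exercise is the ranking itself: once every application carries an explicit RTO and tier, the choice of DR method for each becomes largely mechanical.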
Identifying critical resources and recovery methods is the most important part of this process, since you need to ensure that all critical apps and data are included in your blueprint. By the same token, to control costs and to ensure a speedy, focused recovery when the plan is executed, you want to leave out irrelevant applications and data. The more focused a DR plan is, the more likely you’ll be able to test it periodically and execute it within the defined objectives.
With applications identified and prioritized, and RTOs defined, you can then determine the best and most cost-effective disaster recovery method for each.
Cloud-based disaster recovery options
Managed applications and managed DR. An increasingly popular option is to put both primary production and disaster recovery instances into the cloud and have both handled by a managed service provider (MSP). By doing this you’re reaping all the benefits of cloud computing, from usage-based cost to eliminating on-premises infrastructure. Instead of doing it yourself, you’re delegating DR to the cloud or managed service provider. The choice of service provider and the process of negotiating appropriate service-level agreements (SLAs) are of utmost importance. Because you’re handing over control to the service provider, you need to be absolutely certain it’s able to deliver uninterrupted service within the defined SLAs for both primary and DR instances. “The relevance of service-level agreements with a cloud provider cannot be overstated; with SLAs you’re negotiating access to your applications,” said Greg Schulz, founder and senior analyst at Stillwater, Minn.-based StorageIO Group.
A pure cloud play is becoming increasingly popular for email and some other business applications, such as customer relationship management (CRM), where Salesforce.com has been a pioneer and is now leading the cloud-based CRM market.
Back up to and restore from the cloud. Applications and data remain on-premises in this approach, with data being backed up into the cloud and restored onto on-premises hardware when a disaster occurs. In other words, the backup in the cloud becomes a substitute for tape-based off-site backups.
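The essence of this pattern is a round trip: upload a backup, then later pull it back and verify it before trusting it. The sketch below is a minimal illustration in which a plain dictionary stands in for the cloud object store; a real implementation would call a provider’s SDK instead, and the key name is invented for the example.

```python
import hashlib

# Stand-in for a cloud bucket: object key -> bytes. A real setup
# would use a cloud provider's storage API here.
cloud_store = {}

def backup(key, data):
    """Upload data and return a checksum to verify later restores."""
    cloud_store[key] = data
    return hashlib.sha256(data).hexdigest()

def restore(key, expected_checksum):
    """Download data and verify its integrity before handing it back."""
    data = cloud_store[key]
    if hashlib.sha256(data).hexdigest() != expected_checksum:
        raise ValueError(f"checksum mismatch for {key}")
    return data

checksum = backup("backups/db-2011-05.dump", b"... database contents ...")
restored = restore("backups/db-2011-05.dump", checksum)
```

The verification step matters more than the upload: a cloud backup you cannot restore intact is no better than a failed tape, which is why restore testing belongs in any DR plan built on this approach.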
When contemplating cloud-based backup and restore, it’s crucial to clearly understand both the backup and the more problematic restore aspects. Backing up into the cloud is relatively straightforward, and backup application vendors have been extending their backup suites with options to back up directly to popular cloud service providers such as AT&T, Amazon, Microsoft Corp., Nirvanix Inc. and Rackspace. “Our cloud connector moves data deduped, compressed and encrypted into the cloud, and allows setting retention times of data in the cloud,” said David Ngo, director of engineering alliances at CommVault Systems Inc., who aptly summarized the features you should look for in products that move data into the cloud. Likewise, cloud gateways such as the Cirtas Bluejet Cloud Storage Controller, F5 ARX Cloud Extender, Nasuni Filer, Riverbed Whitewater and TwinStrata CloudArray can be used to move data into the cloud. They straddle on-premises and cloud storage, and keep on-premises data and data in the cloud in sync.
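Two of the data-reduction features Ngo mentions, deduplication and compression, can be illustrated with a short sketch: split data into fixed-size chunks, store each unique chunk only once (identified by its content hash), and compress chunks before they leave the premises. This is an invented illustration of the general technique, not how any named vendor’s product works internally, and encryption is omitted for brevity since it needs a library beyond the standard one.

```python
import hashlib
import zlib

CHUNK_SIZE = 4096
# Content hash -> compressed chunk; stands in for cloud-side storage.
chunk_store = {}

def upload(data):
    """Chunk the data; store each unique chunk once, compressed."""
    manifest = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in chunk_store:      # dedup: skip chunks already stored
            chunk_store[digest] = zlib.compress(chunk)
        manifest.append(digest)
    return manifest

def download(manifest):
    """Reassemble the original data from its chunk manifest."""
    return b"".join(zlib.decompress(chunk_store[d]) for d in manifest)

data = b"A" * 10000 + b"B" * 10000       # highly redundant sample data
manifest = upload(data)
assert download(manifest) == data
print(len(manifest), "chunk references,", len(chunk_store), "unique chunks")
```

On redundant data like this sample, fewer unique chunks are stored than are referenced, which is exactly why dedupe and compression matter when bandwidth to the cloud is the bottleneck.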
This was first published in May 2011