This article can also be found in the Premium Editorial Download "Storage magazine: The benefits of virtual disaster recovery."
Storage and server virtualization make many of the most onerous disaster recovery (DR) tasks relatively easy to execute, while helping to cut overall DR costs.
If your company still lacks a viable disaster recovery strategy, it might be time to start thinking about virtualization. The initial drivers behind server virtualization adoption were improving resource utilization and lowering costs through consolidation, but next-wave adopters have realized that virtualization can also improve availability.
Virtualization turns physical devices into sets of resource pools that are independent of the physical assets they run on. With server virtualization, decoupling operating systems, applications and data from specific physical assets eliminates the economic and operational issues of infrastructure silos -- one of the key ingredients of affordable disaster recovery.
Storage virtualization takes those very same benefits and extends them from servers to the underlying storage domain, bringing IT organizations one step closer to the ideal of a virtualized IT infrastructure. By harnessing the power of virtualization, at both the server and storage level, IT organizations can become more agile in disaster recovery.
Reduce the risk
Improving disaster recovery and business continuity are perennial top-10 IT priorities because companies want to reduce the risk of losing access to systems and data.
The goal of a DR process is to recreate all necessary systems at a second location as quickly and reliably as possible. Unfortunately, for many firms, DR strategies are often cobbled together because there's nothing or no one mandating them, they're too costly or complex, or there's a false belief that existing backup processes are adequate for disaster recovery.
Backup technologies and processes will only take you so far when it comes to a disaster. Tier 1 data (the most critical stuff) makes up approximately 50% of an organization's total primary data. When the Enterprise Strategy Group (ESG) surveyed IT professionals responsible for data protection, 53% said their organization could tolerate one hour or less of downtime before their business suffered revenue loss or some other type of adverse business impact; nearly three-quarters (74%) fell into the less-than-three-hour range. (The results of this survey were published in the ESG research report, 2010 Data Protection Trends, April 2010.) Even under the best conditions, the time it takes to acquire replacement hardware, reinstall operating systems and applications, and recover data -- even from a disk-based copy -- will likely exceed a recovery time objective (RTO) of one to three hours.
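The gap between traditional recovery and a short RTO comes down to simple arithmetic: the recovery steps above add up, and the total is what must beat the RTO. A minimal sketch of that comparison follows; all per-step durations here are hypothetical illustrations chosen for the example, not figures from the ESG survey.

```python
# Hedged sketch: sum estimated durations for a traditional bare-metal
# recovery and compare the total against an RTO target.
# The step names and hour values below are hypothetical assumptions.

RTO_HOURS = 3.0  # upper end of the one-to-three-hour tolerance range

# Hypothetical per-step estimates (in hours) for a traditional restore
recovery_steps = {
    "acquire replacement hardware": 24.0,
    "reinstall OS and applications": 4.0,
    "restore data from disk-based backup": 2.5,
}

total_hours = sum(recovery_steps.values())
meets_rto = total_hours <= RTO_HOURS

print(f"Estimated recovery time: {total_hours:.1f} h (RTO: {RTO_HOURS:.1f} h)")
print("Meets RTO" if meets_rto else "Misses RTO")
```

Even with the hardware step removed (say, standby equipment is on hand), the reinstall and restore steps alone exceed a three-hour RTO in this sketch, which is the point the article makes about backup-only strategies.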
Recovery from a mirror copy of a system is faster than recovering with traditional backup methods, but it's also more expensive and complex. Maintaining identical systems in two locations and synchronizing configuration settings and data copies can be a challenge. This often forces companies to prioritize or "triage" their data, providing greater protection to some tiers than others. ESG research found that tier 2 data comprises 28% of all primary data, and nearly half (47%) of IT organizations we surveyed noted three hours or less of downtime tolerance for tier 2 data. So when costs force a company to give "important" (tier 2) data weaker protection -- or none at all -- compared to "critical" (tier 1) data, it introduces real risk.
This was first published in April 2011