Introducing disk-to-disk-to-disaster recovery
Let's simplify how we think about storage and the data we save.
There are two types of storage and data: primary, which is created and actively used; and protected, which is a copy or migration of primary data placed into the "data protection continuum." Concepts such as information lifecycle management haven't had much uptake yet. The idea is sound, but there's too much perceived and actual risk in applying it on the primary side of the equation. On the protected side, however, it makes sense. And that's where disk-to-disk-to-disaster recovery (3DR) enters the picture.
When you think about your protected data, 3DR is the construct to consider. Forget all of your assumptions and what you do today. It doesn't work anyway, so bear with me for a moment.
First, answer these simple questions: How many identical copies of the same data set do you need for the worst-case scenario, i.e., losing both your primary and secondary sites? I'd say you need four copies on four different tapes in four different locations.
Do you believe that restoring at the local site off a disk-based system is better or worse than chewing tinfoil while rubbing a cheese grater on your scalp?
If you agree with my answer to the first question -- or at least didn't say that you enjoy having 11 million copies of the same data on 17 million tapes -- and you find local recovery from disk to be a better option, then 3DR is for you.
Data on primary storage (I don't care which primary disk it's from; primary is primary, whether expensive, fast, slow or cheap) is typically ingested into the protection continuum via the same process you have today: backup. Your Tier-1 protection system, the one you back up to, is disk-based: cheap block storage, a file server or a virtual tape library (VTL). Until the backup guys change things, I prefer the VTL route. The objective of Tier-1 is to keep everything forever. Technologies such as data deduplication therefore become critical. Eliminating all of the duplicate data the backup process creates, let alone all the duplicates you have on primary disk, means that for your 100 terabytes (TB) of primary storage, you probably need only 5 TB to 20 TB of actual protected capacity on Tier-1. Add normal growth from new data and, with headroom, a 1-to-1 ratio of primary to Tier-1 protected storage is about right once deduplication is involved. This stuff has to be cheap so you can keep all of your (unique) backup data there forever, effectively eliminating any recovery outside the local site unless all hell breaks loose.
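The capacity arithmetic above can be sketched in a few lines. This is an illustrative back-of-the-envelope calculation, not a sizing tool; the function name and the 5:1 and 20:1 ratios are just the range the column cites for backup-stream deduplication.

```python
def tier1_capacity_tb(primary_tb: float, dedupe_ratio: float) -> float:
    """Rough Tier-1 protected capacity after deduplication.

    primary_tb: amount of primary data being protected, in TB
    dedupe_ratio: reduction ratio, e.g. 20 for 20:1
    """
    return primary_tb / dedupe_ratio

# 100 TB of primary at the cited dedupe range of 5:1 to 20:1
best_case = tier1_capacity_tb(100, 20)          # 5.0 TB
conservative = tier1_capacity_tb(100, 5)        # 20.0 TB
```

In other words, even the conservative end of the range keeps Tier-1 close to a 1-to-1 ratio with primary once you account for retention and growth.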
Tier-1 replicates to Tier-2, which is an exact copy of Tier-1, only cheaper (because we'll rarely access it, it can be the slowest disk out there). This is feasible because the dedupe technology sends only unique data over the wire. It can be asynchronous replication, so who cares how big the pipe is? In addition, several Tier-1 systems can replicate to one big Tier-2; therefore, dedupe should also occur at Tier-2, because data that's unique at one site may not be unique versus another site. This is our disk-based DR tier. From Tier-2, you spin to tape based on the policy you set. However, the system or user needs to be smart enough to know when four copies of the same data set have been captured on four tapes and shipped to four places -- preferably four different Iron Mountain [Inc.] facilities. After that, you never have to back up that data set to tape again.
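The "smart enough" check above -- four copies, four tapes, four locations -- amounts to a simple policy test. Here's a minimal sketch; the names (`TapeCopy`, `fully_vaulted`, `REQUIRED_COPIES`) are hypothetical, not any real product's API.

```python
from dataclasses import dataclass

REQUIRED_COPIES = 4  # four tapes in four different vault locations


@dataclass(frozen=True)
class TapeCopy:
    """One recorded tape copy of a data set (illustrative only)."""
    dataset_id: str
    tape_id: str
    location: str


def fully_vaulted(copies: list[TapeCopy], dataset_id: str) -> bool:
    """True once a data set exists on at least REQUIRED_COPIES distinct
    tapes spread across at least REQUIRED_COPIES distinct locations --
    the point at which it never needs to go to tape again."""
    relevant = [c for c in copies if c.dataset_id == dataset_id]
    distinct_tapes = {c.tape_id for c in relevant}
    distinct_sites = {c.location for c in relevant}
    return (len(distinct_tapes) >= REQUIRED_COPIES
            and len(distinct_sites) >= REQUIRED_COPIES)
```

Note the check counts distinct tapes and distinct locations separately: four copies on four tapes in the same vault would not satisfy the policy.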
What does it all mean? It means that 99.8% of all of your recoveries will occur on local Tier-1 protected disk. You can also have a legitimate DR site where you actually recover things. It means you can probably eliminate 95% of all the tape media you normally buy each year, which will most likely pay for everything you'll need to pull this off.
The problem, my friends, is the way we think. "That's the way we do it" isn't a good answer anymore. The technology exists. It's our thought processes that need to be fixed.
This column by Mr. Duplessie first appeared in Storage magazine's June 2006 issue.
About the author: Steve Duplessie is the founder and senior analyst for the Enterprise Strategy Group in Milford, Mass.