How much should we change our existing (time-proven but perhaps dated) technologies and processes to handle a wide range of high availability and disaster recovery requirements?
For example, our customers' disaster recovery needs are currently no tighter than a 24-hour RPO and a 72-hour RTO. (Most applications require only a 14-day, 30-day or even longer RTO.) We could theoretically meet these needs with tape backup and recovery processes. Is this the most effective and cost-effective way to do it?
However, in my experience, backup and restore is not going to meet most critical applications' availability targets. Have you ever run a DR drill to see how long it would actually take to get your data and applications ready to go at the new site?
Restores take longer than backups, and most enterprises have fewer tape drives at their DR sites, which means that restores there will take even longer. A big lesson learned in NYC after 9/11 was that most companies did not have enough tape drives to complete their restores in a timely manner.
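The effect of drive count on restore time is easy to estimate on the back of an envelope. The sketch below uses purely illustrative numbers (10 TB of data, 30 MB/s per drive); they are assumptions, not figures from any vendor, and real restores run longer because of tape mounts, seeks and verification.

```python
# Back-of-envelope restore-time estimate.
# All numbers here are illustrative assumptions, not vendor specs.

def restore_hours(data_tb: float, drives: int, mb_per_sec: float) -> float:
    """Hours to restore data_tb terabytes across `drives` tape drives,
    each streaming at mb_per_sec MB/s. Ignores mount/seek/verify
    overhead, which lengthens real restores."""
    total_mb = data_tb * 1_000_000
    return total_mb / (drives * mb_per_sec) / 3600

# 10 TB restored at the primary site with 8 drives at 30 MB/s each:
primary = restore_hours(10, drives=8, mb_per_sec=30)
# The same 10 TB at a DR site that has only 2 drives:
dr_site = restore_hours(10, drives=2, mb_per_sec=30)

print(f"primary: {primary:.1f} h, DR site: {dr_site:.1f} h")
```

Halving or quartering the drive count scales the restore time up by the same factor, which is exactly the trap the 9/11 experience exposed.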
Remember, too, that it's more than just downtime. Most likely, your disaster will occur sometime after backups have completed. The result is sure to be missing data. Data you thought had been committed is, in fact, lost because it was never backed up. How much data can your enterprise afford to lose?
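The worst-case loss window can also be put in numbers. Assuming a simple periodic backup schedule (a hypothetical nightly backup every 24 hours that takes 3 hours to complete, and a copy that is only usable once the backup finishes), the effective RPO is worse than the backup interval alone:

```python
# Worst-case data-loss window (effective RPO) under periodic backups.
# The schedule below (24 h interval, 3 h backup run) is an illustrative
# assumption, not a recommendation.

def worst_case_rpo_hours(interval_h: float, backup_duration_h: float) -> float:
    """A disaster striking just before a backup completes forces
    recovery from the *previous* backup, so the loss window is the
    backup interval plus the in-flight backup's duration."""
    return interval_h + backup_duration_h

print(worst_case_rpo_hours(24, 3))  # nightly backup: up to 27 h of data lost
```

In other words, a "24-hour" backup cycle can leave you missing more than a day of committed data if the disaster lands at the wrong moment.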
Some organizations replicate data from their most critical servers but continue to use backups for the rest.
I cannot tell you how long it will take to restore your critical systems. I can tell you that unless you've tested things, it will take longer than you think, and it probably will even if you HAVE tested things.
Evan L. Marcus
Editor's note: Do you agree with this expert's response? If you have more to share, post it in our Administrator Central discussion forum.
This was first published in November 2002