At this spring's Storage Networking World conference, one of the most frequently asked questions was "How can I protect the data in my remote offices?"
Two points of agreement emerged from those discussions:
Whole-file backup is not viable across most WAN infrastructures. Even if a user changes only a small part of a file, the entire file is flagged as modified and must be written to tape. Even incremental or differential backups are cost-prohibitive across a WAN, because they too operate on whole files.
Because remote offices typically have no local IT staff, all tape backup maintenance (including tape rotation and cleaning cartridges) must be done by administrative or other non-technical personnel. This is ironic: we rely on the tapes for any recovery effort, yet we trust non-IT staff to manage them. The human factor is the greatest stumbling point in the solution.
The solution is to stop dealing with whole files from the remote offices.
Since replication technology propagates only the bytes that have changed, you can do an initial mirror (think of it as a full backup) and then keep the two copies in sync via replication. Now your data center holds a local copy of the remote data -- so you can back it up locally.
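To make the byte-level idea concrete, here is a minimal sketch (hypothetical Python, not any vendor's actual product): compare the production file and its mirror block by block, and write across the wire only the blocks whose hashes differ. The 4 KB block size is an assumption for illustration.

```python
import hashlib
import os

BLOCK_SIZE = 4096  # granularity of change detection (illustrative assumption)

def changed_blocks(src_path, dst_path):
    """Yield (offset, data) for each block of src that differs from the mirror."""
    with open(src_path, "rb") as src, open(dst_path, "rb") as dst:
        offset = 0
        while True:
            s = src.read(BLOCK_SIZE)
            if not s:
                break
            d = dst.read(BLOCK_SIZE)
            if hashlib.sha256(s).digest() != hashlib.sha256(d).digest():
                yield offset, s
            offset += len(s)

def replicate(src_path, dst_path):
    """Apply only the changed blocks to the mirror; return bytes transferred."""
    sent = 0
    with open(dst_path, "r+b") as dst:
        for offset, data in changed_blocks(src_path, dst_path):
            dst.seek(offset)
            dst.write(data)
            sent += len(data)
        # keep the mirror the same length as the source
        dst.truncate(os.path.getsize(src_path))
    return sent
```

If a user edits one byte of a 10 MB file, only the 4 KB block containing that byte crosses the WAN -- versus the whole 10 MB under a file-based incremental backup.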
Since users aren't actively accessing the second copy of the files, those files are natively closed -- regardless of whether the production files are user documents or SQL/Exchange databases. And you can back them up without backup agents.
Two last gotchas to consider:
How do you do restores? Most backup applications support a redirected restore: even if you backed up the files from FS-TARGET, you can restore them to FS1.
What about the registry and other bare-metal information? Simply configure the Windows 2000/2003 server to do a routine dump of its system state (using the built-in backup utility) to a directory that is protected by the replication software, so the dump is propagated to the data center. If you then need to rebuild a server from scratch, you can install a clean O/S, restore the system state, replicate the data back and ship the unit. And instead of the server reflecting last night's backup, the data is only minutes older than the outage.
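The routine dump described above can be scripted and scheduled. A minimal sketch follows, assuming the built-in ntbackup utility on Windows 2000/2003 (whose "systemstate" option captures the registry, boot files and related components); the directory path and job name are hypothetical -- the key point is that the target directory must be one the replication software protects.

```python
import subprocess

# Hypothetical target -- must be a directory the replication software
# protects, so the .bkf file propagates to the data center.
REPLICATED_DIR = r"D:\Replicated\SystemState"
JOB_NAME = "Nightly system state dump"

def system_state_command(bkf_path, job_name=JOB_NAME):
    """Build the ntbackup invocation for a system state dump.

    ntbackup is the backup utility built into Windows 2000/2003;
    the 'systemstate' keyword backs up the registry, COM+ class
    registration database and boot files in one job.
    """
    return ["ntbackup", "backup", "systemstate",
            "/J", job_name, "/F", bkf_path]

def run_dump():
    """Run the dump; schedule this nightly (e.g. via Task Scheduler)."""
    cmd = system_state_command(REPLICATED_DIR + r"\systemstate.bkf")
    subprocess.run(cmd, check=True)
```

Once the .bkf file lands in the replicated directory, it rides along with the normal byte-level replication and is protected by the data center's local backups like everything else.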
Bottom line -- more and more critical data (including the data that would be most time- and labor-consuming to replace) lives outside the corporate data center, and it needs to be protected.
About the Author: Jason Buffington has been working in the networking industry since 1989, with a majority of that time being focused on data protection. He is a Certified Business Continuity Planner and a Microsoft MCT/MCSE. Jason currently serves as the Director of Business Continuity for NSI Software, enabling high availability and disaster recovery via replication software. He can be reached at email@example.com.