This article can also be found in the Premium Editorial Download "Storage magazine: Comparing EMC Symmetrix DMX-3 vs. Hitachi Data Systems USP1100."
If you fail to plan, you plan to fail
A DR plan represents an organization's detailed roadmap of where to go, what to do and when to do it in the event of a disaster. It should incorporate actions that need to be performed before, during and after a disaster is declared. Among the more basic elements are defining the criteria under which a disaster is declared, who can declare it and how individuals are notified. The Gulf hurricane experiences reinforced the challenge and importance of communications, and a good plan should include contingencies; you can't assume e-mail, VoIP or even cell phone service is available.
We know that processes and procedures need to be documented, but we also know that most people hate to do it. Even the most carefully crafted DR plans will become useless without proper attention. DR needs to be baked into the standard change management process so that whenever systems are modified, software is patched or additional storage is assigned, the impact on DR is reviewed and the plan revised accordingly. Likewise, when reorganizations occur, the DR plan must be revisited.
It's clear that double-digit data growth rates dramatically impact the ability to recover within targeted time constraints, but application complexity and interdependence are an often-overlooked factor with a major impact on recoverability. Today, major applications are spread across multiple servers and architectures. It's not uncommon for a mainframe application to exchange data with distributed applications; if those components are recovered to different points in time, the restored environment can be left inconsistent and unusable.
This situation can be avoided by first understanding the interdependencies among applications and then applying the appropriate data protection approach. The method could be the use of split mirror/replication technology featuring consistency groups that encompass the interdependent elements, or it might be continuous data protection (CDP) technology that can ensure highly granular, synchronized time-based rollback.
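The consistency-group idea can be illustrated with a toy sketch. The `Volume`, `JournalEntry`, and `rollback_group` names below are hypothetical, not any vendor's API: each volume keeps a CDP-style undo journal of prior block contents, and the group rollback restores every interdependent volume to the same instant so no volume is recovered "ahead" of its peers.

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    ts: float        # timestamp of the write being journaled
    block: int       # block address that was overwritten
    old_value: bytes # contents before the write (b"" if block was new)

class Volume:
    """Toy volume with a CDP-style undo journal (illustration only)."""
    def __init__(self):
        self.blocks = {}
        self.journal = []  # JournalEntry list, ordered by write time

    def write(self, ts, block, value):
        # Journal the prior contents before overwriting, so any
        # point in time can be reconstructed later.
        self.journal.append(JournalEntry(ts, block, self.blocks.get(block, b"")))
        self.blocks[block] = value

    def rollback_to(self, ts):
        # Undo every write newer than ts, most recent first.
        while self.journal and self.journal[-1].ts > ts:
            entry = self.journal.pop()
            if entry.old_value == b"":
                self.blocks.pop(entry.block, None)
            else:
                self.blocks[entry.block] = entry.old_value

def rollback_group(volumes, ts):
    """Roll every volume in the consistency group back to the same instant."""
    for v in volumes:
        v.rollback_to(ts)

# A database volume and its log volume are interdependent: rolling them
# back together keeps committed data and log records in step.
db, log = Volume(), Volume()
db.write(1.0, 0, b"row-v1"); log.write(1.1, 0, b"commit-1")
db.write(2.0, 0, b"row-v2"); log.write(2.1, 0, b"commit-2")
rollback_group([db, log], 1.5)  # both volumes now reflect time 1.5
```

Rolling back each volume independently to "roughly" the same time is exactly what consistency groups exist to prevent; the group operation guarantees a single, shared recovery point.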
This was first published in January 2007