Many of the problems confronting disaster recovery planning efforts occur both in government and commercial settings. Two of the more frequently cited are the distraction (and sometimes outright denial) of senior management when it comes to funding disaster recovery plans, and the confusion surrounding strategies for data protection.
Statistics underscore the first problem. Even in companies that had disaster recovery plans in place before or immediately after Sept. 11, interest in keeping capabilities current with business change appears to decline steadily over time. An Ernst & Young survey of 459 companies conducted in Nov. 2001 showed that only 245 actually had some provisions for business continuity, and of that number, only 120 had ever conducted a test to ensure that their plans were viable. By September 2002, multiple polls confirmed that the situation in Corporate America had actually worsened: Fewer than half of the companies surveyed had DR plans, an even smaller percentage of those with plans had tested them, and nearly everyone cited budgetary cutbacks as the culprit in their apparent "backsliding from the faith."
In spite of this worsening trend, data protection is increasingly regarded as the most critical component of a recovery plan. Second only to trained personnel, data is the asset an organization can least readily replace. Disaster itself is increasingly defined as an unplanned interruption in access to data, whether caused by corruption of the data or by loss of the means to reach it.
Even with the recognition of the primacy of data in effective recovery, planners still run afoul of the second issue: confusion over what constitutes the best means to protect data.
Developing a viable data protection strategy continues to be hampered by three obstacles. First, the data generated by systems is not self-describing, which has made the identification of what data to protect a daunting task in itself. Many disaster recovery planning efforts are stymied early on by the need to conduct an exhaustive analysis of what data is being stored and what business processes it supports as a precursor to sizing an appropriate data protection solution.
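Because stored data is not self-describing, the analysis the author describes often begins with a simple inventory that maps storage locations to the business processes they support. The sketch below is illustrative only: the directory-to-process mapping is hypothetical, and a real inventory would be built from interviews and application documentation rather than a hard-coded table.

```python
import os

# Hypothetical mapping of storage locations to the business
# processes they support; a real mapping comes from analysis,
# not guesswork.
PROCESS_MAP = {
    "/data/orders": "order fulfillment",
    "/data/ledger": "financial reporting",
    "/home/shared": "general office work",
}

def inventory(root):
    """Summarize bytes stored under each mapped location,
    so protection can be sized per business process."""
    totals = {}
    for path, process in PROCESS_MAP.items():
        size = 0
        target = os.path.join(root, path.lstrip("/"))
        for dirpath, _dirs, files in os.walk(target):
            for name in files:
                try:
                    size += os.path.getsize(os.path.join(dirpath, name))
                except OSError:
                    pass  # file vanished or unreadable; skip it
        totals[process] = size
    return totals
```

Even a crude tally like this lets a planner rank data sets by the criticality of the processes they feed, rather than protecting everything at the same (expensive) level.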
Second, the storage industry has, for many years, fostered a "polarized view" of data protection strategies, encouraging consumers to see their options narrowly as a choice between either tape backup or disk mirroring. Tape has the strengths of being less expensive, owing to lower media costs, and more robust, because it offers flexibility in restoring data to consolidated platforms. Disk mirroring has the advantage of restoring data access quickly after an unplanned interruption (albeit at much higher cost than tape). Thus, mirroring tends to be the darling of the financial industry, as many post-Sept.-11 recoveries leveraging "mirrors across the Hudson River" demonstrated.
The third obstacle to data protection planning has been, quite simply, the different meanings and interpretations assigned by different vendors to the terminology and language of DR. Within the world of data copy (the essential function provided both by tape backup and disk mirroring), numerous methodologies exist, ranging from "full volume" copies to "bare metal images" to "snapshots" to "incremental copies" to "journals." The jargon has expanded to include such terms as "mirror images," "ghost volumes," "LAN-free backups," "server-free backups," "pointer backups," and symmetrical and asymmetrical mirrors. Differences in the interpretations and definitions that vendors assign to these terms have many planners reporting that it is increasingly difficult to make apples-to-apples comparisons of different products and strategies.
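Whatever labels vendors attach to them, the core distinction between a "full volume" copy and an "incremental" copy is simple to state in code. The sketch below uses timestamp comparison as the change detector, which is a deliberate simplification of what commercial backup products actually do (most track changes through catalogs, archive bits or block-level maps); the function names are mine, not any product's.

```python
import os
import shutil

def full_copy(source, dest):
    """Copy every file: the 'full volume' approach.
    Slow and space-hungry, but a restore needs only this one copy."""
    shutil.copytree(source, dest, dirs_exist_ok=True)

def incremental_copy(source, dest, since):
    """Copy only files modified after 'since' (a Unix timestamp):
    the 'incremental' approach. Cheap per run, but a restore must
    replay the last full copy plus every later increment."""
    copied = []
    for dirpath, _dirs, files in os.walk(source):
        for name in files:
            src = os.path.join(dirpath, name)
            if os.path.getmtime(src) > since:
                rel = os.path.relpath(src, source)
                target = os.path.join(dest, rel)
                os.makedirs(os.path.dirname(target), exist_ok=True)
                shutil.copy2(src, target)  # preserves timestamps
                copied.append(rel)
    return copied
```

The trade-off visible here, restore simplicity versus copy-time cost, is the real question hiding under the jargon, and it is the comparison planners should press vendors to answer in plain terms.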
Compounding these issues is the impact of rapid technology change and the growing inefficacy of strategies for data protection that are "bolted on" after applications and systems have been built. For example, to achieve greater operational and management efficiency, many companies are opting to virtualize their storage infrastructure by deploying software engines to create logical volumes from aggregations of physical storage devices or logical unit numbers (LUNs) describing disk array partitions. The problem that can arise without careful testing is a conflict between the operation of virtualization approaches and backup/restore software. A "write penalty" can accrue that slows data restoration from tape to a virtual volume to a downright crawl.
Going forward, planning and provisioning for the protection and recovery of data is an activity that needs to be undertaken as part of normal application and systems development. Rapid recovery in complex, heterogeneous IT environments can be facilitated, for example, by designing applications and data for portability at the outset to other "hosting" platforms through the considered selection of data formats, file systems, middleware and other design components.
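The portability the author recommends can start with something as mundane as the choice of data format. The toy contrast below, with hypothetical record fields, shows the difference between a platform-bound binary layout that a recovery host must reproduce byte-for-byte and a self-describing text format that any hosting platform can parse.

```python
import json
import struct

record = {"customer": "ACME", "balance_cents": 125000}

# Platform-bound: raw struct packing fixes byte order and field
# layout; a restore host must know this exact recipe to read it.
opaque = struct.pack("<10sq",
                     record["customer"].encode("ascii"),
                     record["balance_cents"])

# Portable: a self-describing text encoding that survives a move
# to a different "hosting" platform, at the cost of extra bytes.
portable = json.dumps(record)

# Round trip proves nothing was lost in the portable form.
restored = json.loads(portable)
assert restored == record
```

Choosing the portable form at design time is exactly the kind of decision that makes recovery to a dissimilar platform feasible later, without bolting anything on.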
Potentially one of the most significant events since Sept. 11 for disaster recovery is an evolving consortium of vendors and end users dedicated to developing reference solutions for data recovery and providing an end-user forum for discussing the efficacy (and deficits) of each solution set. Called the Enhanced Backup Solutions Initiative (EBSI), the burgeoning organization was spearheaded last spring by Quantum Corporation, Legato Systems and a number of other large and small players in the backup and mirroring solutions market. The vendor ranks are growing, but the ultimate fulfillment of the organization's mission will only be possible if end users become involved. See www.enhancedbackup.com for more information.
About the author: Jon William Toigo has authored hundreds of articles on storage and technology and authors the monthly SearchStorage.com "Toigo's Take on Storage" expert column. He is also a frequent site contributor on the subjects of storage management, disaster recovery and enterprise storage. Toigo has authored a number of storage books, including Disaster Recovery Planning: Preparing for the Unthinkable, 3/e.
This was first published in October 2002