Tip

How to determine acceptable data loss

What you will learn from this tip: How to create a good disaster recovery plan, taking into account your recovery point objective, your recovery time objective and the distance between your primary storage site and the recovery site.


When creating a disaster recovery plan, the first thing you need to do is determine your recovery point objective (RPO), either for the entire environment or for critical applications and data. By this I mean: how much data can you tolerate losing or not having access to? If you can tolerate the loss of 10 minutes of data, then your RPO is 10 minutes. If you cannot tolerate any loss of data, then your RPO is zero. So, for example, with an RPO of 20 minutes and asynchronous mirroring, the copy of your data at the recovery site should be no more than 20 minutes behind.
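To make that arithmetic concrete, here is a minimal sketch of an RPO check. The 20-minute figure comes from the example above; the function and variable names are illustrative and not tied to any particular replication product.

from datetime import datetime, timedelta

# Illustrative RPO of 20 minutes, as in the example above.
rpo = timedelta(minutes=20)

def within_rpo(last_replicated_at: datetime, now: datetime) -> bool:
    """True if the newest copy at the recovery site is no older than the RPO."""
    return (now - last_replicated_at) <= rpo

# Example: the last delta was shipped 12 minutes ago, so the RPO is still met.
print(within_rpo(datetime(2004, 8, 1, 9, 48), datetime(2004, 8, 1, 10, 0)))  # True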

Second, you need to figure out your recovery time objective (RTO), which is how quickly you need to regain access to your data. Note that your RTO and RPO do not have to be the same, and they may differ by application.
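Because the objectives can differ by application, it helps to record them side by side. The sketch below is a hypothetical inventory with made-up application names and numbers, not figures from any real environment.

# Hypothetical per-application objectives, in minutes.
objectives = {
    "order_entry":    {"rpo_min": 0,    "rto_min": 30},    # no data loss tolerated
    "email":          {"rpo_min": 20,   "rto_min": 240},
    "data_warehouse": {"rpo_min": 1440, "rto_min": 1440},  # a day of loss is acceptable
}

for app, obj in objectives.items():
    print(f"{app}: lose at most {obj['rpo_min']} min of data, "
          f"restore access within {obj['rto_min']} min")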

Third, ask yourself how far apart your data centers are going to be and what type of fiber optic service you will be using. By type of fiber optic service, I mean: Is it going to be a dedicated fiber optic cable to which you can attach CWDM (coarse wavelength division multiplexing), WDM (wavelength division multiplexing) or DWDM (dense wavelength division multiplexing) equipment and self-provision? Will it be a lambda bandwidth service offering from a carrier or other provider that you can allocate to Fibre Channel or Ethernet? Will it be a SONET/SDH OC-x or IP service for shared bandwidth?
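Distance translates directly into latency, which is what ultimately limits synchronous mirroring. A common rule of thumb is roughly 5 microseconds of propagation delay per kilometer of fiber, each way. The sketch below applies only that rule of thumb; real links add switch, protocol and buffer-credit delays, so treat the output as a lower bound.

# Rough propagation delay in optical fiber: ~5 microseconds per km, one way.
US_PER_KM_ONE_WAY = 5.0

def round_trip_us(distance_km: float) -> float:
    """Estimated round-trip propagation delay in microseconds."""
    return 2 * distance_km * US_PER_KM_ONE_WAY

for km in (10, 100, 1000):
    print(f"{km:>5} km  ->  ~{round_trip_us(km):,.0f} us round trip")
# 100 km adds roughly 1 millisecond per round trip, and synchronous
# mirroring typically requires one or more round trips per write.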

I'm a big fan of implementing a multi-tier mirroring strategy. In the first tier, you use synchronous mirroring for real-time data protection between sites that can be up to 100 km apart (farther with specialized equipment, applications that can tolerate the added latency and vendors willing to support the configuration) with minimal performance impact. In the second tier, you asynchronously mirror data using delta copies to send data hundreds to thousands of kilometers to another site. Keep in mind that when dealing with storage over distance, bandwidth is important, but low latency is essential for data protection. You can also use compression and data optimization techniques, available from various vendors as standalone products or as part of SAN distance extension products like those from Brocade, Ciena, Cisco, CNT, EMC, McData, Netex and Nortel, among others, to help lower your bandwidth costs and mitigate the impact of latency.
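For the asynchronous tier, the practical question is whether the link can drain changed data fast enough to keep the remote copy within your RPO. The following back-of-the-envelope sketch uses assumed numbers (change rate, compression ratio and link size are illustrative, not vendor figures):

# Assumed workload: 2 GB of changed (delta) data per hour, 2:1 compression,
# shipped over a 45 Mbit/s (DS-3 class) link. All figures are illustrative.
delta_gb_per_hour = 2.0
compression_ratio = 2.0
link_mbit_per_sec = 45.0

bits_per_hour = delta_gb_per_hour * 8 * 1024**3 / compression_ratio
link_bits_per_hour = link_mbit_per_sec * 1_000_000 * 3600

utilization = bits_per_hour / link_bits_per_hour
print(f"Link utilization for replication: {utilization:.1%}")
# If utilization approaches or exceeds 100%, the mirror falls behind and
# your effective RPO grows, regardless of the stated target.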

The actual data replication and mirroring can be done using host software available from different vendors for both open systems and mainframe environments. Mirroring and replication can also be done from a storage subsystem, as well as from new and emerging appliance devices that sit in the data path between servers and storage devices. These devices are also referred to as virtualization appliances, intelligent switches and many other creative marketing names intended to differentiate them from their counterparts. Where to do the replication is a personal preference; the different vendors are more than willing to tell you the pros and cons of each approach. Both server-based and storage subsystem-based approaches are proven and practical, each with its own caveats. On paper, using a third-party, in-the-data-path copy mechanism is interesting, and there are plenty of vendor success stories; however, this is an area that is still emerging.

The bottom line is that the best approach is the one that meets your needs and requirements, including speed of data movement, efficient use of bandwidth, support for multiple platforms (server and storage) if applicable, and ease of management, among other criteria.

One last thing to keep in mind that is often overlooked is the importance of data consistency groups, which ensure that all related data is consistent when it is mirrored across storage devices and locations.
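As a simple illustration of why consistency groups matter, the sketch below pauses writes to every volume in a hypothetical group before marking a single, shared recovery point. The quiesce and resume functions are placeholders for whatever mechanism your replication product actually provides; the volume names are made up.

from datetime import datetime, timezone

# Hypothetical consistency group: these volumes belong to one application
# and must be mirrored as of the same point in time.
consistency_group = ["db_data_vol", "db_log_vol", "app_config_vol"]

def quiesce(volume):
    print(f"quiesce writes on {volume}")   # placeholder for the vendor-specific call

def resume(volume):
    print(f"resume writes on {volume}")    # placeholder for the vendor-specific call

def mark_recovery_point(volumes):
    """Freeze all volumes together so the mirrored images are mutually consistent."""
    for v in volumes:
        quiesce(v)
    point = datetime.now(timezone.utc)     # one timestamp for the whole group
    for v in volumes:
        resume(v)
    return point

print("consistent as of", mark_recovery_point(consistency_group))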

For more information:

Tip: Lack of DR could be hazardous to your company's health

Tip: Cut costs with tiered backups

Tip: Planning for everyday disasters


About the author: Greg Schulz is a senior analyst with the independent storage analysis firm The Evaluator Group Inc. Greg has 25 years of IT experience as a consultant, end user, storage and storage networking vendor, and industry analyst. He has worked with Unix, Windows, IBM mainframe, OpenVMS and other hardware/software environments. In addition to being an analyst, Greg is the author and illustrator of "Resilient Storage Networks" and has contributed material to Storage Magazine. He holds degrees in computer science and software engineering from the University of St. Thomas.

This was first published in August 2004
