This article can also be found in the Premium Editorial Download "Storage magazine: Using file virtualization to improve network-attached storage."

Toward a layered data protection model
When we plan data protection services, it's generally agreed that the critical metrics are recovery time objective (RTO) and recovery point objective (RPO). But given the various failure conditions described here, it's highly improbable that a given RTO or RPO target could be successfully met across all failure scenarios.

For example, consider the typical BCV/replication/backup protection combo and assume a four-hour RTO and RPO. There are scenarios, such as latent data corruption, where backup is the only protection available and the four-hour recovery metrics would be completely blown away. Is there any way to mitigate this?
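To make the gap concrete, here is a minimal sketch comparing each protection method's achievable recovery characteristics against the four-hour targets. All figures are invented for illustration; real numbers would come from your own recovery testing.

```python
# Hypothetical recovery characteristics per protection layer, in hours.
# These numbers are invented for illustration only.
protections = {
    "BCV snapshot": {"recovery_time": 0.5, "data_loss": 1.0},
    "Replication":  {"recovery_time": 1.0, "data_loss": 0.1},
    "Tape backup":  {"recovery_time": 18.0, "data_loss": 24.0},
}

RTO = 4.0  # recovery time objective, hours
RPO = 4.0  # recovery point objective, hours

for name, p in protections.items():
    meets = p["recovery_time"] <= RTO and p["data_loss"] <= RPO
    print(f"{name}: meets 4-hour RTO/RPO? {meets}")
```

In a latent-corruption scenario, the snapshot and replica are corrupted too, so only the tape-backup row applies, and both targets are missed by a wide margin.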

The answer is "maybe." For instance, it's not uncommon for a diligent, risk-averse database administrator to perform database dumps to disk and keep days or even weeks of additional copies stowed away. If these were available, the recovery time could come considerably closer to the RTO target than restoring from backup would. Even so, the RPO would likely be far exceeded.

When we plan for DR, we must take into account the types of risks and their probabilities. A similar approach should be considered when planning an overall data protection strategy. I'm suggesting that a layered services approach to data protection is needed in today's environments. Such an approach should identify the following for each layer:

  • The risk to be mitigated
  • The targeted level of protection to be provided based on probability and business impact
  • The protection method required to address this need
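The three questions above can be captured in a simple planning structure. The sketch below ranks layers by expected annual loss (probability times business impact); every risk, probability, and dollar figure is hypothetical and shown only to illustrate the approach.

```python
from dataclasses import dataclass

@dataclass
class ProtectionLayer:
    risk: str                  # the risk to be mitigated
    annual_probability: float  # estimated chance of occurring in a year
    business_impact: float     # estimated cost per occurrence, in dollars
    method: str                # the protection method addressing this risk

    @property
    def expected_loss(self) -> float:
        # Expected annual loss = probability of the event x its cost.
        return self.annual_probability * self.business_impact

# Hypothetical layers for illustration only.
layers = [
    ProtectionLayer("Hardware failure", 0.20, 50_000, "Local BCV snapshots"),
    ProtectionLayer("Site disaster", 0.01, 2_000_000, "Remote replication"),
    ProtectionLayer("Latent data corruption", 0.05, 500_000, "Nightly backup"),
]

# Rank layers so the highest expected exposure gets attention first.
for layer in sorted(layers, key=lambda l: l.expected_loss, reverse=True):
    print(f"{layer.risk}: expected annual loss ${layer.expected_loss:,.0f} "
          f"-> {layer.method}")
```

Even with rough estimates, ranking layers this way makes it explicit which risks justify their protection cost and which, as discussed below, may be too improbable or costly to protect against.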
You may decide that certain risks are too improbable or too costly to protect against. At a pragmatic level, this could mean deciding that any database copies more than a week old are effectively of no value, because a latent corruption would be uncovered before then. That decision would allow a retention policy change and potentially free resources. On the other hand, you may find that critical business functions are exposed to unanticipated risks that need to be addressed.

Change is always difficult, but history is littered with examples of those who were unable to adapt. This approach may raise concerns about exposing weaknesses within IT's capabilities. If you fear such transparency, then there may be risks associated with this approach. For environments that believe they can recover instantaneously from any kind of data loss, the truth may be unpalatable. However, if an organization is sincere about addressing service-level objectives, this process can uncover holes in existing data protection strategies and ensure that when a major new technology direction is undertaken, it will be the right one.

This was first published in March 2007
