Replication for high availability

What you will learn from this tip: How to design a replication system that meets your organization's performance needs -- and doesn't break the bank.


Replication for high availability can be simple and cheap or it can be elaborate and expensive. It all depends on what you want to replicate -- and what you consider to be acceptable performance.

When it comes to replication, there are solutions available for every taste and budget. They range from near real-time mirroring to a local server (or a server at a remote location), to host-based replication software that automatically switches to replicated data when it detects a problem, to simple RAID-1 mirroring. Needless to say, the prices run the gamut as well.

But how high is high availability? This is sort of like asking how high is up. The first thing you need to do is to establish measurable criteria for your replication system. Then you need to consider the possible solutions in terms of those criteria.

For high-availability replication, those criteria usually boil down to fault tolerance and speed. Ask yourself these questions:

  • How fault tolerant does your system need to be? Is it enough if your system can survive an array or server failure, or do you need a system that will keep your data available if the next hurricane levels your data center?
  • How quickly do you need the data back? Replication implies a copy of your data, but it can still take time to get to it. For example, it takes time to remount storage and bring a DBMS back up even if all the data is completely replicated.
  • How close to the failure point does your recovery point need to be? A replication system that can roll back to the state it was in at midnight is a lot cheaper than one that rolls back to an hour ago, and one that rolls back an hour is much cheaper than a system with a recovery point measured in seconds.

Obviously, these kinds of decisions involve more than the storage administrator. It's important that everyone who sets the rollback window and the degree of fault tolerance understands the cost effects of those parameters.
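
To make those criteria concrete, here is a minimal sketch -- in Python, with invented requirements, solution names and prices -- of how you might record fault tolerance, recovery time and recovery point as measurable figures and test candidate solutions against them.

```python
# Illustrative sketch: capture replication requirements as measurable
# criteria and filter candidate solutions against them. Every name,
# number and price below is hypothetical.
from dataclasses import dataclass

@dataclass
class Requirement:
    survives_site_loss: bool    # must data survive losing the whole data center?
    max_recovery_minutes: int   # how quickly must data be back online?
    max_data_loss_minutes: int  # how close to the failure point must you recover?

@dataclass
class Solution:
    name: str
    survives_site_loss: bool
    recovery_minutes: int
    data_loss_minutes: int
    annual_cost_usd: int

def acceptable(req: Requirement, sol: Solution) -> bool:
    """A solution qualifies only if it meets every criterion."""
    return ((sol.survives_site_loss or not req.survives_site_loss)
            and sol.recovery_minutes <= req.max_recovery_minutes
            and sol.data_loss_minutes <= req.max_data_loss_minutes)

req = Requirement(survives_site_loss=True, max_recovery_minutes=60,
                  max_data_loss_minutes=15)

candidates = [
    Solution("local RAID-1 mirror",      False,   5,   0,  5_000),
    Solution("nightly remote copy",      True,  240, 720, 12_000),
    Solution("async remote replication", True,   60,  10, 60_000),
]

for sol in candidates:
    verdict = "meets the criteria" if acceptable(req, sol) else "falls short"
    print(f"{sol.name}: {verdict} (${sol.annual_cost_usd:,}/year)")
```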

To take one example, the fastest replication solutions usually require identical hardware. This is not only expensive, but keeping the hardware identical poses a maintenance problem.

Remote replication imposes speed (and cost) penalties because of the limited link bandwidth, but it can keep you going when the next big one wipes out your whole data center.
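
As a rough illustration of that penalty, the back-of-the-envelope sketch below estimates how long it takes to push a burst of changed data to a remote copy over a WAN link versus a local link. The data volume, link speeds and efficiency factor are assumptions chosen for illustration, not measurements.

```python
# Back-of-the-envelope sketch of the speed penalty a WAN link imposes on
# remote replication. All figures below are invented for illustration.
def hours_to_replicate(changed_gb: float, link_mbps: float,
                       efficiency: float = 0.7) -> float:
    """Time to ship changed_gb of data over a link rated at link_mbps,
    assuming only `efficiency` of the raw bandwidth is usable."""
    bits = changed_gb * 8 * 1024**3
    usable_bits_per_sec = link_mbps * 1_000_000 * efficiency
    return bits / usable_bits_per_sec / 3600

# 200 GB of changed data over a 45 Mbit/s WAN link vs. a gigabit local link
for label, mbps in [("45 Mbit/s WAN link", 45), ("1 Gbit/s local link", 1000)]:
    print(f"{label}: {hours_to_replicate(200, mbps):.1f} hours for 200 GB of changes")
```

Even this crude arithmetic shows why a tight recovery point over distance quickly becomes a bandwidth question, and why the bandwidth question quickly becomes a budget question.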

Finally, you need to decide what to replicate. You can replicate everything, of course, but most of the time you don't need to. Even if you do need to replicate everything, you almost certainly don't need everything back in the same time frame. That offers opportunities for cost savings.
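
One way to capture those savings is to sort data into recovery tiers and give each tier only as much replication as it needs. The sketch below is purely illustrative; the dataset names, outage tolerances and tier definitions are assumptions, not recommendations.

```python
# Illustrative sketch: group data by how quickly it must come back and
# replicate each group accordingly. Names and thresholds are hypothetical.
TIERS = {
    "continuous": "synchronous or near-real-time replication",
    "hourly":     "scheduled host-based replication",
    "daily":      "overnight copy to a remote site",
}

def assign_tier(max_outage_hours: float) -> str:
    """Pick the cheapest tier whose recovery window is still acceptable."""
    if max_outage_hours < 1:
        return "continuous"
    if max_outage_hours < 8:
        return "hourly"
    return "daily"

datasets = [
    ("order-entry database", 0.25),      # must be back in 15 minutes
    ("departmental file shares", 4.0),   # half a business day is tolerable
    ("archived reports", 48.0),          # two days is fine
]

for name, max_outage in datasets:
    tier = assign_tier(max_outage)
    print(f"{name:26s} -> {tier:11s} ({TIERS[tier]})")
```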

For more information:

Tip: Remote replication gets out of the array

Tip: The evolving role of data replication

Tip: Distance benchmarks for data replication


About the author: Rick Cook has been writing about mass storage since the days when the term meant an 80 K floppy disk. The computers he learned on used ferrite cores and magnetic drums. For the last 20 years, he has been a freelance writer specializing in storage and other computer issues.

This was first published in March 2005
