Finally time to declare full backups dead


This article can also be found in the Premium Editorial Download "Storage magazine: Solid-state adds VROOM to virtual desktops."


We argued in 2006 that once a volume's base image had been copied, each change to the data should be captured only once, at the moment it was made. Because every change was time-stamped, the recovery system could rebuild the contents of the volume as of any point in time (APIT). With this methodology, we would never need a backup window or run fulls and incrementals, and the recovery point objective (RPO) could be whatever we wanted it to be, even zero. The recovery time objective (RTO) would be very fast, too, since the volume image could be generated from any point in time. Companies like Mendocino and Revivio promoted this method, but failed. Still, we felt the fundamentals were right, and perhaps the concept was simply ahead of the available technology.
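The mechanics of that idea can be sketched in a few lines. This is an illustrative toy, not any vendor's implementation: the `JournalEntry` structure and `restore_as_of` function are assumptions, and a real CDP product journals block writes at the storage layer rather than Python strings.

```python
from dataclasses import dataclass

@dataclass
class JournalEntry:
    timestamp: float   # when the write occurred
    offset: int        # block index within the volume
    data: str          # new block contents

def restore_as_of(base_image, journal, point_in_time):
    """Rebuild the volume as it existed at `point_in_time` by replaying
    every journaled write made at or before that instant onto the base image."""
    volume = list(base_image)
    for entry in sorted(journal, key=lambda e: e.timestamp):
        if entry.timestamp <= point_in_time:
            volume[entry.offset] = entry.data
    return volume

# Base image captured at t=0; two later writes to block 1.
base = ["A0", "B0", "C0"]
journal = [JournalEntry(10.0, 1, "B1"), JournalEntry(20.0, 1, "B2")]

print(restore_as_of(base, journal, 15.0))  # state between the two writes
```

Because every change is kept with its timestamp, any recovery point is reachable without ever rerunning a full or incremental backup; the RPO is limited only by how finely the writes are journaled.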

In parallel, other developments were poised to change data protection in a big way. Vendors like Data Domain (now EMC), ExaGrid, FalconStor, Quantum and Sepaton argued that rather than storing multiple copies of data on slow, unreliable tape, we should toss out all that duplicate data and store it only once on inexpensive SATA disks. Files were split into chunks, and only one copy of each chunk was kept on disk. When data was replicated, these new systems sent only the unique chunks across the wide-area network (WAN), thereby maintaining a capacity-efficient environment at the remote site as well. Sound thinking, we said. And IT clearly responded well, as demonstrated by the success of many of these companies and a drastic drop in tape sales over the past four years.
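The chunk-and-dedupe idea is simple enough to sketch. This is a minimal model under stated assumptions, with fixed-size chunks and a made-up `ChunkStore` class; shipping products use variable-size, content-defined chunking and far larger chunk sizes.

```python
import hashlib

class ChunkStore:
    """Toy deduplicating store: each unique chunk is kept exactly once,
    keyed by its content hash."""
    CHUNK_SIZE = 4  # tiny for illustration; real systems use KB-sized chunks

    def __init__(self):
        self.chunks = {}  # content hash -> chunk bytes

    def ingest(self, data: bytes):
        """Split `data` into chunks; return the file's recipe (list of
        hashes) plus the hashes of chunks not previously stored."""
        recipe, new_hashes = [], []
        for i in range(0, len(data), self.CHUNK_SIZE):
            chunk = data[i:i + self.CHUNK_SIZE]
            h = hashlib.sha256(chunk).hexdigest()
            if h not in self.chunks:
                self.chunks[h] = chunk
                new_hashes.append(h)  # only these need to cross the WAN
            recipe.append(h)
        return recipe, new_hashes

    def restore(self, recipe):
        return b"".join(self.chunks[h] for h in recipe)

store = ChunkStore()
recipe1, new1 = store.ingest(b"AAAABBBBCCCC")
recipe2, new2 = store.ingest(b"AAAABBBBDDDD")  # mostly duplicate data
print(len(new1), len(new2))  # the second backup ships only one new chunk
```

The second backup shares two of its three chunks with the first, so replication sends a single chunk rather than the whole file; that is the capacity and bandwidth win the dedupe vendors were selling.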

But the fundamental process of data protection still hadn't changed. We still ran fulls and incrementals, and we still maintained a remote location. And, typical of conservative storage professionals, we often kept tape behind disk, so our Iron Mountain expenses stayed with us. But we felt better because backups were faster and more reliable, as were recoveries.

In the past few years, we've seen a resurgence of the continuous data technologies (CDT) idea, and this time vendors have developed products that work. Finally, we think CDT will get a fair shake and a shot at commercial success. So why would these new products succeed now when they didn't in 2006? Two things are different today. On the conceptual front, we all recognized that just because we could create the image of a volume as of any point in time didn't mean we should. APIT images may take you to an RPO of zero, but an image that's inconsistent with the state of the application isn't very useful: your RPO for the data may be zero, while your RPO for the application could be hours or days. The more meaningful point in time for recovery is the last consistent state. To make this concept work, one needed the ability to generate very rapid snapshots. And for mission-critical applications that often ran on multiple systems and used multiple databases, those systems had to be quiesced across the board for a consistent snapshot to be taken. That level of sophistication wasn't available in 2006, but it's now commonplace.
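The quiesce-across-the-board step can be sketched as a simple two-phase pattern: pause writes on every component before snapshotting any of them, then resume. The `AppComponent` class and `consistent_snapshot` function here are hypothetical stand-ins, not any product's API; real systems do this through mechanisms such as database freeze/thaw hooks.

```python
class AppComponent:
    """Stand-in for one database or service participating in a backup."""
    def __init__(self, name):
        self.name = name
        self.quiesced = False
        self.state = f"{name}-data"

    def quiesce(self):
        self.quiesced = True   # flush buffers, hold new writes

    def snapshot(self):
        # An application-consistent image requires in-flight writes to be paused.
        assert self.quiesced, "snapshot taken while writes were in flight"
        return self.state

    def resume(self):
        self.quiesced = False

def consistent_snapshot(components):
    """Quiesce every component before snapshotting any of them, so all
    the images reflect the same application-consistent instant."""
    for c in components:           # phase 1: pause writes everywhere
        c.quiesce()
    try:
        return {c.name: c.snapshot() for c in components}
    finally:
        for c in components:       # phase 2: let the application run again
            c.resume()

parts = [AppComponent("orders-db"), AppComponent("billing-db")]
images = consistent_snapshot(parts)
print(sorted(images))  # ['billing-db', 'orders-db']
```

The point of the pattern is the ordering: snapshotting each system independently, without a coordinated quiesce, can capture the orders database mid-transaction relative to billing, which is exactly the application-inconsistent image the article warns about.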

This was first published in July 2012
