Feature

Distance: Meeting the new mandate for disaster recovery

Case Study: Mixing it up with remote backup
Initially, Charlie Roberts had a modest goal for improving his organization's disaster recovery capability: He wanted to replace a patchwork quilt of third-party backup applications with a single solution in the same location.

"We were just looking for a more elegant backup solution than the one we had," says Roberts, VP of IT at the Travis Credit Union in Vacaville, CA. "The applications were all different and they required a lot of human intervention."

Since then, Roberts has streamlined things at his 51-year-old institution, which serves 113,000 members in nine Northern California counties. His team is gradually weeding out those old labor-intensive programs, replacing them with an online solution that automatically seeks, copies and stores new data daily.

The biggest change: The new system no longer nestles next to the core systems it's designed to protect.

For now, Travis--which, with $1.2 billion in assets, ranks among the nation's largest credit unions--relies on a remote backup site just across town. But that's just the first step. Later this year, the institution--located near Travis Air Force Base--will begin backing up all of its data to a permanent disaster-recovery center in Merced, CA, more than 150 miles away.

Based in earthquake country north of San Francisco, Roberts had been investigating ways to better protect his organization's critical financial data even before Sept. 11. After the terrorist attack, he stepped up his quest. "It showed everyone how potentially vulnerable we are," Roberts says. "If I had a catastrophic failure, I wasn't necessarily convinced that I could restore everything." In addition, the old setup made it tough to find and retrieve individual records from the hundreds of gigabytes of stored information.

Today, Roberts has what he describes as "a mixed storage environment." He uses Adaptec 160 SCSI adapters to connect to six RAID arrays from Nexsan Technologies Inc. of Woodland Hills, CA. The setup includes two Nexsan ATAboy and two Nexsan ATAbaby devices onsite and two more ATAboys offsite. He still backs up transactional data on tape cartridges, and less critical information--such as employee files--gets copied to local servers, magnetic media or CD-ROMs at both locations. But he primarily relies on the EVault Inc. InfoStage online backup and recovery suite and an existing T1 network for taking snapshots of his networks several times daily.

"The copies we're making are good, clean and robust, and we don't need to physically send anything" to the remote facility, Roberts says. "We just do an electronic copy and send it online." He can also restore specific pieces of information quickly right from the network--a capability he used even before the system went live.

"While we were testing, our executive VP couldn't find an e-mail she needed," Roberts recalls. "We had just done the seed load--the initial snapshot of the system--and it turned out the message she needed was in that load." He retrieved the message instantly, solidifying at least one executive's support for the effort.

The switch hasn't been painless. "Initially, we weren't able to restore as quickly and painlessly as we'd seen in the demos," Roberts says. But Walnut Creek, CA-based EVault has consistently responded promptly, Roberts says, once sending two representatives to help speed things up.

The new system has freed up one team member previously dedicated to manual backups nearly full time. Beyond that, Roberts declines to discuss the system's cost or its potential ROI. "I put it in the category of the cost of doing business," he says. "You don't know how much you need it until you need it. The first time I have to use it, it will pay for itself many, many times over. When you don't have to tell customers 'I don't have your transactional data, I don't know how much you have in your account'--how do you put a price tag on that?" --Anne Stuart

Think your backup system is adequate? Think again. Even if it works for now, it might need some serious revamping in the not-too-distant future. Figure on longer distances--much longer.

As the Sept. 11 terrorist attacks on the United States proved, it's no longer good enough to stash backup tapes on a different floor from the data center, or even down the block. And in some cases, the government is mandating tougher disaster recovery procedures for certain industries. For example, the Health Insurance Portability and Accountability Act (HIPAA), as part of establishing standard data formats and content for all health insurance providers, requires safeguards including technology-based contingency planning and disaster recovery to ensure the safety of patients' records. These safeguards include "periodic tape backups of data" and the ability to continue operations "in case of an emergency," according to HIPAA rules.

All told, "companies are thinking more regionally now," says Dianne McAdam, a senior analyst at Data Mobility Group in Nashua, NH. "We have more reasons to replicate data over longer distances," she says.

There are already numerous ways to send data afar. They include synchronous replication products that have been available for quite some time, primarily from the storage hardware vendors--IBM's Peer-to-Peer Remote Copy (PPRC) and EMC's Symmetrix Remote Data Facility (SRDF) fall into this camp. But these products have distance limitations or can replicate only between like hardware boxes, and they typically don't work in a multivendor storage environment.

More important, the traditional synchronous mode of replication won't work well for really long distances--anything over 12 miles or so. In synchronous mode, system A sends data to system B, and then waits for a response confirming that system B received the information before sending any more. This usually means a delay of 1 ms per 25 circuit miles. (Keep in mind that a circuit mile isn't the same thing as a regular mile. New York and Boston might be only 200 miles apart on a map, but a call between them can pass through 400 miles of circuits or more. The 1 ms delay applies to circuit miles.)
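
To get a rough feel for that penalty, the rule of thumb translates into a quick back-of-the-envelope calculation. The short Python sketch below simply applies the 1 ms-per-25-circuit-miles estimate quoted above; the example distances are illustrative, and actual delays depend on the network path and equipment.

    # Rough estimate of the synchronous-replication wait, using the
    # 1 ms per 25 circuit miles rule of thumb cited in the article.
    # Real-world delays depend on the actual network path and gear.

    def sync_wait_ms(circuit_miles: float) -> float:
        """Approximate added wait before system A can send its next write."""
        return circuit_miles / 25.0  # 1 ms for every 25 circuit miles

    for miles in (12, 100, 400):  # illustrative circuit-mile distances
        print(f"{miles:4d} circuit miles -> ~{sync_wait_ms(miles):.1f} ms added per write")

At 400 circuit miles, that's roughly 16 ms tacked onto every single write, which is why synchronous replication runs out of steam beyond metropolitan distances.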

That leaves asynchronous replication--where the first system doesn't wait for a response from the second. This avoids the performance penalty of waiting, although the data still takes time to cross the network. Still, it leads to problems of its own. First, if there's a failure on either side, it's not clear how far behind the backup volume is from the primary copy. It's certain that they're not exact duplicates of each other, and it will take some human time and effort to sort out what's missing and how to recapture it.

Second, some backup systems batch I/Os together instead of shipping them in the exact order they came in. So, the backup copy might not be usable because the data is out of order.

One new product that attempts to solve these problems is SANSafe from Topio Inc., Santa Clara, CA. It essentially timestamps each piece of data before it's sent, ships the I/O to the remote site over an IP network and sorts the transactions back into order on arrival. The remote volumes are updated on a periodic basis, so there's always a copy that consistently matches the local volume. Topio promises that, despite its name, SANSafe works with direct-attached storage (DAS) as well as data residing in SANs. Topio also claims that SANSafe can work with multiple hosts sending to multiple remote locations.
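
The general technique is easier to see in miniature. The Python sketch below is purely illustrative--it is not Topio's code and assumes nothing about SANSafe's internals--but it shows the basic pattern described above: tag each write with a timestamp and sequence number at the source, ship it over IP without waiting, and sort the writes back into order at the remote site before applying them.

    # Illustrative sketch only -- not Topio's code or API. It mimics the
    # general approach described in the article: tag writes at the source,
    # send them without waiting, and reorder them at the remote site so the
    # replica is always rolled forward to a consistent point in time.
    import heapq
    from dataclasses import dataclass, field
    from time import time

    @dataclass(order=True)
    class TaggedWrite:
        timestamp: float                    # when the write happened locally
        sequence: int                       # tie-breaker for identical timestamps
        offset: int = field(compare=False)  # where on the volume the write lands
        data: bytes = field(compare=False)

    class SourceSide:
        """Tags outgoing writes; never waits for the remote site to acknowledge."""
        def __init__(self):
            self._seq = 0

        def tag(self, offset: int, data: bytes) -> TaggedWrite:
            self._seq += 1
            return TaggedWrite(time(), self._seq, offset, data)

    class RemoteSide:
        """Buffers writes as they arrive, possibly out of order, and applies
        them in timestamp order at each periodic update point."""
        def __init__(self):
            self._pending = []             # min-heap keyed on (timestamp, sequence)
            self.volume = bytearray(1024)  # stand-in for the remote volume

        def receive(self, w: TaggedWrite) -> None:
            heapq.heappush(self._pending, w)

        def apply_consistent_update(self) -> None:
            # Drain the buffer in order, so the volume reflects a point-in-time
            # image of the source rather than a jumble of batched writes.
            while self._pending:
                w = heapq.heappop(self._pending)
                self.volume[w.offset:w.offset + len(w.data)] = w.data

    # Writes that arrive out of order are still applied in order.
    src, remote = SourceSide(), RemoteSide()
    first = src.tag(0, b"first")
    second = src.tag(100, b"second")
    remote.receive(second)            # the later write arrives first
    remote.receive(first)
    remote.apply_consistent_update()  # applies "first", then "second"

The detail that matters is the ordering metadata: it's what lets the remote copy stay write-order consistent even when the network delivers batches out of sequence.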

Backup over IP
Backup over IP seems to be one popular way of approaching disaster recovery these days. Products are popping up all over this space, ranging from iSCSI support to sending data to tape devices over IP. Mike Karp, a senior analyst with Enterprise Management Associates in Boulder, CO, is a big fan of iSCSI in particular (see "Getting real about iSCSI").

"It's known to work, it's relatively easy to implement and it's a proven technology because it combines things we've been doing for 15 years," he says. He maintains there are no more inherent security risks by moving data over long distances than there was moving data between two systems located a half-mile apart.

"There is nothing in iSCSI that makes it more vulnerable as a system than any other technology," Karp says.

One notion that's seeing quite a bit of action these days is providing central access to distributed data--in other words, recentralizing all corporate data into a data center and then backing it up to tape or by some other means. Distributed users are given access to the centralized information via IP or another type of WAN.

Here's how this works. In each remote office, companies place a low- or no-maintenance caching appliance. A full-fledged server goes into the data center. The appliance and its bundled software scan for the portions of any file that have changed since it was last written or requested, so only those pieces need to be sent to the central server. Once everything is centralized in the data center, it can be backed up to tape or to a secondary data center or storage vaulting provider.
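
As a rough illustration of how that change scanning can work, the Python sketch below compares fixed-size blocks of a file against hashes saved from the previous pass and returns just the blocks that differ. The block size, the hashing scheme and the function names are assumptions made for the example; commercial appliances use their own, generally more sophisticated, differencing methods.

    # A minimal sketch of block-level change detection, assuming a simple
    # fixed-size-block scheme. Only the blocks returned by changed_blocks()
    # would need to be forwarded to the central data center.
    import hashlib

    BLOCK_SIZE = 64 * 1024  # 64KB blocks -- an arbitrary choice for illustration

    def block_signature(path: str) -> list:
        """Hash each fixed-size block of a file; saved for the next pass."""
        sigs = []
        with open(path, "rb") as f:
            while chunk := f.read(BLOCK_SIZE):
                sigs.append(hashlib.sha256(chunk).hexdigest())
        return sigs

    def changed_blocks(path: str, old_sigs: list) -> dict:
        """Return only the blocks whose contents differ from the saved hashes."""
        changed = {}
        with open(path, "rb") as f:
            index = 0
            while chunk := f.read(BLOCK_SIZE):
                if index >= len(old_sigs) or hashlib.sha256(chunk).hexdigest() != old_sigs[index]:
                    changed[index] = chunk
                index += 1
        return changed

Because only the changed blocks ever leave the branch office, the appliance can stay low maintenance while the data center ends up with a complete copy that can be backed up centrally.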

This was first published in May 2003
