Users divided on approach to disaster recovery

Two companies' DR plans both leave out replication tools from EMC and Veritas, but that's where the similarities end: one chose a hardware-based approach, the other software.

Two companies, one a law firm and the other a publisher of scientific journals, share the same concerns about ensuring disaster recovery, but have chosen different approaches to replicating their data remotely.

Thacher, Proffitt and Wood LLP, an international financial law firm headquartered in New York City, has been using Double-Take from NSI Software Inc. for the past two years to replicate crucial applications to a data center in New Jersey.

On the other hand, The American Institute of Physics (AIP), a not-for-profit publisher of scientific journals in Melville, New York, saw a strictly software approach as too labor-intensive, and recently opted for appliance-based data replication with the KBX5000 from Kashya Inc., San Jose, Calif.

Both companies passed on data replication products from such major vendors as Veritas Software Corp. and EMC Corp. before choosing lesser-known suppliers.


Like many businesses, Thacher Proffitt had a limited disaster recovery plan before Sept. 11, 2001. After the terrorist attacks, the firm was forced to relocate from its World Trade Center office, rebuild its IT team and set up a disaster recovery site in New Jersey.

The firm, an EMC shop, looked at EMC's replication software, MirrorView, to move data from source servers in New York to target servers in New Jersey. But MirrorView could only do synchronous replication to identical hardware on both sides, and the firm's T3 line did not have enough bandwidth. According to EMC's Web site, MirrorView will support asynchronous replication by year-end.

Thacher Proffitt needed more flexibility and found it with Double-Take, data replication software from NSI Software Inc., Hoboken, N.J. The software is installed on servers and moves data independently of whatever other hardware and software is being used. The law firm currently replicates 100 GB of daily changes to its e-mail, human resources, payroll and billing systems using Double-Take.

Thacher Proffitt also looked at Veritas' Volume Replicator, which was similar to Double-Take except that "you had to buy other Veritas software to use Volume Replicator," said Dierk Eckart, director of information technology at Thacher Proffitt.

It's the standalone quality of Double-Take that most impressed Eckart. "With other replication products you have to buy identical hardware or you have to buy more software for support, but Double-Take has no other requirements."

Software versus hardware

Flexible data replication was also a major concern for science publisher The American Institute of Physics (AIP). But unlike Thacher, Proffitt and Wood, AIP didn't think a software-based solution was the best way to go.

Up until this past summer, James Wonder, manager of Internet technology and system architecture at AIP, was using the Unix file-distribution command rdist in conjunction with homegrown software to copy data to another data center 20 miles away in Garden City on Long Island, N.Y.

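The article doesn't detail AIP's rdist setup, but a typical rdist-based replication job of that era was driven by a Distfile and run on a schedule. The following is a minimal sketch under that assumption; the hostname, path and e-mail address are illustrative, not from the article:

```
# Hypothetical Distfile sketch -- rdist pushes local file trees to a
# remote host, copying only files that have changed.
HOSTS = ( dr-site.example.com )
FILES = ( /export/journals )

${FILES} -> ${HOSTS}
        install -oremove,younger ;   # delete extraneous remote files, skip newer ones
        notify admin@example.com ;   # mail a report of what was updated
```

A cron entry such as `rdist -f /etc/Distfile` run nightly would then replicate the trees, which helps explain the 48-hour recovery window: data at the remote site could lag a full day or more behind production.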

But Wonder was finding that the recovery time objective (RTO) for a disaster was 48 hours. "That was not good enough for our customers. Our online content can never be down. We take uptime very seriously and we needed to do better than 48 hours," he said.

Because the company uses Veritas Volume Manager to monitor the disk performance of its StorageTek D178 disk array, it looked at Veritas Volume Replicator for data replication.

Wonder said that although Volume Replicator is a good product, it's a strictly software-based solution, which tends to be inflexible and have a lot of overhead, two things AIP was trying to avoid. "If a product takes one person to run it, then I didn't want it. We didn't want any of the babying that happens with software," he said.

He added, "Software eats up the CPU of your system and fights with other software. It can be labor-intensive and that's not something you want with a small IT staff." Instead he picked an appliance-based data replication product with limited software from Kashya.

The KBX5000 is an appliance that sits on a storage area network (SAN). A small piece of software is installed on the operating system and tells the KBX5000 at one data center what changes have been made to data and then the changed data is moved over an IP network to a Kashya box at the remote location.

AIP installed the KBX5000 in July and the company now replicates 3 TB of data between data centers -- its RTO has improved to under three hours.

One glitch that the IT team at Thacher Proffitt found with Double-Take was weak reporting tools. "We spend too much time analyzing what changes have been made and how much disk has been consumed," said Thomas Young, senior systems architect at the law firm. "Some more automation in the next version would be nice."
