Feature

Disaster Recovery Extra: Distance your data from disaster


This article can also be found in the Premium Editorial Download "Storage magazine: How to distance your data from disaster."


Even if a host-resident agent is required, network-based replication offloads the bulk of the processing from the host while giving users the freedom to replicate between dissimilar storage systems.

Take, for example, Santa Clara, CA-based startup Topio Inc. and its Topio Data Protection Suite (TDPS). In a nutshell, the idea is to take the brunt of the processing off the primary system and to perform the "heavy lifting" on a dedicated server at the remote site, explains Chris Hyrne, Topio's vice president of marketing. TDPS requires an agent on the host at the primary site, but, Hyrne insists, it's an extremely lightweight agent. Specifically, the agent's job consists of intercepting writes, time-stamping them and sending them over the wire to the recovery server, where the writes are reassembled in order.
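The intercept-timestamp-reassemble pipeline Hyrne describes can be sketched in a few lines. This is a minimal illustration, not Topio's actual code: the class and field names are invented, and real products would also handle persistence, batching, and clock skew.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class WriteRecord:
    """A write intercepted by the host agent, stamped at interception time."""
    timestamp: float                       # host-side time stamp
    seq: int                               # tie-breaker for equal timestamps
    offset: int = field(compare=False)     # byte offset on the volume
    data: bytes = field(compare=False)

class RecoveryServer:
    """Remote-site server doing the 'heavy lifting': buffers incoming
    writes and reassembles them in timestamp order before applying."""
    def __init__(self):
        self._pending = []                 # min-heap keyed on (timestamp, seq)

    def receive(self, record: WriteRecord):
        heapq.heappush(self._pending, record)

    def apply_up_to(self, cutoff: float, volume: bytearray):
        """Apply all buffered writes stamped at or before `cutoff`."""
        while self._pending and self._pending[0].timestamp <= cutoff:
            rec = heapq.heappop(self._pending)
            volume[rec.offset:rec.offset + len(rec.data)] = rec.data
```

Note that the host agent's only job in this sketch is constructing `WriteRecord` objects and shipping them; all ordering work happens on the recovery side, which is the point of the architecture.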

That architecture works well when replicating applications that consist of multiple servers, for example, an Exchange server plus its associated domain controller, says Hyrne. To recover the application, "you need to recover both of those servers to the exact same consistency point," he says. Hanson Brick & Tile's Moran is currently using TDPS to replicate a single Oracle database, but is considering extending it to replicate Exchange and some software development servers.
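The "exact same consistency point" requirement has a simple logic behind it: each server's write stream can only be replayed up to the latest moment at which it has arrived with no gaps, so a multi-server application can only be recovered to the minimum of those moments. A sketch, with illustrative server names:

```python
def common_consistency_point(received_up_to: dict) -> float:
    """Given, per server, the highest timestamp up to which its write
    stream has arrived gap-free, the latest point to which ALL servers
    can be recovered together is the minimum of those timestamps."""
    return min(received_up_to.values())

# e.g., an Exchange server plus its domain controller (names illustrative):
streams = {"exchange01": 1700.0, "dc01": 1693.5}
point = common_consistency_point(streams)   # both recover to 1693.5
```

Recovering either server past that common point would leave the pair mutually inconsistent, which is why the two must be rolled forward in lockstep.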

With the additional processing power that comes from inserting a device in the network, replication startups have also experimented with including features that might otherwise consume too much CPU in an array- or host-based configuration. For example, Kashya uses storage and bandwidth-reduction technologies that can cut down the amount of data that travels over the wire, which keeps telecommunications costs down.
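One common form of bandwidth reduction is to ship only changed blocks, compressed. The sketch below is a generic illustration of that idea, not Kashya's technology; block size and hashing choices are assumptions.

```python
import hashlib
import zlib

BLOCK = 4096  # assumed block size for comparison

def changed_blocks(previous: bytes, current: bytes):
    """Yield (block_index, compressed_block) only for blocks whose
    content changed, so unchanged data never crosses the wire."""
    for i in range(0, len(current), BLOCK):
        new = current[i:i + BLOCK]
        old = previous[i:i + BLOCK]
        if hashlib.sha256(new).digest() != hashlib.sha256(old).digest():
            yield i // BLOCK, zlib.compress(new)
```

On a mostly-unchanged volume, this kind of delta-plus-compression pass is what keeps the WAN link (and the telecom bill) small relative to the raw write volume.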

InMage's Tirumala says another benefit of having a network-based appliance doing the replication is to insulate replication from WAN outages. "If you're taking a traditional host-based approach, the deltas are buffered by the host; if there's a traffic issue, that can cause problems," he says. "With us, our appliance is doing the buffering outside of the production environment."
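The buffering behavior Tirumala describes amounts to a queue that lives on the appliance rather than the production host: deltas keep accumulating during a WAN outage and drain once the link recovers. A minimal sketch, with invented names and assuming the WAN sender signals failure by raising `ConnectionError`:

```python
from collections import deque

class ReplicationBuffer:
    """Appliance-side buffer: keeps accepting deltas from production
    hosts even while the WAN link is down, then drains on recovery."""
    def __init__(self, send):
        self._send = send          # callable that ships one delta over the WAN
        self._backlog = deque()

    def submit(self, delta, wan_up: bool):
        self._backlog.append(delta)
        if wan_up:
            self.drain()

    def drain(self):
        while self._backlog:
            delta = self._backlog.popleft()
            try:
                self._send(delta)
            except ConnectionError:
                # WAN dropped mid-drain: requeue the delta and wait
                self._backlog.appendleft(delta)
                break
```

Because the backlog sits on the appliance, a WAN hiccup consumes appliance memory rather than production-host memory, which is the insulation the quote refers to.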

Running replication from the network has a lot of advantages, but market forces may slow down the movement to bring more intelligence to the network. "I firmly believe that, ultimately, replication will be done primarily from the network—that is where it makes the most sense," says Taneja of the Taneja Group. But entrenched array and host vendors' existing replication businesses are "too large" and the margins "too juicy" for them to actively push alternative approaches. The transition, says Taneja, "is going to take a lot longer than logic would dictate."

This was first published in May 2006
