Remote data replication gets affordable and more robust

In 2005, Hurricane Katrina underscored the vulnerability of local data and the importance of sending data offsite. Remote data replication duplicates data between remote servers or storage platforms across a wide area network (WAN). Remote replication has been around for years, but advances in technology have finally made it mainstream. Let's look at the elements of remote replication and the pitfalls to consider, and examine the technology's impact on the storage organization.


The impact of remote data replication

Not only has remote replication become more efficient and affordable, but its use has also changed in the enterprise. Replication technology serves the need for business continuance (BC) and disaster recovery (DR), ensuring that a recent data copy is always available for restoration in the face of corruption or physical damage. While this is still a primary application, replication technology is evolving and expanding into other uses.


One fundamental shift is the emphasis on continuance rather than restoration. Restoration takes precious time, and companies are deploying hot sites that either run in parallel with the main data center, often taking on some of the network load, or can step in quickly to assume network processing tasks if the main data center goes down. "It's true business continuance," Schulz says. "Not just for DR but to keep the business running so we don't have to recover -- just keep running."

Consolidation is another emerging use for remote replication, concentrating the data from remote offices and mobile users back at the main data center in a "many-to-one" replication scheme. This tactic eliminates tape hardware and error-prone backup procedures at remote sites. "According to our estimates, about 30% of an organization's mission-critical data is offsite at remote offices and branch offices [ROBOs]," Biggar says. Wide area file services (WAFS) also play into remote office consolidation, but WAFS is a means of accessing centralized data remotely -- it is not a remote replication technology per se.
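To make the "many-to-one" idea concrete, here is a minimal Python sketch of branch offices converging on a single central target. Everything in it -- the CentralStore and RemoteOffice classes, the office names -- is a hypothetical illustration, not any vendor's replication product; it only shows the shape of the data flow and the change detection that keeps already-replicated files off the WAN.

import hashlib
from dataclasses import dataclass, field

@dataclass
class CentralStore:
    """Hypothetical central data center target for many-to-one replication."""
    blocks: dict = field(default_factory=dict)  # (site, path) -> content hash

    def receive(self, site: str, path: str, data: bytes) -> bool:
        """Accept an update from a remote office; return True if data changed."""
        digest = hashlib.sha256(data).hexdigest()
        key = (site, path)
        if self.blocks.get(key) == digest:
            return False  # already current -- nothing crosses the WAN
        self.blocks[key] = digest
        return True

@dataclass
class RemoteOffice:
    """A branch office that replicates its files to the central store."""
    name: str
    central: CentralStore

    def replicate(self, files: dict) -> None:
        for path, data in files.items():
            changed = self.central.receive(self.name, path, data)
            print(f"{self.name}:{path} -> {'replicated' if changed else 'skipped'}")

# Many-to-one: several offices converge on one central target.
central = CentralStore()
for office_name in ("chicago", "denver", "atlanta"):
    RemoteOffice(office_name, central).replicate({"ledger.db": b"q3 transactions"})

In a real deployment the central site, not each office, would own scheduling and retention, which is exactly what removes tape handling from the branches.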

Finally, remote replication is finding a place in data migration tasks. For example, a data copy can be moved from main production storage to a secondary storage system for backup, then used for business analytics or data warehousing. Copies can also be used for in-house lab testing to verify the behavior of a new application version without jeopardizing production data.

Remote data replication eliminates tape problems

You might not expect disasters in the Great Lakes region of North America, but IT organizations there must guard against a variety of serious risks, including power disruptions, user errors and variable water levels. "In Minnesota our biggest disasters are tornadoes," says Tom Becchetti, senior capacity planner at a major financial services organization in St. Louis Park, Minn. "And you don't want to have your [main or backup] sites located along known tornado paths."

Protecting over 80 terabytes (TB) of data against disaster posed an almost insurmountable problem for tape backup technology. "Backing up to tape is a very complicated process for recovery, and we couldn't meet the business needs for a timely recovery to stay in business," Becchetti says, noting that it was extremely difficult to maintain file synchronization with a conventional backup methodology in his busy transactional environment. The need for faster backups, more reliable recovery and simplified operations prompted Becchetti to implement remote replication to a site located within 25 miles of the main data center.

In addition to two 1 Gigabit per second (Gbps) Ethernet links across a SONET ring, Becchetti uses two 2 Gbps Fibre Channel dense wavelength-division multiplexing (DWDM) connections through a local carrier. "We're actually using the Fibre carrier's site for our second site," he says. "So for security reasons, we don't have to worry about encryption." Mainframe data is replicated using the disk subsystem's Symmetrix Remote Data Facility (SRDF) functions. Open systems are clustered and replicated using Symantec Volume Manager and Cluster Server.

Early difficulties in implementation have largely been resolved, and replication has brought impressive results, yielding much faster performance in testing. "It was a learning curve, but it [recovery testing] went vastly quicker than any of our past recovery tests -- I mean a 50-1 difference," Becchetti says. However, rapid storage growth is already challenging the replication effort, and Becchetti urges careful consideration of the company's storage growth during the planning phase. "I always double what I think it [remote storage] should be, and it's never enough."

Remote data replication eases operational issues

Banks are particularly sensitive to the impact of disaster on daily business and customer service, so remote data replication is often an ideal fit for business continuance and disaster recovery tasks. For Absa Group Ltd., one of South Africa's largest commercial/retail banks, there are three main goals: guard against the threat of natural disaster, prevent data loss through human error and meet banking rules imposed by the South African Reserve Bank for transactional system disaster recovery.

Absa's primary data center in Johannesburg supports 300 TB of data spread across several storage platforms. Currently, about 30 TB of mission-critical data is replicated between EMC Symmetrix DMX-3 systems using SRDF/A. A dual-replication scheme ensures redundancy and rapid recovery: data is replicated synchronously across private fiber to a local facility just 500 meters (0.31 miles) away, and asynchronously across multiple fiber connections that establish a redundant path to a remote replication facility 41 kilometers (25.47 miles) distant. The CWDM fibers offer significant growth potential for future replication needs. "We've got four 2 Gbps fibers for this solution, and we're not even pushing that at all," says Jan van Loggerenberg, storage architect at Absa. "There's not even 20% utilization on those fibers."
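The tradeoff behind Absa's dual scheme -- synchronous replication for zero data loss over a short hop, asynchronous for the long haul -- can be sketched in a few lines of Python. This is a toy model, not SRDF: write_local, write_remote and the background queue are hypothetical stand-ins, with a sleep() simulating WAN round-trip latency.

import queue
import threading
import time

remote_queue: "queue.Queue[bytes]" = queue.Queue()

def write_local(data: bytes) -> None:
    pass  # stand-in for the local array committing the write

def write_remote(data: bytes) -> None:
    time.sleep(0.05)  # stand-in for WAN round-trip latency

def synchronous_write(data: bytes) -> None:
    """Sync replication: the host sees success only after BOTH copies exist,
    so the remote copy is always current but every write pays WAN latency."""
    write_local(data)
    write_remote(data)  # blocks until the remote site acknowledges

def asynchronous_write(data: bytes) -> None:
    """Async replication: acknowledge after the local commit and ship the
    update later, hiding WAN latency at the cost of a small exposure window."""
    write_local(data)
    remote_queue.put(data)  # drained in the background

def drain() -> None:
    while True:
        write_remote(remote_queue.get())
        remote_queue.task_done()

threading.Thread(target=drain, daemon=True).start()

start = time.perf_counter()
for _ in range(10):
    synchronous_write(b"txn")
print(f"sync : {time.perf_counter() - start:.2f}s for 10 writes")

start = time.perf_counter()
for _ in range(10):
    asynchronous_write(b"txn")
print(f"async: {time.perf_counter() - start:.2f}s for 10 writes (remote copy lags)")
remote_queue.join()  # wait for the background shipper to catch up

Run over 10 writes, the synchronous path pays roughly 10 WAN round trips while the asynchronous path returns almost immediately -- the same reason the synchronous leg is reserved for the 500-meter hop and the 41-kilometer leg runs asynchronously.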

Testing is performed periodically but handled in piecemeal fashion: each component is checked rather than the whole, because Absa does not impose on the live production network. "If all these components of the test work, the restoration will work," van Loggerenberg says. "We find that the possible production impact it will have to do a live failover [sic] is not worth the result of the test."

Prior to replication, the operational requirements of backup and recovery for disaster recovery were extensive -- a problem that replication quickly solved. "A DR test could take up to a week and very seldom was really successful," van Loggerenberg says. "With replication, a DR test is performed in hours with very high success rates." Although the initial cost of implementing remote replication was substantial, and bandwidth presents a recurring cost, the reduction in operating and resource overhead plus the testing success rate should be the real measure of value. "As with most automation projects, your initial outlay is quite high, but your operational savings over time is where the benefit pays off," van Loggerenberg says.

The future of remote data replication

Over the next 12-24 months, analysts predict that bandwidth should continue to get cheaper and more available -- though any improvements will likely be absorbed immediately by larger data volumes. More noticeable changes should appear in the functionality of remote replication products. Expect improved interoperability between heterogeneous storage systems and better integration with other storage products like snapshots, continuous data protection (CDP) and other disk-to-disk data protection products. This will allow fewer software products to handle more functions across a larger and more diverse infrastructure.

Other improvements will include better multivolume and multiarray consistency, reducing management worries when replicating large, complex applications. Also expect broader support for additional topologies, like one-to-many and many-to-one replication, which will allow a data center to replicate to several locations at once, or multiple remote locations to replicate back to a central point. Expect continued improvement in bandwidth optimization techniques, including better compression, deduplication and latency reduction. "Today I think replication for the masses is feasible from a functionality and consistency point of view, and also from a pricing perspective," Taneja says.
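Those bandwidth optimizations are easy to see in miniature. The Python sketch below deduplicates fixed-size chunks and compresses whatever survives before "sending"; the chunk size, hashing scheme and bytes_to_send function are illustrative assumptions, far cruder than what shipping replication products actually do.

import hashlib
import zlib

seen_chunks: set = set()  # hashes of chunks the remote side already holds

def bytes_to_send(data: bytes, chunk_size: int = 4096) -> int:
    """Deduplicate fixed-size chunks, then compress what remains --
    a crude stand-in for the optimizations replication products layer in."""
    unique = b""
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).digest()
        if digest in seen_chunks:
            continue  # remote already has it; send only a reference
        seen_chunks.add(digest)
        unique += chunk
    return len(zlib.compress(unique))

payload = b"customer record " * 4096  # highly redundant data
print("raw bytes:       ", len(payload))
print("first transfer:  ", bytes_to_send(payload))
print("repeat transfer: ", bytes_to_send(payload))  # all chunks deduplicated

On the repeat transfer nothing but chunk references would cross the wire, which is the core reason deduplication makes WAN replication affordable at scale.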


This was first published in August 2006
