Trina MacDonald, Trends associate editor
Published: 15 Nov 2006
As more users rely on remote replication as the cornerstone of their disaster recovery (DR) procedures, it's becoming clear that replication and database optimization don't always go hand in hand, particularly when it comes to Microsoft Exchange.
To reclaim unused space, users must take Exchange databases offline for defragmentation and compaction. In a replication scenario this creates a further complication: offline defragmentation rewrites large portions of the database file, and all of those changes must then be mirrored to the remote site, generating a surge of replication traffic. So far, replication vendors can't offer clear remedies.
To get around this, some companies are splitting their archive and replication processes, for example by placing mailboxes that must be archived on Exchange servers separate from those marked for replication, according to Brian Babineau, an analyst at the Enterprise Strategy Group. "This is a piecemeal solution," Babineau said. "It's still, fundamentally, a problem that hasn't been solved."
Double-Take offers an alternative by letting users temporarily disconnect replication. If the database must be taken offline, replication can be reconnected after defragmentation and other maintenance are complete. Double-Take then remirrors the data using a block checksum comparison, transmitting only the blocks that have changed.
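The remirror step can be thought of as a block-by-block digest comparison between the two copies. The following is an illustrative Python sketch of that general technique, not Double-Take's actual implementation; the block size, hash choice, and helper names here are assumptions for the example.

```python
import hashlib
import io

BLOCK_SIZE = 64 * 1024  # hypothetical block size; the product's real value isn't documented here

def block_checksums(fileobj, block_size=BLOCK_SIZE):
    """Return one digest per fixed-size block of a binary stream."""
    sums = []
    while True:
        block = fileobj.read(block_size)
        if not block:
            break
        sums.append(hashlib.md5(block).digest())
    return sums

def changed_blocks(source_sums, target_sums):
    """Indices of blocks that differ between copies, or exist only on the source."""
    return [i for i, digest in enumerate(source_sums)
            if i >= len(target_sums) or target_sums[i] != digest]

# Demo: after offline maintenance, only the middle block of the source differs
# from the remote copy, so only that block would need to be retransmitted.
target = io.BytesIO(b"A" * BLOCK_SIZE + b"B" * BLOCK_SIZE + b"C" * BLOCK_SIZE)
source = io.BytesIO(b"A" * BLOCK_SIZE + b"X" * BLOCK_SIZE + b"C" * BLOCK_SIZE)
to_send = changed_blocks(block_checksums(source), block_checksums(target))
```

The payoff is that only the digests cross the wire for comparison; full block data is sent only for the indices in `to_send`, which is why a remirror after maintenance is far cheaper than a full copy.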
Microsoft is recommending Cluster Continuous Replication (CCR), a new feature in Exchange 2007, which is due to ship by year's end. Using log shipping, CCR allows an active node in a Windows Server 2003 cluster to replicate to a passive node in the same cluster, which can be geographically distant. Instead of performing offline defragmentation, Scott Schnoll, technical writing lead for Exchange user education at Microsoft, suggested using Exchange's Move Mailbox feature: mailboxes are moved into a new, empty database, and the old database can be deleted once the move is complete. The clean database is then replicated remotely.
"As new data is added from the Move Mailbox operation, it's all being written in contiguous blocks," Schnoll said. "The interruption in service is only while each individual's mailbox is being moved." He added that this online operation can be scheduled.
None of these solutions offers complete satisfaction, particularly for very large enterprise environments. One user, a senior systems engineer at an international communications company, is in the midst of implementing a disaster recovery plan for 43,000 Exchange users worldwide with approximately 13 terabytes (TB) of data. As he sees it, the ideal solution for keeping database copies clean would be for a vendor to create its own Exchange defragmentation engine that could run at the primary and disaster recovery sites simultaneously. "But that would definitely make Microsoft really unhappy," he said.