Creating a large e-mail system

Final e-mail system

With storage decoupled from the servers and the servers clustered, this e-mail system provides high availability and performance, and maintenance can be carried out more easily without downtime.

Near-linear scalability
Another goal for this redesign was to increase scalability so that a new module could be added at any time without disrupting the entire e-mail system. This meant the modules had to be self-contained and independent of each other while working together to service the e-mail customers. This building-block approach is found today in several storage utility models, but at the time was relatively new for an e-mail application. Below is a depiction of the near-linear scaling model.
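As a rough sketch of how capacity grows under this building-block model, consider the following snippet; the per-module account figure and overhead factor are illustrative assumptions, not numbers from the original design.

<pre>
# Illustrative sketch of building-block (near-linear) scaling.
# ACCOUNTS_PER_MODULE and OVERHEAD_PER_MODULE are assumed figures,
# not values from the actual e-mail system.

ACCOUNTS_PER_MODULE = 250_000   # assumed capacity of one self-contained module
OVERHEAD_PER_MODULE = 0.02      # assumed small coordination cost per extra module

def system_capacity(modules: int) -> int:
    """Total accounts served; each added module contributes almost fully."""
    scaling_factor = max(1 - OVERHEAD_PER_MODULE * (modules - 1), 0)
    return int(modules * ACCOUNTS_PER_MODULE * scaling_factor)

for n in range(1, 5):
    print(f"{n} module(s): ~{system_capacity(n):,} accounts")
</pre>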

Each module consisted of two disk arrays. The first array was mirrored, using relatively small physical drives and a large cache, and was fitted with the maximum number of front-end channel adapters; larger physical disks in the same array held the mirrored copies. In addition to the disk arrays, each module included two high-end database servers configured for high availability and a smaller file server, connected to the database array and the message file array, respectively.
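To make that composition concrete, here is a minimal sketch of one module's building blocks; the server names, cache sizes and port counts are hypothetical placeholders rather than configuration details from the article.

<pre>
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of one e-mail module's components; all sizes,
# counts and names are placeholders rather than real configuration.

@dataclass
class DiskArray:
    purpose: str           # "database" or "message files"
    mirrored: bool
    cache_gb: int          # the database array carried the large cache
    front_end_ports: int   # channel adapters facing the servers
    recovery_ports: int    # ports held back for archival / remote mirroring

@dataclass
class EmailModule:
    db_servers: List[str] = field(default_factory=lambda: ["db-a", "db-b"])  # HA pair
    file_server: str = "msgfile-01"
    database_array: DiskArray = field(default_factory=lambda: DiskArray(
        "database", mirrored=True, cache_gb=64, front_end_ports=16, recovery_ports=2))
    message_array: DiskArray = field(default_factory=lambda: DiskArray(
        "message files", mirrored=True, cache_gb=8, front_end_ports=8, recovery_ports=2))

module = EmailModule()
print(module.db_servers, "->", module.database_array.purpose)
print(module.file_server, "->", module.message_array.purpose)
</pre>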

A complete module consisted of three servers and two disk arrays with ports reserved for recovery channels. The e-mail modules were built in distinct phases as follows:

Phase 1: Server segmentation. To create the level of scalability needed to handle almost 10,000 new accounts per day, the first task was to decouple the database and message file servers. Once the servers were in place with the storage systems, the next two phases of data migration could proceed.

Phase 2: Database migration. The database migration was performed during a window late in the week when usage was lowest. Once a working database was migrated over, the message files could be moved next.

Phase 3: Message migration. The message file migration was performed during the same window as the database migration. Once the messages and databases were verified as functional, the network was pointed to the new servers, and e-mail operations continued on the new e-mail module.

Phase 4: Recovery. The final phase was to connect the open channel ports for data archival and remote mirroring. This phase was done without any risk of downtime.

These phases were repeated four times over a four-week period to create a scalable system of four e-mail modules.
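A short sketch of that repeated, phased build-out is shown below; the function and phase strings are illustrative stand-ins for the real segmentation, migration, verification and cutover work.

<pre>
# Sketch of the four-phase build-out, repeated once per module.
# Each string stands in for the real operational work described above.

PHASES = [
    "segment the database and message file servers",
    "migrate the database during the low-usage window",
    "migrate the message files during the same window, verify, cut over",
    "connect recovery channel ports for archival and remote mirroring",
]

def build_module(module_id: int, week: int) -> None:
    for step, phase in enumerate(PHASES, start=1):
        print(f"week {week}, module {module_id}, phase {step}: {phase}")

# Four modules rolled out over four weeks.
for week, module_id in enumerate(range(1, 5), start=1):
    build_module(module_id, week)
</pre>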

Further enhancements
The final enhancement to the e-mail system was the addition of several recovery features. First, two volumes were set aside for every active volume to serve as mirrors.

The first volume would be synchronized with the active data on a two-hour interval. This allowed for an incremental recovery of no more than two hours' worth of e-mail activity.

The second volume was synchronized on a 12-hour interval. This volume guarded against corruption errors that weren't discovered within the two-hour window, before the first volume was resynchronized and thus overwritten. It could also be used for future asynchronous remote mirroring.
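A minimal sketch of that two-tier resynchronization schedule is shown below, using Python's standard sched module; resync() is a hypothetical stand-in for the disk array's actual volume-copy operation.

<pre>
import sched
import time

# Hypothetical sketch: two mirror volumes refreshed on different intervals.
# The 2-hour mirror gives incremental recovery; the 12-hour mirror guards
# against corruption not caught before the 2-hour copy is overwritten.

RESYNC_INTERVALS = {"mirror-2h": 2 * 3600, "mirror-12h": 12 * 3600}

scheduler = sched.scheduler(time.time, time.sleep)

def resync(volume: str) -> None:
    # Stand-in for the disk array's volume resynchronization command.
    print(f"{time.ctime()}: resynchronizing {volume} from the active volume")

def schedule_resync(volume: str, interval: int) -> None:
    resync(volume)
    # Re-arm so the mirror keeps refreshing on its own interval.
    scheduler.enter(interval, 1, schedule_resync, (volume, interval))

for volume, interval in RESYNC_INTERVALS.items():
    scheduler.enter(interval, 1, schedule_resync, (volume, interval))

# scheduler.run()  # would block for hours; left commented in this sketch
</pre>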

This was first published in July 2003
