
This article can also be found in the Premium Editorial Download "Storage magazine: Adding low-cost tiers to conserve storage costs."


[Figure: Two-tiered backup architecture — dedicated data movers (storage nodes/media servers) offload the bulk data path from the central backup server.]

Two-tier backup
In a traditional client/server backup architecture, data sent from a client to a backup server moves through the backup server to the target devices (see "Two-tiered backup architecture"). In a traditional IP-based architecture with a large number of clients, a tremendous amount of data passes through a single backup server. In larger environments, the backup server's CPUs, memory, NICs or internal I/O buses are frequently maxed out.

The introduction of two-tier backup architectures allows much of the load associated with moving data from the client to the backup target to be offloaded to dedicated data movers (storage nodes/media servers). The centralized backup server remains responsible for managing all of the metadata and shared library/robot control. The NDMP protocol (see "NDMP speeds backup traffic") allows network-attached storage (NAS) appliances to act as data movers, minimizing backup-generated LAN traffic.
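The split described above — bulk data through the data mover, metadata only to the central server — can be sketched in a few lines. This is a minimal illustrative model; the class and method names are invented for the example and don't correspond to any vendor's API.

```python
# Toy model of a two-tier backup topology: clients stream bulk data
# through a media server (data mover) to tape, while only metadata
# (the catalog) flows to the central backup server.

class BackupServer:
    """Central server: holds the metadata catalog and library control only."""
    def __init__(self):
        self.catalog = []                     # (client, bytes, tape) records

    def record(self, client, nbytes, tape_name):
        self.catalog.append((client, nbytes, tape_name))

class Tape:
    def __init__(self, name):
        self.name, self.blocks = name, []
    def write(self, data):
        self.blocks.append(data)

class MediaServer:
    """Dedicated data mover: the bulk data path, bypassing the master."""
    def __init__(self, master):
        self.master = master

    def backup(self, client, data, tape):
        tape.write(data)                                      # bulk path
        self.master.record(client, len(data), tape.name)      # metadata path

master = BackupServer()
mover = MediaServer(master)
t1 = Tape("TAPE01")
mover.backup("client-a", b"x" * 1024, t1)
print(master.catalog)    # master saw only metadata, never the 1KB payload
```

The point of the sketch is that `BackupServer.record` handles a few bytes of catalog data per job, while the gigabytes move through `MediaServer.backup` — which is why adding media servers scales backup throughput without upgrading the master.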

At the far end of the backup data path, tape drives are often the focus of backup bottlenecks. Newer tape drives are fast, with some exceeding 30MB/sec. It isn't uncommon for a tape drive to achieve higher throughput rates than disk drives. But achieving maximum tape drive throughput depends on sufficient amounts of data being sent to the tape drive to sustain data streaming. If insufficient data is sent to the tape drives, back-hitching will occur, greatly reducing overall throughput.
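Some back-of-the-envelope arithmetic shows how badly back-hitching hurts. The model below is a rough sketch with assumed figures (a 30MB/sec drive, a 2-second reposition penalty, an 8MB buffer per stop/start cycle), not measured values for any particular drive.

```python
# Rough model of tape streaming vs. back-hitching ("shoe-shining").
# All figures are illustrative assumptions, not vendor specifications.

DRIVE_RATE = 30.0   # MB/s native streaming rate (assumed)
REPOSITION = 2.0    # seconds lost per back-hitch (assumed)
BUFFER_MB = 8.0     # data written per stop/start cycle (assumed)

def effective_rate(feed_rate_mb_s):
    """Effective drive throughput when clients supply feed_rate_mb_s."""
    if feed_rate_mb_s >= DRIVE_RATE:
        return DRIVE_RATE                 # enough data: drive streams continuously
    # Otherwise each cycle is: wait for the buffer to refill at the slow
    # feed rate, then lose REPOSITION seconds backing the tape up.
    refill_time = BUFFER_MB / feed_rate_mb_s
    return BUFFER_MB / (refill_time + REPOSITION)

for feed in (5, 15, 30):
    print(f"feed {feed:2d} MB/s -> effective {effective_rate(feed):5.2f} MB/s")
```

Under these assumptions, a 5MB/sec feed yields only about 2.2MB/sec of effective throughput — the drive spends more time repositioning than writing — which is exactly why multiplexing several slow clients onto one drive is attractive.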

If too much data is sent to the drives, the drive once again becomes a bottleneck. The amount of data written to a tape device is usually controlled by adjusting the number of simultaneous write sessions (also called multiplexing) to each tape device (see "Disk-based backups are more forgiving"). The downside to multiplexing is that restore performance is decreased because backup sets are interleaved on the tape.
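The restore penalty from interleaving is easy to see in a toy model. In the sketch below (client names and block labels are invented for the example), three clients' sessions are multiplexed onto one tape; restoring any single client still requires scanning every interleaved block sequentially.

```python
# Toy illustration of tape multiplexing: blocks from simultaneous backup
# sessions are interleaved on the tape, so a single-client restore must
# scan past the other clients' blocks too.

from itertools import zip_longest

def multiplex(streams):
    """Interleave blocks from several client streams onto one 'tape'."""
    tape = []
    for group in zip_longest(*streams.values()):
        for owner, block in zip(streams.keys(), group):
            if block is not None:
                tape.append((owner, block))
    return tape

def restore(tape, client):
    """Return one client's blocks; a tape restore reads sequentially,
    so every block on the tape is scanned regardless of owner."""
    blocks_scanned = len(tape)
    data = [block for owner, block in tape if owner == client]
    return data, blocks_scanned

streams = {
    "client-a": ["a0", "a1", "a2"],
    "client-b": ["b0", "b1", "b2"],
    "client-c": ["c0", "c1", "c2"],
}
tape = multiplex(streams)
data, scanned = restore(tape, "client-a")
print(data)     # client-a's three blocks, recovered in order
print(scanned)  # nine blocks scanned to get three blocks of data
```

With three-way multiplexing, the restore reads three times as much tape as the data it actually wants — the trade-off the paragraph above describes: better backup streaming at the cost of slower restores.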

A frequent backup mistake is letting backup clients temporarily mount tape drives through a shared target software option in an attempt to improve backup throughput. Typically, throughput for that one client improves because the tape drive is temporarily dedicated to it. This eliminates tape drive contention and allows data to be moved to the target tape drives through the FC SAN, while eliminating the processing-intensive IP overhead. But improving backup speed for one client may decrease overall throughput to the tape drive because only a single backup client is writing to the drive. In this situation, the greater good of all systems may be sacrificed to benefit a few systems.

This was first published in August 2004
