Backup-to-disk performance tuning


This article can also be found in the Premium Editorial Download "Storage magazine: Upgrade path bumpy for major backup app."


Lessons learned
We learned the following lessons during implementation, testing and debugging:

  • Watch for poorly configured production storage.
  • Disperse I/O across the SATA array with more RAID groups and LUNs.
  • Plan for adequate downtime for production servers when adding new backup storage.
  • Use striped storage volumes at the host layer for backup storage.
  • Enable active/active for pathing software for backup storage.
  • Benchmark several different storage configurations before pushing the backup solution into production to validate performance.
  • Factor restore speeds at 1.5 times the backup speed for disk vs. 0.5 times the backup speed for tape.
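The last rule of thumb is easy to turn into a planning number. The sketch below estimates restore windows from a measured backup rate using those 1.5x (disk) and 0.5x (tape) factors; the default rate and data size are placeholders, not figures from this project.

```shell
#!/bin/sh
# Rough restore-window estimator based on the rules of thumb above.
# Assumptions: $1 is a measured backup rate in MB/sec, $2 is the data
# set size in GB; the defaults below are illustrative only.
backup_rate_mb=${1:-60}      # MB/sec achieved during backup
data_gb=${2:-500}            # amount of data to restore, in GB

# Disk restores run ~1.5x the backup rate; tape restores ~0.5x.
disk_restore_rate=$(awk "BEGIN { print $backup_rate_mb * 1.5 }")
tape_restore_rate=$(awk "BEGIN { print $backup_rate_mb * 0.5 }")

disk_minutes=$(awk "BEGIN { printf \"%.1f\", $data_gb * 1024 / $disk_restore_rate / 60 }")
tape_minutes=$(awk "BEGIN { printf \"%.1f\", $data_gb * 1024 / $tape_restore_rate / 60 }")

echo "disk restore: ~${disk_minutes} min"
echo "tape restore: ~${tape_minutes} min"
```

With the 60MB/sec backup rate seen later in this article, a 500GB restore works out to roughly an hour and a half from disk vs. almost five hours from tape.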
The primary hurdle for this project was the existing production storage configuration. The way production storage was allocated to some of the servers on the SAN created bottlenecks: several large file systems were composed of just a couple of very large LUNs. I anticipated this would be a minor problem for overall backup performance and still expected decent throughput. Instead, it proved to be the top performance limitation. Even servers with high-end storage on state-of-the-art Fibre Channel drives couldn't push more than 6MB/sec for multi-terabyte file systems on one or two LUNs. As a result, these file systems were reengineered with more LUNs for each file system. The new file systems went from pushing a mere 6MB/sec to approximately 60MB/sec to 100MB/sec.
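Spreading a file system over more LUNs typically means striping them together at the host layer, as the lessons list recommends. A provisioning sketch using Linux LVM is shown below; the device names, volume names, LUN count and stripe size are illustrative assumptions, not the configuration used in this project.

```shell
# Sketch: rebuild a backup file system across four LUNs with a
# host-striped LVM volume. /dev/sdb..sde, the volume names, the 256KB
# stripe size and the 2TB size are all placeholders.
pvcreate /dev/sdb /dev/sdc /dev/sdd /dev/sde
vgcreate backupvg /dev/sdb /dev/sdc /dev/sdd /dev/sde

# -i 4 stripes the volume across all four LUNs; -I 256 sets the stripe size in KB
lvcreate -n backuplv -i 4 -I 256 -L 2T backupvg

mkfs -t xfs /dev/backupvg/backuplv
mount /dev/backupvg/backuplv /backup
```

Because every sequential write now fans out across all four LUNs, a single backup stream can draw on the spindles behind each one instead of queuing against a single large LUN.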

To maximize I/O performance, LUNs created on the HDS 9570V for backups and allocated to backup servers had to be spread over different RAID groups across the entire array. By creating more RAID groups and LUNs, we doubled performance. We tested configurations using five RAID groups with five large LUNs, and 10 RAID groups with 10 LUNs, and found that more LUNs yield better I/O performance (see "How the number of LUNs affects performance," below).
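Comparing candidate configurations like these doesn't require elaborate tooling; a timed sequential write against each file system is often enough for a first pass. The loop below is a minimal sketch of that kind of validation; pass the mount points to compare as arguments (/tmp is only a stand-in default), and note that a real test should use a file large enough to defeat array and host caching.

```shell
#!/bin/sh
# Quick sequential-write benchmark for candidate backup file systems.
# Usage: bench.sh /mnt/cfg-5lun /mnt/cfg-10lun ...
[ $# -eq 0 ] && set -- /tmp        # stand-in default for illustration

size_mb=256                        # small for demonstration; use far more in practice

for mnt in "$@"; do
    testfile="$mnt/bench.$$"
    start=$(date +%s)
    dd if=/dev/zero of="$testfile" bs=1M count="$size_mb" 2>/dev/null
    sync
    end=$(date +%s)
    elapsed=$((end - start))
    [ "$elapsed" -lt 1 ] && elapsed=1   # guard against divide-by-zero on fast runs
    rate=$((size_mb / elapsed))
    rm -f "$testfile"
    echo "$mnt: ${rate} MB/sec (sequential write)"
done
```

Running this against each trial layout before cutover gives a defensible apples-to-apples number, which is exactly the kind of pre-production benchmarking the lessons list calls for.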

This was first published in September 2006
