Pump up array performance


Some subsystems let you select the RAID segment or chunk size. For high-throughput workloads with a predictable transfer size, setting the RAID segment size equal to or greater than that transfer size can yield some optimization of I/O performance.

RAID stripe size is another parameter that some subsystems let you configure through RAID x+0 (or x0) levels. The +0 indicates that a LUN can span more than one RAID group. For example, a RAID 5+0 layout built from two RAID 5 groups, each with three data drives and one parity drive, uses six data drives to support the LUN. However, a single RAID 5 group with six data drives and one parity drive can support it equally well. On some subsystems, a RAID group is spread across all spindles in the subsystem, so it is, by definition, an x+0 RAID level.

A larger stripe size helps when dealing with "hot" LUNs, which exhibit more I/O activity than the typical LUN in your subsystem. By spreading a hot LUN's workload across a number of RAID groups or a larger RAID stripe (and therefore more spindles), you get more balanced activity against all drive spindles.
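To make the arithmetic concrete, here is a rough Python sketch of the drive counts behind the RAID 5+0 example above and of choosing a segment size at least as large as a known transfer size. The function names, the candidate segment sizes and the 200KB transfer size are illustrative assumptions, not settings from any particular subsystem.

# Rough sketch of the drive-count arithmetic behind RAID x+0 (hypothetical
# helper, not a vendor tool). Assumes RAID 5 groups of "data + 1 parity" drives.

def raid50_drive_count(groups: int, data_drives_per_group: int) -> dict:
    """Return data/parity/total drive counts for a RAID 5+0 layout."""
    data = groups * data_drives_per_group
    parity = groups  # one parity drive per RAID 5 group
    return {"data": data, "parity": parity, "total": data + parity}

def pick_segment_size_kb(transfer_size_kb: int, candidates=(64, 128, 256, 512)) -> int:
    """Pick the smallest supported segment size >= the typical transfer size."""
    for size in candidates:
        if size >= transfer_size_kb:
            return size
    return candidates[-1]

# The article's example: two 3+1 RAID 5 groups striped together (RAID 5+0)
print(raid50_drive_count(groups=2, data_drives_per_group=3))
# -> {'data': 6, 'parity': 2, 'total': 8}

# A single 6+1 RAID 5 group uses six data drives but only one parity drive
print(raid50_drive_count(groups=1, data_drives_per_group=6))
# -> {'data': 6, 'parity': 1, 'total': 7}

# For a predictable 200KB sequential transfer, choose a 256KB segment
print(pick_segment_size_kb(200))  # -> 256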

Runtime performance tuning
Some cache settings may be specified during configuration, but most should be left at their default values and then tweaked at runtime, once actual I/O workloads can be observed. Some subsystems have cache settings that can be assigned to individual LUNs, some have cache settings only at the subsystem level and others fall somewhere in between. The cache options discussed later in this article may not apply to a specific LUN, but may need to be specified for higher-level LUN groupings instead.

Write back vs. write through
There are a number of ways to tune the subsystem cache to optimize I/O performance. For high-write workloads, write-back caching can be an effective technique to optimize performance. For highly sequential, high-throughput and high-write workloads, however, disabling write-back caching (enabling write-through) can improve performance. In write-through mode, write data bypasses the cache and goes straight to the subsystem's disks.
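That guidance can be summed up in a small decision sketch. The Python below is purely illustrative; the workload fields and the 50% write threshold are assumptions, and on a real array the choice is made through the vendor's cache settings rather than code like this.

# Illustrative decision sketch only; real arrays expose this as a per-LUN or
# per-subsystem cache setting, typically via the vendor's management tools.

def suggest_write_cache_mode(write_pct: float, sequential: bool,
                             high_throughput: bool) -> str:
    """Suggest a write-cache mode from coarse workload characteristics."""
    if sequential and high_throughput and write_pct > 0.5:
        # Large sequential writes can stream to disk faster than they can be
        # staged through cache, so write-through may perform better.
        return "write-through"
    # Otherwise, write-back lets the controller acknowledge writes from cache
    # and destage them to disk later.
    return "write-back"

print(suggest_write_cache_mode(write_pct=0.8, sequential=True, high_throughput=True))
# -> write-through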

Another cache option is write mirroring, which keeps the controllers in a cluster pair in sync with respect to the disk images they maintain. Write mirroring can impact performance for high-write workloads because all write data has to be transferred to the other controller's cache. There are also data availability implications to disabling write mirroring that need to be considered.

For a highly sequential workload, read-ahead caching is a must. For these workloads, it's important to specify a read-ahead cache amount that's roughly equal to the data transferred during the time it takes to perform one disk-read operation (for 15K rpm FC drives at 2Gb/sec, this could be 1.5MB to 3MB). Most high-end subsystems optimize this value in real time, taking into consideration current I/O workload characteristics and cache size.
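A quick back-of-the-envelope calculation shows where the 1.5MB to 3MB range comes from. The per-operation service times below are assumptions for illustration; a 2Gb/sec FC link moves roughly 200MB/sec of payload, so the data transferred during one disk-read operation works out to approximately:

# Rough arithmetic behind the read-ahead sizing rule: read-ahead amount ~=
# link throughput x time for one disk-read operation. Figures are assumptions
# for illustration, not vendor specifications.

LINK_MB_PER_S = 200.0        # 2Gb/sec FC ~= 200MB/sec of payload
DISK_READ_MS = (7.5, 15.0)   # assumed per-read service time range
                             # (seek + rotational latency + transfer)

for ms in DISK_READ_MS:
    read_ahead_mb = LINK_MB_PER_S * (ms / 1000.0)
    print(f"{ms:>4} ms per read  ->  ~{read_ahead_mb:.1f}MB of read-ahead")
# -> ~1.5MB and ~3.0MB, matching the range cited above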

This was first published in January 2006
