Managing mixed RAID arrays
Users who plan to keep their arrays for a long time or to add capacity to existing arrays run into other issues. For example, the capacities of storage arrays double about every 12 to 18 months, driven by steadily increasing disk capacity. With 146GB FC drives already shipping and 292GB FC drives expected soon, tracking the RAID configuration of each controller and optimally placing data on these disk drives becomes more of an art than a science. But even as disk capacities grow, disk speeds have reached a 15,000 rpm plateau--spinning platters much faster generates excessive heat and vibration and consumes far more power, which can damage the disk and has kept vendors from pushing past that speed.
If a new disk is installed, you need to redistribute the existing data on the storage array across this new disk to optimize the array's performance. That's easier said than done: once you add new controllers with new disks to the array, new and more frequently accessed data gets written to those new disks, which may have slower seek times and can become the bottleneck in your system. Compounding the problem, as disk arrays age, storage manufacturers stop offering older and potentially better-performing disk media because they have lower capacities, despite their higher speeds. So rather than one controller with a faster disk housing 200GB of infrequently accessed data, a new controller now houses higher-capacity, slower disks holding both frequently and infrequently accessed data.
Many array vendors tell users they can buy the disk capacity they need now and add more disks later. The degree of difficulty in managing this scenario depends on the number and types of hosts attached to your array, and on the tools available to manage the storage system. If an administrator understands the performance characteristics of the existing disk drives, can manage the volumes from the host level, and can unobtrusively move data within the array, this approach works fine. But new data is often placed on the new disks without reorganizing and optimizing the data within the array, creating performance problems. And when new disks are added to the array, it's important to assign the correct RAID level: it only takes one mental lapse or an incorrectly set policy for the wrong data, or the wrong RAID configuration, to be assigned to the new disks.
Storage companies employ a number of methods to address these issues. Both EMC and EqualLogic have so-called self-healing software in their arrays that examines the workload and performance of the disks. When the software detects a hot spot on a disk, it identifies another area in the same array that is less likely to be overtaxed and moves the data from the problem area to the new location.
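The hot-spot logic can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's actual algorithm; the region names, I/O counts and threshold below are all invented for the example.

```python
def find_hot_spot(io_counts, threshold):
    """Return the busiest region if its I/O count exceeds the threshold, else None."""
    region, count = max(io_counts.items(), key=lambda kv: kv[1])
    return region if count > threshold else None

def pick_target(io_counts, exclude):
    """Pick the least-loaded region other than the hot spot as the move target."""
    candidates = {r: c for r, c in io_counts.items() if r != exclude}
    return min(candidates, key=candidates.get)

# Illustrative per-region I/O counters sampled over some interval.
io_counts = {"disk1:0-99": 950, "disk2:0-99": 120, "disk3:0-99": 40}
hot = find_hot_spot(io_counts, threshold=500)   # disk1 is the hot spot
target = pick_target(io_counts, exclude=hot)    # disk3 is the coolest region
```

A real implementation would also weigh the cost of the migration itself, since copying data off a hot region adds load to exactly the disks that are already struggling.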
Some vendors offer the ability to change RAID configurations on the fly. For instance, IBM's Shark allows administrators to change the RAID setting on an individual controller from RAID 5 to RAID 10 or vice versa. You may want to use this approach if you're not getting the sort of performance you expect with your existing RAID configuration.
However, you need to take some precautions when using these types of tools. For example, any data on the disks behind a controller whose RAID configuration is changed will be destroyed. And the tool may not warn you if any of the volumes on that controller are assigned and in use, so verify that all of the volumes are unassigned and the data is no longer needed before changing the RAID setting. Finally, for administrators who spread data across multiple back-end controllers, it becomes challenging to free up all of the disks behind a particular controller so its RAID configuration can be changed, especially if multiple hosts with different operating systems are attached to the array.
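The pre-change checklist above amounts to a simple pre-flight test: refuse the reconfiguration while any volume behind the controller is still assigned. The sketch below assumes a hypothetical inventory of volumes; it is not a real array-management API.

```python
def safe_to_reconfigure(controller, volumes):
    """Return a list of problems that block a RAID-level change on `controller`.

    `volumes` maps volume name -> {"controller": ..., "assigned": bool}.
    An empty list means every volume behind the controller is unassigned
    and the destructive RAID change can proceed.
    """
    problems = []
    for name, vol in volumes.items():
        if vol["controller"] == controller and vol["assigned"]:
            problems.append(f"volume {name} is still assigned to a host")
    return problems

# Illustrative inventory: vol1 is still mapped to a host behind ctlA.
volumes = {
    "vol1": {"controller": "ctlA", "assigned": True},
    "vol2": {"controller": "ctlA", "assigned": False},
    "vol3": {"controller": "ctlB", "assigned": True},
}
issues = safe_to_reconfigure("ctlA", volumes)  # non-empty, so block the change
```

The same check is what makes multi-controller layouts hard: with data striped across several back-end controllers, it is rare for every volume behind any one of them to be unassigned at the same time.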
When using mixed RAID, keep a close eye on how data gets dispersed across the back-end controllers and the size of the disks you plan to add. These two elements will have a huge impact on the performance of the array.
This was first published in November 2004