A wider selection of disk types, advances in storage hardware, rapid data growth and larger drive capacities are changing the way storage administrators use RAID, lengthening rebuild times and constraining performance in some environments, according to industry experts.
Historically, certain drive types have correlated strongly with common RAID levels. Enterprise-class Fibre Channel (FC) drives traditionally employed RAID-1 for basic mirroring, but most FC RAID groups now use a parity-based approach requiring fewer drives, such as RAID-5. If performance must be optimized, striping can be added to the RAID group using RAID-0, creating nested configurations known as RAID-1+0 or RAID-5+0.
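The nested configurations above combine striping with mirroring or parity. A minimal sketch of the block placement involved, assuming a simple round-robin RAID-0 layout in which each RAID-1+0 stripe member is a mirrored pair (the function names and layout are illustrative, not any vendor's actual addressing scheme):

```python
def raid0_location(logical_block, num_drives):
    """RAID-0: stripe logical blocks round-robin across drives.
    Returns (drive index, block offset on that drive)."""
    return logical_block % num_drives, logical_block // num_drives

def raid10_locations(logical_block, num_pairs):
    """RAID-1+0: stripe across mirrored pairs, so every logical block
    lands on both drives of its pair."""
    pair, offset = raid0_location(logical_block, num_pairs)
    return [(2 * pair, offset), (2 * pair + 1, offset)]

# Logical block 5 on a four-drive RAID-0 set lands on drive 1, offset 1:
print(raid0_location(5, 4))       # → (1, 1)
# The same block on a RAID-1+0 set of four pairs is written twice:
print(raid10_locations(5, 4))     # → [(2, 1), (3, 1)]
```

The striping keeps sequential I/O spread across spindles, which is why adding RAID-0 on top of a mirror or parity group improves throughput.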
The larger capacities of newer FC disks are also prompting users to place fewer drives in each FC RAID group. For example, a basic FC RAID group might hold up to eight 36 GB or 73 GB 15K rpm drives, but as FC drives have climbed to 146 GB and larger, users are employing only four or five drives in a given group.
Although SATA drives offer lower performance than their FC counterparts, they pose a particular problem for RAID: they are less reliable, and capacities of 600 GB and larger can push rebuild times well past several hours. This has driven many Tier-2 storage users to embrace dual-parity schemes such as RAID 6.
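A back-of-the-envelope estimate shows why large SATA drives stretch rebuilds into hours. The rebuild rate below is a hypothetical figure for illustration; real rates depend on the controller, RAID level, and competing I/O load:

```python
def rebuild_hours(capacity_gb, rebuild_rate_mb_s):
    """Hours needed to sequentially rewrite a whole drive at a
    given effective rebuild rate (MB/s)."""
    seconds = (capacity_gb * 1000) / rebuild_rate_mb_s
    return seconds / 3600

# A 600 GB drive rebuilt at an assumed 50 MB/s takes over three hours:
print(round(rebuild_hours(600, 50), 1))  # → 3.3
```

In practice rebuild rates are often throttled further while the array serves production I/O, so the real window of vulnerability can be considerably longer than this idealized figure.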
RAID 6 is quickly making inroads into storage arrays because of its reliability. RAID 6 is a dual-parity data protection scheme that computes two independent parity blocks and distributes them across the disks in the group. This protects against two simultaneous failures in the disk group -- a real possibility with large, low-end SATA drives.
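A minimal sketch of the two parities in one RAID 6 stripe, assuming the common construction of a simple XOR parity (P) plus a Reed-Solomon parity (Q) over GF(2^8); real controllers rotate the P and Q positions across drives, which this sketch omits:

```python
def gf_mul(a, b):
    """Multiply two bytes in GF(2^8) using the polynomial 0x11d."""
    p = 0
    while b:
        if b & 1:
            p ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return p

def gf_pow2(i):
    """Generator 2 raised to the i-th power in GF(2^8)."""
    g = 1
    for _ in range(i):
        g = gf_mul(g, 2)
    return g

def parity(data_blocks):
    """Compute P (plain XOR) and Q (XOR weighted by drive index)
    for one stripe of equal-length data blocks."""
    length = len(data_blocks[0])
    p, q = bytearray(length), bytearray(length)
    for i, block in enumerate(data_blocks):
        g = gf_pow2(i)
        for j, byte in enumerate(block):
            p[j] ^= byte
            q[j] ^= gf_mul(g, byte)
    return bytes(p), bytes(q)

# Any single lost data block can be rebuilt from P and the survivors;
# the independent Q parity is what covers a second simultaneous failure.
stripe = [b"\x10\x20", b"\x30\x40", b"\x55\x66"]
p, q = parity(stripe)
rebuilt = bytes(x ^ y ^ z for x, y, z in zip(p, stripe[0], stripe[2]))
assert rebuilt == stripe[1]
```

The extra Q computation is why RAID 6 writes cost more than RAID 5 writes, a trade-off arrays accept in exchange for surviving a second drive loss during a long rebuild.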
"The cost of additional disk to provide this level of protection is not substantial," Wendt said. "And the likelihood of losing two drives before the first drive rebuilds from a failure is a realistic possibility."
However, according to the analysts, hard drives are so large today that rebuild times cannot be ignored, and administrators must consider the implications of rebuilds on their storage performance and availability.
"This [rebuild time] creates vulnerability (except with dual-parity RAID) and impacts productivity," said Tony Asaro, senior analyst at the Enterprise Strategy Group. "We are beginning to see storage systems with fast RAID rebuild times, taking one to three hours to rebuild 250 GB HDDs [hard disk drives], or even denser drives, in that time frame."
Fortunately, RAID systems are becoming more intelligent, the analysts say, incorporating more diagnosis and automation intended to reduce management time. Many of these features are invisible in the GUI: they perform predictive analysis in the background to anticipate disk failures and initiate the rebuild process to an available spare before a failure actually occurs. Additional power management features lower the power budget for large arrays.
"There's more and more of that functionality going in, even though much of it is transparent," said Greg Schulz, founder and analyst with the StorageIO Group.
Over the next 12 to 24 months, the expanded use of RAID 6 and other dual-parity schemes is a virtual certainty as companies keep more data available on low-cost, high-capacity drives. RAID vendors will also come out with "fast rebuild" features that can restore hundreds of gigabytes in just an hour or so, and RAID performance should improve as disk striping expands its role in storage systems, according to the experts.
"On the performance front, we might see more adoption of vertical and horizontal striping," Schulz said, noting that striping would extend across RAID groups -- not just across drives within a group. Wendt said he foresees striping optimizations for SATA drives that might eventually place SATA drives on par with FC drives for random read/write performance.
Users may also see more RAID products that allow LUNs to identify their physical location on a hard drive; products like Pillar Data Systems Inc.'s Axiom array already offer this kind of feature.