RAID technology adds reliability and overcomes concerns
RAID implementation issues
When considering a RAID deployment, it's important to determine where RAID functions are performed, because the location of that processing has traditionally influenced the performance of your RAID system. For example, storage arrays typically use a storage controller (a.k.a. RAID controller) to implement RAID features for the array, drawing instructions from firmware on the controller itself.
By comparison, RAID can be implemented at the server using a hardware adapter card from manufacturers like Adaptec Inc. or LSI Logic Corp. RAID functionality is now commonplace on the server's main board (a.k.a. motherboard). RAID at the server is typically operated through a volume manager or through the operating system. Servers generally use RAID to mirror the boot drive. "Users will employ a SCSI controller to mirror two internal disk drives and keep core operating system information on these disk drives and use them to boot the server," Wendt says. Analysts also note that high-end features, once seen only in storage arrays, are migrating down to adapter cards and server motherboards.
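The boot-drive mirroring Wendt describes is plain RAID-1: every write lands on both members, so either disk alone can serve reads if its twin fails. A minimal Python sketch of the idea (illustrative only; real controllers mirror at the block layer, not Python lists):

```python
# Sketch of RAID-1 mirroring: writes go to every healthy member,
# reads fall back to whichever member survives.
class Raid1Mirror:
    def __init__(self, num_blocks):
        self.disks = [[None] * num_blocks, [None] * num_blocks]
        self.failed = [False, False]

    def write(self, block, data):
        for d, disk in enumerate(self.disks):
            if not self.failed[d]:
                disk[block] = data

    def read(self, block):
        for d, disk in enumerate(self.disks):
            if not self.failed[d]:
                return disk[block]
        raise IOError("both mirror members have failed")

mirror = Raid1Mirror(num_blocks=8)
mirror.write(0, b"boot sector")
mirror.failed[0] = True                    # simulate losing one drive
assert mirror.read(0) == b"boot sector"    # data survives on the twin
```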
Analysts recognize an expanding role for software RAID in higher level functions like migrating data between RAID levels or converting levels while in use. "I see users employing RAID-1 software mirrors at the server level to move data nondisruptively from one local storage device to another," Wendt says. He notes a trend toward using volume management software on storage arrays or network controllers to present a single logical unit number (LUN) to Windows servers -- allowing the operating system on the storage array to manage the software RAID configuration.
Changes in RAID configuration/management
RAID management can be a challenging exercise. Tracking the RAID levels in each array, monitoring the disks within a RAID group, and keeping adequate spares in place can tax even the most experienced storage administrator. But the tools and tactics of RAID management are systematically improving.
Storage virtualization offers a significant benefit to RAID. "We see internal storage virtualization, creating logical pools of storage, making it easier to change RAID configurations on the fly," Asaro says. Virtualization also improves storage utilization and allows data to be moved or organized based on tier performance or criteria other than physical location.
RAID systems are becoming more intelligent, incorporating diagnosis and automation intended to reduce human management time. Many of these features are not exposed directly through the GUI; instead, they perform self-optimization and predictive analysis, anticipating impending disk failures and initiating the rebuild to an available spare before a failure actually occurs. Additional power management features lower the power budget for large arrays. "There's more and more of that functionality going in, even though much of it is transparent," Schulz says.
Addressing RAID concerns
RAID technology is well established and proven, and most of its technical issues were resolved long ago. However, there are still lingering concerns that deserve serious consideration from storage administrators.
Hard drives are so large today that rebuild times cannot be ignored, and administrators must consider the implications of rebuilds on their storage performance and availability. "This [rebuild time] creates vulnerability (except with dual-parity RAID) and impacts productivity," Asaro says. "We are beginning to see storage systems with fast RAID rebuild times, taking one to three hours to rebuild 250 GB HDDs [hard disk drives], or even denser, in that time frame."
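The rebuild window in Asaro's quote implies a sustained rebuild rate that can be sanity-checked directly. A quick back-of-the-envelope calculation (figures taken from the quote):

```python
# Sustained throughput implied by rebuilding a 250 GB drive
# in one to three hours, as quoted above.
capacity_gb = 250
for hours in (1, 3):
    rate_mb_s = capacity_gb * 1000 / (hours * 3600)
    print(f"{hours} h rebuild -> about {rate_mb_s:.0f} MB/s sustained")
# -> roughly 69 MB/s for a 1-hour rebuild, 23 MB/s for 3 hours
```

Those rates must be sustained alongside production I/O, which is why rebuild time impacts productivity as well as vulnerability.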
Storage administrators must also consider the behaviors of their RAID controller failover configuration. If one RAID controller fails, it's important to know how the corresponding storage will be affected. For example, an active-active configuration may support immediate failover, but an active-passive configuration may demand up to a minute to switch the LUN to the alternate controller -- leaving the RAID group on that LUN temporarily inaccessible.
Finally, LUN management continues to be problematic because not all storage arrays support LUNs in the same way. Analysts like Wendt suggest using volume management options on the storage array to create larger LUNs by combining LUNs of different sizes. Alternatively, a volume manager can help stripe data across multiple LUNs of the same size so that a server sees only one LUN.
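The first option above is essentially concatenation: the volume manager maps a single logical address space across member LUNs of different sizes. A sketch of that address translation (the member sizes are hypothetical):

```python
# Sketch of LUN concatenation: a logical block address is mapped
# to (member LUN, offset) by walking the member sizes in order.
LUN_SIZES = [100, 250, 400]   # blocks per member LUN (illustrative)

def map_block(logical_block):
    offset = logical_block
    for lun, size in enumerate(LUN_SIZES):
        if offset < size:
            return lun, offset
        offset -= size
    raise ValueError("address beyond end of concatenated volume")

assert map_block(0) == (0, 0)
assert map_block(100) == (1, 0)     # first block of the second LUN
assert map_block(700) == (2, 350)
```

The server sees one 750-block LUN; the mapping layer hides the unequal members underneath.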
The future of RAID
Although there are no dramatic new levels or technologies on the horizon for RAID, analysts point to a variety of interesting developments to watch for over the next 12 to 24 months. First, the expanded use of RAID-6 and other dual-parity schemes is a virtual certainty as companies keep more data available on low-cost, high-capacity drives. Look for RAID vendors to support "fast rebuild" features that can restore hundreds of gigabytes in just an hour or so. Improved disk diagnostic features should offer more reliable predictions of impending drive failures, allowing the rebuild process to begin before an actual fault occurs -- making disk failures almost transparent.
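The single-parity rebuild underlying these schemes is just an XOR across the surviving strips; dual-parity RAID-6 adds a second, independently computed parity (Reed-Solomon over a Galois field, omitted here) so that two simultaneous failures survive. A sketch of the single-parity case with toy byte strips:

```python
# Parity RAID sketch: parity is the XOR of the data strips, so any
# single missing strip equals the XOR of everything that survives.
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_strips = [b"\x01\x02", b"\x10\x20", b"\xa0\x0b"]
parity = b"\x00\x00"
for strip in data_strips:
    parity = xor_bytes(parity, strip)

# "lose" strip 1, then rebuild it from the survivors plus parity
rebuilt = parity
for i, strip in enumerate(data_strips):
    if i != 1:
        rebuilt = xor_bytes(rebuilt, strip)

assert rebuilt == data_strips[1]
```

A second drive failure during this rebuild loses data under single parity, which is exactly the window dual-parity schemes close.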
RAID performance should improve as disk striping expands its role in storage systems. "On the performance front, we might see more adoption of vertical and horizontal striping," Schulz says, noting that striping would extend across RAID groups -- not just across drives within a group. Wendt foresees striping optimizations for SATA drives that might eventually place SATA drives on par with Fibre Channel drives for random read/write performance.
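Striping at any level works the same way: consecutive fixed-size chunks are placed round-robin across the members, so large I/Os spread over every member at once. A minimal sketch (chunk size and member count are illustrative; extending "members" from drives to whole RAID groups gives the cross-group striping Schulz describes):

```python
# Minimal disk-striping sketch: chunk N of the data lands on
# member N % num_members, round-robin.
CHUNK_SIZE = 4   # bytes per chunk here; real arrays use 64 KB or more

def stripe(data, num_members):
    members = [bytearray() for _ in range(num_members)]
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for n, chunk in enumerate(chunks):
        members[n % num_members].extend(chunk)
    return members

members = stripe(b"ABCDEFGHIJKLMNOP", num_members=4)
print([bytes(m) for m in members])
# -> [b'ABCD', b'EFGH', b'IJKL', b'MNOP']
```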
Finally, users may eventually see RAID products that allow LUNs to identify their physical location on a hard drive. Wendt says that such technology can help users establish a better balance between performance and capacity. "Users can place applications with higher performance requirements on the fastest part of the disk drives on the RAID group while still allowing them to utilize the higher capacities on disk drives for lower performing applications or reference data," he says.
27 Jun 2006