RAID, as you've most likely noticed, is changing in the enterprise. Fibre Channel (FC) disks still provide the
high reliability and performance that they are renowned for, but a greater selection of disk types and advances in storage hardware are changing the way IT departments deploy RAID. Rapid data growth and larger drive capacities are also stretching RAID rebuild times and constraining performance in some environments. Storage administrators must understand the changes taking place and position their organizations to make the most of RAID technology. This article examines the effect of new disk types on RAID deployments, outlines the changing implementation issues and problems you can expect, and highlights future trends in this essential storage practice.
The impact of disk technology on RAID
All storage systems can benefit from the reliability and performance that RAID offers, but the disks themselves do not dictate RAID choices. The ultimate choice of RAID implementation depends on the value of the data being protected and the performance needs of the applications using a given RAID group. While no drive technology necessitates a specific RAID level, certain drive types correlate strongly with common RAID levels.
Enterprise-class FC drives traditionally employed RAID-1 for basic mirroring, but most FC RAID groups will now use a parity-based approach, such as RAID-5, since fewer drives are required. If performance must be optimized, striping can be added to the RAID group using RAID-0 (e.g., RAID-1+0 or RAID-5+0).
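The capacity trade-off behind choosing mirroring versus parity can be illustrated with simple arithmetic. The sketch below is a simplified model (it ignores hot spares, metadata and formatting overhead, all of which reduce usable space on real arrays):

```python
# Simplified usable-capacity model for a few common RAID levels.
def usable_capacity(num_disks, disk_gb, level):
    if level == "0":            # striping only, no redundancy
        return num_disks * disk_gb
    if level == "1":            # mirroring: half the raw capacity
        return num_disks * disk_gb / 2
    if level == "5":            # one disk's worth of parity
        return (num_disks - 1) * disk_gb
    if level == "6":            # two disks' worth of parity
        return (num_disks - 2) * disk_gb
    raise ValueError(f"unknown RAID level: {level}")

# A five-drive group of 146 GB FC disks:
print(usable_capacity(5, 146, "1"))  # 365.0
print(usable_capacity(5, 146, "5"))  # 584
```

The comparison shows why parity-based RAID-5 is attractive when disk economy matters: the same five drives yield far more usable space than a mirrored group.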
As with other drive technologies, FC drives are offering larger storage capacities, and this is causing users to place fewer drives into FC RAID groups. For example, there may be up to eight 36 GB or 73 GB 15K rpm drives in a basic FC RAID group, but as FC drives have climbed to 146 GB and larger, users are employing only four or five drives in a given group. "This lower number is driven primarily by user concerns about the amount of time it takes to rebuild a failed disk drive on one of the global spares within the storage array," explains Jerome Wendt, independent storage industry analyst.
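The rebuild-time concern Wendt describes can be estimated with back-of-the-envelope arithmetic. The sustained rebuild rate used below (50 MB/s) is an illustrative assumption, not a measured figure; real rebuild rates vary with array load and controller settings:

```python
# Rough rebuild-time estimate: drive capacity divided by the
# sustained rebuild rate (assumed here to be 50 MB/s).
def rebuild_hours(capacity_gb, rebuild_mb_per_s):
    return capacity_gb * 1024 / rebuild_mb_per_s / 3600

print(round(rebuild_hours(146, 50), 1))  # 0.8 hours for a 146 GB drive
print(round(rebuild_hours(600, 50), 1))  # 3.4 hours for a 600 GB drive
```

Even under this optimistic model, rebuild time scales linearly with capacity, which is why larger drives push users toward smaller RAID groups.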
Although SATA drives offer lower performance than their FC counterparts, their low cost and huge storage capacities have enabled a myriad of disk-based storage technologies, such as continuous data protection (CDP), content addressed storage (CAS) and virtual tape libraries (VTL). This poses a unique problem for RAID because SATA drives suffer from reliability concerns, and capacities of 600 GB and larger can push rebuild times well past several hours. This has forced many Tier-2 storage users to embrace dual-parity schemes, like RAID-6. "The cost of additional disk to provide this level of protection is not substantial," Wendt says. "And the likelihood of losing two drives before the first drive rebuilds from a failure is a realistic possibility." SAS drives are not yet commonly deployed in RAID groups, though this may change as SAS technology moves into the mainstream.
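The "realistic possibility" of a second failure during a rebuild can be sketched with a simple exponential-failure model. The MTBF figure below is an assumption for illustration only, and real drives fail in more correlated ways (shared batches, shared vibration and heat) than this model admits:

```python
import math

# Back-of-the-envelope model: chance that one of the surviving drives
# in the group fails during the rebuild window, assuming independent
# exponentially distributed failures.
def p_second_failure(surviving_drives, rebuild_hours, mtbf_hours):
    combined_rate = surviving_drives / mtbf_hours  # failures per hour
    return 1 - math.exp(-combined_rate * rebuild_hours)

# Seven surviving drives, a 12-hour rebuild, an assumed 600,000-hour MTBF:
print(p_second_failure(7, 12, 600_000))
```

Doubling the rebuild window roughly doubles this probability, so the longer rebuilds of large SATA drives directly raise the odds that single-parity protection is not enough.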
The role of RAID levels
A "RAID level" defines the technique that is used to provide redundancy and performance enhancement to the drive group. Numerous RAID levels have been tested and refined over the last 20 years, though only RAID-0, RAID-1 and RAID-5 are commonly used today. Analysts note that RAID selection is largely defined by the need for performance, economy and reliability.
When storage performance is the key priority, users typically turn to RAID-0 and RAID-1. If disk economy is the main criterion, users adopt RAID-5, which uses rotating parity and one additional disk to provide data protection. Parity data is spread across the drives in a group; if a disk fails, its data can be reconstructed from the parity and data on the remaining disks. Unfortunately, parity must be calculated on the fly, and schemes like RAID-5 impose a performance penalty on writes. "You need compute power to calculate that parity or to reconstruct," Schulz says. "On a normal read it shouldn't be an overhead." RAID-5 can be combined with RAID-0 to boost performance.
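The parity scheme described above is, at its core, an XOR across the blocks of a stripe; the same XOR that produces the parity also reconstructs a lost block. A minimal sketch:

```python
from functools import reduce

# RAID-5-style parity: byte-wise XOR of the data blocks in a stripe.
def parity(blocks):
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

data = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\x55"]
p = parity(data)

# Reconstruct a lost block by XOR-ing the parity with the survivors:
lost = parity([p, data[1], data[2]])
assert lost == data[0]
```

Reconstruction has to read every surviving block in the stripe, which is why a degraded or rebuilding RAID-5 group is noticeably slower than a healthy one.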
RAID-6 is quickly making inroads into storage arrays because of its reliability. RAID-6 is a dual-parity data protection scheme that computes two independent parity blocks for each stripe and distributes them across the disk group. This protects against two simultaneous failures in the group -- a real possibility with large low-end SATA drives. "It makes enterprises much more comfortable about using SATA drives to house mission critical information that requires capacity and availability but not necessarily performance [such as CAS]," Wendt says. Unfortunately, RAID-6 sacrifices the capacity of two additional disks to parity, and the write performance penalty is roughly twice that of RAID-5. Storage-intensive tasks using 500 GB to 750 GB drives can usually tolerate that penalty because performance is typically not a high priority. RAID-6 can be combined with RAID-0 to boost performance. RAID-6 is not intended to supplant RAID-5 -- the two levels solve different problems in the enterprise.
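The write penalty mentioned above is often expressed as the number of physical I/Os a single small random write costs once the parity blocks are read and rewritten. The per-disk IOPS figure below is an assumption for illustration:

```python
# Classic small-write penalty per RAID level: physical I/Os needed to
# service one random write (read/modify/write of data plus parity).
WRITE_IOS = {"0": 1, "1": 2, "5": 4, "6": 6}

# Effective random-write throughput of a group, assuming each disk
# sustains raw_iops_per_disk I/Os per second (an assumed figure).
def effective_write_iops(raw_iops_per_disk, num_disks, level):
    return raw_iops_per_disk * num_disks / WRITE_IOS[level]

print(effective_write_iops(150, 8, "5"))  # 300.0
print(effective_write_iops(150, 8, "6"))  # 200.0
```

Under this simplified model the same eight-drive group delivers a third fewer random writes per second at RAID-6 than at RAID-5, which is why RAID-6 is aimed at capacity-oriented rather than performance-oriented tiers.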
RAID is not an all-or-nothing choice. In actual practice, RAID levels are often tailored to meet the needs of particular storage tiers in the data center. "These different tiers will typically have different RAID configurations," says Tony Asaro, senior analyst at the Enterprise Strategy Group. "Tier-1 could have RAID-10, and then Tier-2 could use RAID-5."
Go to the next page of this article for implementation issues and future directions