A crescendo of voices in the analyst and vendor communities has declared that RAID technologies are headed for the dustbin of history, alongside tape and mainframes. As with those other technologies, the truth of the matter is not so clear-cut.
So, is RAID really dead? No. RAID remains one of the most sought-after features on most storage array products, as it's the quintessential "checklist" feature governing acquisition decisions. Until buyers cease to prioritize RAID on their list of must-haves, RAID technologies will persist.
For now, improvements in RAID strategies, combined with the familiarity of RAID among storage administrators, are likely to support the continued use of RAID in disk storage, despite the increasing availability of alternatives such as erasure coding. Applied judiciously, RAID is still appropriate for most data protection applications.
RAID 1, or mirroring, was an early favorite of storage planners because it provides a mirror copy of every disk drive. However, the cost of doubling up on every disk drive to protect against the possibility of a drive failure was high, driving the adoption of RAID 4 and RAID 5 (which use block striping with dedicated or distributed parity, respectively, so that the contents of any single failed drive can be reconstructed from the remaining drives in the set). Given the vulnerability of parity methods to bit-error corruption and to second (or third) drive failures during a RAID rebuild, RAID 1 has come back into view as an attractive alternative.
However, RAID 1 mirroring has been criticized for its susceptibility to failure, especially when it's deployed in a 0+1 configuration. With 0+1, several disks are striped together into sets (RAID 0) that are then mirrored, one for one, to an identical set of striped disks: hence the expression 0 (for RAID 0) plus 1 (RAID 1 mirroring). In an eight-drive 0+1 configuration, a single drive failure takes its entire striped set offline; statistically, there is then a 4-in-7 chance that the next drive failure lands in the surviving set and destroys the array -- and any loss requires a full re-mirroring of the whole replacement stripe set.
RAID 1+0 is increasingly substituted for 0+1. With 1+0, drives are first mirrored in pairs, and the mirrored pairs are then striped together into a single volume. For the RAID set to fail, both drives of the same mirrored pair must fail concurrently; given one drive failure, the chance that the next failure hits its mirror partner is a low 1 in 7. When a single mirrored drive fails, rebuilds (re-mirroring) are relatively quick, since only one physical disk must be re-mirrored.
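The 4-in-7 and 1-in-7 figures can be checked with a short brute-force sketch. The eight-drive layout (two four-drive striped halves for 0+1, four mirrored pairs for 1+0) is an assumption chosen to match those numbers, not something the article specifies:

```python
from itertools import combinations

# Assumed layout: 8 drives total.
# RAID 0+1: drives 0-3 form one striped half, drives 4-7 the other.
# RAID 1+0: drives are mirrored in pairs (0,1), (2,3), (4,5), (6,7).
DRIVES = range(8)

def raid01_fails(failed):
    # The 0+1 array survives only while at least one striped half is intact.
    half_a_ok = all(d not in failed for d in range(4))
    half_b_ok = all(d not in failed for d in range(4, 8))
    return not (half_a_ok or half_b_ok)

def raid10_fails(failed):
    # The 1+0 array fails only if both drives of some mirrored pair are lost.
    return any({2 * p, 2 * p + 1} <= set(failed) for p in range(4))

def p_loss_on_second_failure(fails):
    # Fraction of two-drive failure combinations that destroys the array;
    # by symmetry this equals the conditional probability that, after one
    # drive has failed, the next failure kills the set.
    pairs = list(combinations(DRIVES, 2))
    return sum(fails(set(pair)) for pair in pairs) / len(pairs)

print(p_loss_on_second_failure(raid01_fails))  # 4/7 ~ 0.571
print(p_loss_on_second_failure(raid10_fails))  # 1/7 ~ 0.143
```

For 0+1, 16 of the 28 possible two-drive combinations span both halves (4 x 4), giving 16/28 = 4/7; for 1+0, only the 4 mirror pairs are fatal, giving 4/28 = 1/7.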
In the final analysis, there is still some room to maneuver in RAID technologies themselves. Plus, there are external data protection technologies involving other kinds of replication (continuous data protection, for example) that can supplement the protection afforded at the hardware level by RAID. Replicating the data on a RAID volume to another volume could be viewed as a sort of uber-RAID scheme, delivering another layer of protection against RAID set failure. It could be accomplished readily using synchronous replication from one array to another, or by doing snapshots of RAID set deltas (changed data) on a routine basis and writing the snaps onto the redundant storage target.
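The delta-snapshot idea can be sketched in a few lines. This is a toy illustration of changed-block replication only; the 4 KB block size, the in-memory "volumes," and the hash-comparison scheme are all assumptions for the example, not any vendor's implementation:

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block granularity for detecting deltas

def block_hashes(volume: bytes):
    # Fingerprint each fixed-size block so changed blocks can be spotted.
    return [hashlib.sha256(volume[i:i + BLOCK_SIZE]).digest()
            for i in range(0, len(volume), BLOCK_SIZE)]

def replicate_deltas(source: bytes, target: bytearray) -> int:
    # Copy only the blocks that differ (the deltas) onto the target,
    # returning how many blocks were shipped.
    changed = 0
    for i, (s, t) in enumerate(zip(block_hashes(source),
                                   block_hashes(bytes(target)))):
        if s != t:
            off = i * BLOCK_SIZE
            target[off:off + BLOCK_SIZE] = source[off:off + BLOCK_SIZE]
            changed += 1
    return changed

primary = bytearray(BLOCK_SIZE * 4)   # four-block "RAID volume"
replica = bytearray(primary)          # redundant storage target
primary[5000] = 0xFF                  # a write dirties block 1
print(replicate_deltas(bytes(primary), replica))  # ships 1 block
```

Run routinely, this pattern moves only the changed data to the redundant target, which is why delta snapshots are so much cheaper than re-copying the whole volume.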
Another approach is to retain RAID-based data protection at the physical storage layer, but to virtualize storage using a storage hypervisor. Virtual volumes leverage the capacity of the underlying physical infrastructure but enable replication that is independent of specific hardware configurations. This may enable replication across different physical spindles in different arrays, reducing the statistical risk of a catastrophic failure.