RAID might not be the best choice for high-capacity drives. It's time to rethink your rebuild strategies.
There's a lot of talk about shortening rebuild times for large capacity disk drives in today's storage environments. Fast rebuild technology is widely deployed nowadays, but plenty of users still don't think in terms of hardware RAID and individual drive rebuild times. And here's a new angle on the discussion: perhaps the best way to shorten rebuild times is to not have to rebuild in the first place.
Roughly 50% of failed SATA drives returned to vendors result in a diagnosis of "no trouble found" and are returned to service as replacement drives that typically function like new. That's because SATA drives were originally designed for lightly loaded desktops and laptops, rather than high-performance enterprise arrays, and they occasionally experience slowdowns in performance that result in a disk being diagnosed as non-responsive. As a result, several vendors have introduced technology to diagnose these issues and determine whether the disk is actually failing or just experiencing an intermittent slowdown. This technology is important to understand because it reduces the risk of data loss from a second drive failing during a rebuild.
Before deciding on which approach or vendor best addresses your RAID rebuild challenges, let's look at how we got here. The term RAID, or redundant array of independent (or inexpensive) disks, was introduced in the late 1980s to describe a method of protecting disk drives in an array. Despite the standardization efforts of the now defunct RAID Advisory Board, most vendors developed protection schemes that met basic RAID definitions but varied widely in their implementations. No matter the strategy, RAID rebuild times across the board get longer as disk drive size increases because there's more data to copy or rebuild from parity. In the event of a single disk drive failure in most RAID modes, data is left unprotected until the RAID rebuild is finished, and rebuilds consume significant processing power.
However, there are ways to keep data protected in the event of a single disk failure. Users can implement dual-parity RAID 6 -- which keeps data available in the event of a dual drive failure in a single RAID group -- or go as far as implementing remote mirroring technology to protect themselves not just from a drive failure but to keep data available in the event of a full site failure. But there are costs associated with each layer of protection added, and these need to be balanced against the value of the data to be protected; the overhead required to allocate capacity for data protection is, in some cases, three to four times the amount of data stored.
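The tradeoff described above is easy to quantify. The sketch below uses illustrative numbers (an 8-data/2-parity RAID 6 group and a full remote mirror are assumptions, not figures from the article) to show how layered protection drives up the raw capacity needed per unit of stored data:

```python
def usable_fraction(data_drives, parity_drives, mirror_copies=1):
    """Fraction of raw capacity left for data after parity and mirroring."""
    group_efficiency = data_drives / (data_drives + parity_drives)
    return group_efficiency / mirror_copies

# RAID 6 group with 8 data + 2 parity drives: 80% of raw capacity is usable
raid6 = usable_fraction(8, 2)

# Add a full remote mirror: raw capacity doubles again, usable fraction halves
raid6_mirrored = usable_fraction(8, 2, mirror_copies=2)

# Raw disk needed to store 10 TB of data under the mirrored scheme
raw_needed = 10 / raid6_mirrored   # 25 TB of raw capacity for 10 TB of data
```

With wider parity groups or triple mirroring the multiplier climbs further, which is where the three-to-four-times overhead figure comes from.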
With the advent of high-capacity, terabyte-size Serial ATA (SATA) disk drives, the problem is compounded. SATA drives spin at less than half the speed of Fibre Channel (FC) drives, but hold up to 1 TB (twice the capacity of FC drives). The density of the drive doesn't make up for the slower rotation speed, however; average latency for a 7,200 rpm disk drive is more than two times that of a 15,000 rpm drive. With terabyte SATA drives, rebuilds could extend for multiple days, depending on how busy the system is, and become onerous enough to have an unacceptable impact on the business. Still, there are significant cost advantages to storing data on large capacity drives: the price per MB is much less than that of high-performance FC drives and, thanks to their price advantage, SATA drives have been widely deployed in archive systems and scale-out storage architectures while higher performance FC drives have continued to hold court at the top storage tiers.
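A back-of-the-envelope estimate shows why busy systems stretch rebuilds into days. The rebuild rate and load fraction below are assumed, illustrative values, not measurements from any particular array:

```python
def rebuild_hours(capacity_gb, rebuild_mb_per_s, busy_fraction=0.0):
    """Estimate rebuild duration when foreground I/O steals a share of throughput."""
    effective_mb_per_s = rebuild_mb_per_s * (1.0 - busy_fraction)
    seconds = (capacity_gb * 1024) / effective_mb_per_s
    return seconds / 3600.0

# 1 TB drive, 50 MB/s rebuild rate, idle array: under 6 hours
idle = rebuild_hours(1000, 50)

# Same drive with 90% of throughput reserved for production I/O: over 2 days
busy = rebuild_hours(1000, 50, busy_fraction=0.9)
```

The arithmetic is simplistic, but it captures the core problem: capacity in the numerator keeps growing while the effective rebuild rate in the denominator does not.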
New data protection schemes
Storage vendors are finally beginning to understand that it's not about protecting disks but protecting information, and their data protection schemes are evolving to reflect this. There are some novel approaches in the market to solving the problems produced by large, slow drives. Some technologies reduce the overall number of rebuilds a system performs. Some have shifted to information-based protection schemes in which, rather than mirroring a disk, they mirror information (files, chunks or objects). Some even do a little of each. So how does this impact rebuild times? When you think in terms of rebuilding information rather than a single disk, you can put the power of the system architecture to work, leveraging the massive parallelism opportunity presented by multidisk architectures.
There are several technologies in the market today that reduce the overall number of drive failures, and thus the number of rebuilds required. In some instances, vendors take unresponsive drives offline to diagnose problems and return them to service if no trouble is found. This is a great approach, as it eliminates the need to perform a full rebuild. When the drive goes offline, the system journals all writes that would have gone to that drive while attempting to recover the drive. After a successful recovery, only the data in the journal is required to be rebuilt, not the entire disk.
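The journal-and-replay idea above can be sketched in a few lines. This is a hypothetical simplification (the "drive" is just a dict of block number to data), not any vendor's implementation:

```python
class JournaledDrive:
    """Sketch: journal writes while a suspect drive is offline for diagnosis."""

    def __init__(self):
        self.blocks = {}    # data on the drive itself
        self.journal = {}   # writes captured while the drive is offline
        self.online = True

    def write(self, block, data):
        if self.online:
            self.blocks[block] = data
        else:
            self.journal[block] = data  # defer instead of failing the drive

    def take_offline(self):
        self.online = False

    def recover(self):
        """Drive diagnosed healthy: replay only the journal, not the whole disk."""
        self.blocks.update(self.journal)
        replayed = len(self.journal)
        self.journal.clear()
        self.online = True
        return replayed

drive = JournaledDrive()
drive.write(0, "a")
drive.take_offline()        # suspect drive pulled for diagnosis
drive.write(1, "b")         # these writes land in the journal
drive.write(2, "c")
replayed = drive.recover()  # only 2 blocks to replay, not the entire drive
```

The payoff is that recovery cost scales with the writes that arrived during the outage rather than with drive capacity.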
Some vendors have a two-pronged approach that reduces the overall number of rebuilds required and speeds rebuilds by leveraging grid storage architectures. One approach kicks in when a drive doesn't respond immediately to an access request. The system responds by doing a mini parity rebuild of the requested data and returning the rebuilt data while taking the non-responsive drive temporarily out of service. This drive then undergoes a brief diagnosis and is returned to service, thereby eliminating the need for a full rebuild. Any data written while the drive is offline is written to other available space in the system.
The grid architecture itself also speeds rebuilds. Most grid-based architectures have capacity or storage nodes and separate processor nodes. Typically, all processor nodes can access all capacity nodes. When data is written, it's broken into a number of fragments. These fragments are then distributed across as many storage nodes as are in the system. Using a default of nine data fragments and three parity fragments (the exact number of parity fragments is user configurable), each of 12 storage nodes would get a fragment. If there are four storage nodes (the minimum configuration), each node gets three fragments. In the event of a drive failure, the data from that drive is rebuilt, just like in conventional hardware RAID. But unlike conventional RAID, data isn't rebuilt to a single drive; it's redistributed across the storage nodes, leveraging any available storage capacity. If an entire storage node fails, the data from its drives is rebuilt across the remaining storage nodes. We've seen this type of technology implemented for both parity-protected data and mirrored data. Because it's data being protected rather than disk drives, and thanks to the power of a grid architecture, rebuilds happen in a fraction of the time a conventional drive rebuild would take. It's the information that's being rebuilt, not the exact drive layout.
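The fragment placement described above can be sketched as a simple round-robin assignment. The round-robin policy is an assumption for illustration; real systems use more sophisticated placement, but the 12-fragment arithmetic (nine data plus three parity) is the article's:

```python
def distribute_fragments(num_fragments, num_nodes):
    """Assign data+parity fragments to storage nodes round-robin."""
    placement = {node: [] for node in range(num_nodes)}
    for frag in range(num_fragments):
        placement[frag % num_nodes].append(frag)
    return placement

# 9 data + 3 parity fragments across 12 storage nodes: one fragment per node
twelve_nodes = distribute_fragments(9 + 3, 12)

# The same 12 fragments on the 4-node minimum configuration: three per node
four_nodes = distribute_fragments(9 + 3, 4)
```

Because every node holds fragments for many different writes, every node has something to contribute when one drive's data must be reconstructed.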
Other vendors seek to leverage their architectures to speed rebuild time and reduce the risk of data loss if multiple drives fail. When a file is written, the data and parity are distributed across the available disk drives in the cluster. In the event of a drive failure, the data required for a rebuild is spread across multiple nodes in the cluster, so drives across the entire cluster can participate in the rebuild.
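The benefit of spreading rebuild work across the cluster is straightforward to estimate. The per-drive rate and drive counts below are illustrative assumptions, not vendor figures:

```python
def parallel_rebuild_hours(capacity_gb, per_drive_mb_per_s, drives_participating):
    """Estimate rebuild time when many drives share the reconstruction work."""
    seconds = (capacity_gb * 1024) / (per_drive_mb_per_s * drives_participating)
    return seconds / 3600.0

# 1 TB of lost data, 50 MB/s per drive, single target drive: ~5.7 hours
single_drive = parallel_rebuild_hours(1000, 50, 1)

# The same data reconstructed by 20 drives in parallel: well under half an hour
cluster_wide = parallel_rebuild_hours(1000, 50, 20)
```

In practice parity math and network transfer eat into the ideal linear speedup, but the more spindles share the work, the shorter the window of unprotected data.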
Shifting data protection strategies from a hardware-based approach to a software-based approach creates new possibilities. With a hardware-based protection scheme, the choice is often between protecting all of the data or none. Information-based protection opens the door to the possibility of more granular, policy-based information protection.
The bottom line is that different storage characteristics are required for various data types. Hardware RAID schemes continue to be a good solution for lower capacity, faster drives and won't go away any time soon. But it wouldn't be surprising to see information-based data protection schemes become more mainstream in tier 1 storage products over time, as vendors continue to simplify administration and build information-centric systems.
There are plenty of vendors offering information-based data protection schemes or rapid rebuild technology. Even in a tough economy, the number of vendors offering technology that accelerates or reduces the need for rebuilds seems to be growing. Remember that when you're evaluating technology that leverages high-capacity commodity disk drives, you should ask your vendor what they're doing to reduce your exposure to data loss during rebuilds.