Modern storage arrays offer disk types to meet any need--costly Fibre Channel (FC) disks for high-end applications requiring superior performance and availability, and lower-priced serial ATA (SATA) disks for less-critical data. The arrays also come with mixed RAID configurations. But selecting the right mix of disks and RAID levels requires understanding the impact of those decisions.
Arrays with mixed RAID support allow users to optimize application cost, performance and availability on a single array. With storage management an ever-escalating cost, the ability to select a single vendor's array to satisfy a multitude of storage scenarios is enticing. In addition to the different types of disk, the new arrays offer simplified management, ease of data migrations from one tier of disk to another, and the ability to increase capacity or improve performance on the fly.
Is RAID 5 all you need?
Although almost every storage array vendor offers at least one array that supports multiple types of RAID, vendors don't agree on whether you need more than one RAID configuration. Nexsan Technologies says RAID 5 is best in almost every data protection situation. EqualLogic Inc. disagrees: It offers RAID 10 or RAID 50, but not RAID 5. Xiotech Corp. wants to see RAID 6 come on the market.
RAID 5, however, continues to gain momentum as the default standard for FC and SATA disk (see "Is RAID 5 all you need?"). Users find it provides acceptable levels of data protection, disk utilization and performance for most applications. But there are still times when users will want to consider other RAID levels, especially with SATA disks. In cases where multiple RAID types are used, administrators need to understand how and where these RAID types get placed within the array, and the risks associated with mixing and managing multiple RAID types on a single array.
Knowing which RAID implementation to deploy depends on a range of factors:
- The number of front- and back-end controllers on the array
- The amount of cache
- The disk capacity behind each back-end controller
- The performance requirements of the application using the disk
- The speed or rpm of each disk behind each controller
Industry benchmarks and vendor documentation will provide statistics and information on array cache, I/O capabilities, the number of front-end FC and iSCSI interfaces, back-end controllers, the type of disk used and internal architecture. Once administrators gather these facts, they can determine the best RAID configuration for their environment.
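To make the decision process concrete, the factors above can be reduced to a rough rule of thumb. The following sketch is purely illustrative--the function name, inputs and thresholds are assumptions for this article, not any vendor's sizing rules:

```python
# Hypothetical sketch: turning the coarse application traits listed above
# into a starting-point RAID recommendation. The decision rules here are
# illustrative assumptions, not vendor guidance.

def suggest_raid(write_heavy: bool, capacity_critical: bool,
                 availability_critical: bool) -> str:
    """Return a starting-point RAID level from coarse application traits."""
    if availability_critical and write_heavy:
        return "RAID 10"   # mirroring sidesteps the RAID 5 write penalty
    if capacity_critical:
        return "RAID 5"    # best usable-capacity-to-protection tradeoff
    return "RAID 5"        # the common default for FC and SATA disk

print(suggest_raid(write_heavy=True, capacity_critical=False,
                   availability_critical=True))  # RAID 10
```

A real sizing exercise would also weigh cache, controller count and spindle speed, but even a crude rule like this forces the conversation about workload characteristics before disks are purchased.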
The right RAID
Storage array vendors allow RAID configurations to be set on each controller or parity group that sits in front of the disks. Using array management software provided by the storage vendor, users can log in and configure any controller on the array with any of the RAID settings the controller supports. Users may also change an application's underlying RAID configurations on the fly, assuming they have the vendor's licensed software and a spare disk group. Arrays such as EMC Corp.'s Clariion, IBM Corp.'s DS4000 and Hitachi Data Systems' (HDS) TagmaStore offer software that allows users to move the data from a disk group configured as RAID 1 to a disk group configured as RAID 5 without application downtime.
Yet selecting a specific RAID type is becoming less of an issue on high-end monolithic and modular arrays as users increasingly choose RAID 5. HDS reports that more than 85% of its arrays now get configured as RAID 5 because users find that RAID 5 provides acceptable tradeoffs between availability, capacity, data protection and performance when compared to other RAID configurations. However, not every storage array vendor implements RAID 5 the same way. Here are some examples of how they differ:
- Modular models sold by BlueArc Corp., Hewlett-Packard Co., IBM and Silicon Graphics Inc. (SGI) use RAID controllers supplied by Engenio Information Technologies, which offers two different types of disk controllers, the 5884 and 28XX models. The 5884 controller is ASIC-based and used primarily with FC disks. Engenio bases its lower-end 28XX models on Intel Corp.'s XScale chip, and it's used primarily in its SATA arrays.
- The newest RAID controllers soon to show up on IBM's DS4000 will support Emulex Corp.'s switch-on-a-chip technology. This approach provides a dedicated path between the controller and each disk drive, as opposed to a shared path between the controller and all of the disk drives behind it.
- An increasing number of arrays now support global hot spares. These are disk drives not tied to any disk group that can automatically replace a failed disk in a RAID 5 configuration.
- HDS is among a growing number of vendors implementing RAID 5+, where parity is striped across all of the drives in the RAID group. This distributes parity updates evenly and helps to reduce the write penalty associated with RAID 5.
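The parity mechanism these RAID 5 variants share can be illustrated in a few lines of Python. This is a minimal sketch of the XOR idea only--not any vendor's implementation--showing that a parity block lets any single lost block be rebuilt from the survivors:

```python
from functools import reduce

# Minimal illustration of RAID 5-style parity: the parity block is the
# byte-wise XOR of the data blocks, so any one lost block can be rebuilt
# by XOR-ing the remaining blocks together.

def parity(blocks: list) -> bytes:
    """Byte-wise XOR across equally sized blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]
p = parity(data)

# Simulate losing the second data block and rebuilding it from the rest.
rebuilt = parity([data[0], data[2], p])
assert rebuilt == b"BBBB"
```

Striping that parity block across all of the drives, rather than dedicating one drive to it, is what spreads the parity-update writes evenly.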
Users need to carefully weigh the pros and cons of using RAID 5 with anything other than FC disk drives. While both FC and SATA drives advertise similar mean time between failure (MTBF) rates of around 20,000 hours (just over two years), these aren't apples-to-apples comparisons. The MTBFs for FC drives are based on rigorous 24-hour-a-day duty cycles, whereas SATA drive MTBF rates are based on two- to four-hour-a-day duty cycles. So where SATA drives see heavier use or more performance-intensive workloads, higher disk failure rates can be expected.
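The duty-cycle distinction is easy to quantify. Taking the article's 20,000-hour MTBF figure at face value, a back-of-the-envelope calculation shows how sharply expected failures climb when a shelf of SATA drives rated for a few hours a day is instead run around the clock:

```python
# Back-of-the-envelope sketch using the duty-cycle figures quoted above.
# Expected failures per year ~= (drive-hours powered per year / MTBF);
# the 20,000-hour MTBF is the article's figure, assumed here as given.

MTBF_HOURS = 20_000

def expected_failures(drives: int, duty_hours_per_day: float) -> float:
    """Rough expected drive failures per year for a shelf of drives."""
    powered_hours = drives * duty_hours_per_day * 365
    return powered_hours / MTBF_HOURS

# A 14-drive SATA shelf at its rated 4-hour duty cycle vs. run 24x7:
print(round(expected_failures(14, 4), 2))   # ~1.02 failures/year
print(round(expected_failures(14, 24), 2))  # ~6.13 failures/year
```

Roughly a sixfold difference--which is exactly why a single-parity RAID 5 group of heavily used SATA drives deserves extra scrutiny.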
Fortunately, vendors of SATA-based arrays recognize this deficiency and provide RAID configurations that help to overcome SATA limitations. For instance, EqualLogic's PeerStorage 100E doesn't even give users the option to implement a basic RAID 5 configuration, offering only RAID 10 and 50 options. Behind each of its controllers, EqualLogic puts a total of 14 SATA disks, of which only 12 are active. The 12 active disks are split into two separate groups of six. Each group of six is then configured in a 5 + 1 configuration, so that any of the six disks in either group can fail without data loss. In the event of a disk failure, one of the two passive disks becomes a member in that parity group.
The RAID 10 and RAID 50 configurations EqualLogic offers increase the level of data protection. RAID 10 will perform better for read-intensive applications because it mirrors the data on the first group of six disks to the second group of six disks, so that one group or the other may be lost without impact. This level of data protection is expensive because only 1.5TB, or 43% of the available 3.5TB, on each controller is usable. A better option for EqualLogic arrays is to choose its RAID 50 option. This configuration stripes data across both groups of six disks, yielding a more reasonable 2.5TB of usable storage, or 71% of the 3.5TB total, and will provide adequate performance in most environments.
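The usable-capacity figures above follow directly from the drive counts. The sketch below reproduces that arithmetic; the 0.25TB per-drive size is inferred from the article's 3.5TB/14-drive numbers rather than stated by EqualLogic:

```python
# Sketch reproducing the usable-capacity arithmetic above. The 0.25TB
# per-drive size is inferred from the 3.5TB raw / 14-drive figures.

DRIVE_TB = 0.25
RAW_TB = 14 * DRIVE_TB             # 3.5TB raw per controller

raid10_usable = 6 * DRIVE_TB       # one mirrored group of six drives
raid50_usable = 10 * DRIVE_TB      # 12 active drives minus 2 parity drives

print(f"RAID 10: {raid10_usable}TB ({raid10_usable / RAW_TB:.0%} of raw)")
print(f"RAID 50: {raid50_usable}TB ({raid50_usable / RAW_TB:.0%} of raw)")
```

The output matches the article's figures: 1.5TB (43%) usable under RAID 10 versus 2.5TB (71%) under RAID 50.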
While users find the high capacities and low price points of SATA arrays appealing, they need to consider how these disks get cooled, as well as the impact and difficulty of replacing a disk in the RAID configuration when it fails. Many SATA vendors use a vertical midplane architecture in an attempt to pack as many disks into a blade as possible to maximize capacity. But this approach results in active disks running at the high end of their optimal temperature range, and when they inevitably fail, they're difficult to replace. Storage Technology Corp.'s (StorageTek) BladeStore array, for example, requires an entire SATA blade to be taken offline to replace a single disk. That means either downtime for all applications using data on that blade or, to avoid downtime, moving all of the data on the blade elsewhere on the array before replacing the faulty disk.
Managing mixed RAID arrays
Users who plan to keep their arrays for a long time or add capacity to existing arrays run into other issues. For example, the capacities of storage arrays double about every 12 to 18 months because of steadily increasing disk capacity. With 146GB FC drives already shipping and 292GB FC drives expected soon, tracking the RAID configuration of each controller and optimally placing data on these disk drives becomes more of an art than a science. But even though disk capacities are growing, disk speeds have reached a 15,000 rpm plateau; spinning platters much faster introduces heat, vibration and reliability problems that vendors have yet to overcome economically.
If a new disk is installed, optimizing the array's performance means redistributing existing data onto that new disk. This is easier said than done, because once you add new controllers with new disks to the array, all new and frequently accessed data gets written to those disks, which may have longer seek times and can become the bottleneck in your system. Compounding the problem, as disk arrays age, storage manufacturers stop offering older, potentially better performing disk media because those drives have lower capacities despite their higher speeds. So rather than one controller with a faster disk housing 200GB of infrequently accessed data, a new controller now houses higher capacity drives that either spin at a lower rpm or take longer to retrieve data because of the increased time required to place and retrieve the greater amounts of data on the drives.
Many array vendors tell users they can buy the disk capacity they need now and add more disk when they need it. The degree of difficulty in managing this scenario depends on the number and types of hosts attached to your array, and the tools available to manage the storage system. If an administrator understands the performance characteristics of existing disk drives, can manage the volumes from the host level and can unobtrusively move the data within the array, this approach works fine. But new data is often placed on the new disks without reorganizing and optimizing the data within the array, creating performance problems. And when new disks are added to the array, it's important to assign the correct RAID level: It only takes one mental lapse or incorrectly set policy for the wrong data, or an incorrect RAID configuration, to be allocated to the new disks.
Storage companies employ a number of methods to address these issues. Both EMC and EqualLogic have so-called self-healing software in their arrays that examines the workload and performance of the disks. When the application detects a hot spot on a disk, it identifies another area in the same array where it will be less likely to be overtaxed and moves the data from the problem area to the new location.
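The core of that self-healing logic can be sketched simply. The real EMC and EqualLogic software is far more sophisticated; the function, group names and threshold below are illustrative assumptions only:

```python
# Illustrative sketch of hot-spot detection: watch per-disk-group I/O
# rates and propose relocating data from the busiest group to the least
# busy one. Names and thresholds here are assumptions, not vendor logic.

def find_relocation(iops_by_group, hot_threshold):
    """Return (hot_group, target_group), or None if nothing is overtaxed."""
    hot = max(iops_by_group, key=iops_by_group.get)
    if iops_by_group[hot] < hot_threshold:
        return None                       # no hot spot detected
    cool = min(iops_by_group, key=iops_by_group.get)
    return (hot, cool)

print(find_relocation({"grp0": 4200, "grp1": 900, "grp2": 1500},
                      hot_threshold=3000))  # ('grp0', 'grp1')
```

The hard part in practice is not detection but the move itself--shifting the data without disrupting the applications reading it.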
Some vendors offer the ability to change RAID configurations on the fly. For instance, IBM's Shark allows administrators to change the RAID setting on an individual controller from RAID 5 to RAID 10 or vice versa. You may want to use this approach if you're not getting the sort of performance you expect with your existing RAID configuration.
However, you need to take some precautions when using these types of tools. For example, any data on the disks behind a controller whose RAID configuration has been changed will be destroyed. And the tool may not warn you if any of the volumes on that controller are assigned and in use, so you should verify that all of the volumes are unassigned and the data is no longer needed before changing the RAID setting. Finally, for administrators who spread their data across multiple back-end controllers, it becomes challenging to free up all of the disks behind a particular controller so that the RAID configuration can be changed, especially if multiple hosts with different operating systems are attached to that array.
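Because the tools themselves may not enforce that check, it's worth scripting it. The sketch below shows the kind of defensive wrapper an administrator might put around a RAID change; the Controller class and its fields are hypothetical stand-ins, not any vendor's API:

```python
# Defensive wrapper of the kind the caution above suggests: refuse to
# change a controller's RAID level while any of its volumes are still
# assigned, since the change destroys all data behind that controller.
# Controller and its fields are hypothetical, not a real vendor API.

from dataclasses import dataclass, field

@dataclass
class Controller:
    name: str
    raid_level: str
    assigned_volumes: list = field(default_factory=list)

def change_raid(ctrl: Controller, new_level: str) -> None:
    """Change the RAID level only if no volumes remain assigned."""
    if ctrl.assigned_volumes:
        raise RuntimeError(
            f"{ctrl.name}: volumes {ctrl.assigned_volumes} still assigned; "
            f"unassign them before changing RAID")
    ctrl.raid_level = new_level  # data behind this controller is destroyed

ctrl = Controller("ctrl-B", "RAID 5", assigned_volumes=["vol7"])
try:
    change_raid(ctrl, "RAID 10")
except RuntimeError as err:
    print(err)
```

A guard like this turns the "one mental lapse" failure mode into a hard error rather than a data loss event.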
When using mixed RAID, keep a close eye on how data gets dispersed across the back-end controllers and the size of the disks you plan to add. These two elements will have a huge impact on the performance of the array.
About the author
Jerome M. Wendt (email@example.com) is a storage analyst specializing in the field of open-systems storage and SANs. He has managed storage for small and large organizations in this capacity.