As this Special Report reveals, midrange arrays offer a wide choice of features that can fulfill almost every storage need. Because the choices are so varied in the midrange array category, some homework is required to find the right array at the right price for specific storage requirements. Applications will dictate your requirements, but a midrange array can offer the best fit whether an application requires low-cost, high-capacity disk, the ability to mix and match different kinds of disk and RAID within an array, replication between monolithic and midrange arrays or the ability to virtualize other storage arrays.
Disk drive support
A compelling force behind the deployment of midrange arrays is their ability to support disk drives of almost any type, size or number. The lowest-priced configurations are approximately $5,000 per terabyte and show up in midrange arrays, such as EqualLogic Inc.'s PS200E and Isilon Systems Inc.'s IQ 2250, which support only SATA drives. Products like 3PAR Inc.'s InServ S400, IBM's TotalStorage DS6800 and Sun Microsystems Inc.'s StorEdge 6920 support only high-performance Fibre Channel (FC) drives and are priced at around $50,000 per terabyte. Still other companies, like Hitachi Data Systems (HDS) Corp., offer arrays that support either SATA or FC drives.
The trend, however, is for midrange arrays to support both high-performance FC drives and high-capacity SATA drives behind the same front-end host interface. Vendors such as EMC Corp. and HDS -- which initially supported only FC disk drives in their Clariion and Thunder series arrays, respectively -- now include SATA support within these arrays and allow users to mix FC and SATA disk drives on the same system. 3PAR, IBM and Sun, which don't currently support SATA drives within their midrange systems, plan to add that support this year.
The ability to mix high-performance FC and high-capacity SATA drives within a midrange array gives administrators the flexibility to put the right data on the right kind of disk. For example, FC disk drives are designated for high-performance database or file-system apps, while the high-capacity, lower-cost SATA disk drives are used for apps calling for disk-to-disk backup, snapshots, virtual tape libraries, e-mail archives and fixed content.
Because different capacities and speeds exist for high-performance FC disk drives, users need to weigh several factors to come up with their best choice. IBM says that as a general rule, the smaller and faster the disk drive, the better the performance. HDS finds that 15,000 rpm FC disks will provide up to a 15% increase in performance over 10,000 rpm FC disks in random read environments.
One way to improve performance on slower disks is to distribute the data volumes across multiple disks. Chris Berthaut, the open-systems storage team manager with Hibernia National Bank in New Orleans, says his team uses this feature on the nine Xiotech Corp. Magnitude Classics they manage. "The virtualization [feature] was a big factor in the decision to buy Xiotech arrays since it allowed us to easily stripe data across disks on their arrays," says Berthaut.
A final area that may be overlooked is how the array recovers from failed disk drives and how easy it is to replace the faulty drives. Many midrange arrays, including HDS' Thunder 9520V, have a "call home" feature that reports a disk drive failure or a drive that's on the verge of failing. HDS reports that approximately 95% of the disk drives replaced on HDS' Thunder arrays "soft fail"--the disk exceeds an error threshold and is swapped out before the disk physically fails. Thunder copies the data from the poorly performing disk to a spare in the system. This approach minimizes performance degradation and speeds up recovery time since a copy operation runs faster than a RAID 5 rebuild.
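The speed difference between a proactive copy and a RAID 5 rebuild comes down to how the missing data is produced: a copy reads one (still-working) disk, while a rebuild must read every surviving disk in the group and XOR their blocks together to reconstruct the lost one. A toy sketch of that XOR reconstruction, using hypothetical 4-byte blocks rather than any vendor's actual implementation:

```python
from functools import reduce

def parity(blocks):
    """XOR the given blocks byte by byte -- how RAID 5 computes parity."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One stripe across four data disks (toy 4-byte blocks).
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40",
        b"\x0a\x0b\x0c\x0d", b"\xff\x00\xff\x00"]
p = parity(data)

# If disk 2 fails, the rebuild must read ALL surviving disks plus the
# parity block and XOR them -- far more I/O than copying one disk.
survivors = data[:2] + data[3:] + [p]
rebuilt = parity(survivors)
assert rebuilt == data[2]
```

The XOR algebra is why a rebuild touches every remaining spindle in the RAID group, while a soft-fail copy only has to stream data off the suspect drive.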
The type of RAID controller determines three major array functions: performance, disk drive management and RAID levels.
Storage Technology Corp. (StorageTek), for example, uses different controllers on various models depending on application requirements. If performance isn't a major concern, users should consider StorageTek's FlexLine FLA200 model that uses Fibre Channel arbitrated loop (FC-AL) controllers to connect to the disk drives. Conversely, if performance is the primary driver, the FlexLine FLA300 model enables a point-to-point or switched connection to back-end disk using a switched bunch of disk (SBOD) architecture.
The SBOD architecture also provides users with benefits beyond enhanced performance. The SBOD architecture in Hewlett-Packard (HP) Co.'s EVA5000 allows both of its controllers to connect to both ports on all of the disk drives for improved redundancy. It also provides fan-out and isolation between the controllers and disk drives, which makes fault isolation, repair and expansion easier. IBM's TotalStorage DS6800 takes SBOD even further, providing four data paths to every disk drive. The point-to-point connection also enables the array to identify when an individual disk drive starts to fail, something more difficult to do in an FC-AL implementation, and the failure of one RAID controller doesn't affect server and data availability.
The RAID controller also determines the RAID levels the array will support. With nearly every array on the market supporting RAID 1 and RAID 5 configurations, the importance of this feature comes into play for shops that need a specific RAID level to support a particular application. For instance, using RAID 10 in conjunction with a high-performance database should further enhance performance. Similarly, using either RAID 10 or 50 with SATA disk drives will improve performance and provide a higher level of protection in the event of a disk failure even though these configurations impose a large capacity usage penalty.
Vendors like Nexsan Technologies and Xiotech Corp., which support a large number of SATA configurations, are looking forward to the formal introduction of RAID 6 later this year. RAID 6 resembles RAID 5, but it dedicates the equivalent of two disks to parity instead of one. This new RAID configuration suits SATA disk drives particularly well because it allows two disks to fail without any data loss and incurs less of a capacity penalty than a mirrored disk configuration; it also provides a higher level of protection than RAID 5.
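The capacity trade-offs among these RAID levels are easy to quantify with a toy model that counts parity as whole-disk equivalents and assumes equal-size disks; the 12-disk shelf of 400 GB drives below is a hypothetical example, not a specific product configuration:

```python
def usable_fraction(raid, n_disks):
    """Usable capacity fraction of an n-disk group (simplified model)."""
    if raid == "RAID 5":      # one disk's worth of parity
        return (n_disks - 1) / n_disks
    if raid == "RAID 6":      # two disks' worth of parity
        return (n_disks - 2) / n_disks
    if raid == "RAID 10":     # everything mirrored
        return 0.5
    raise ValueError(f"unknown RAID level: {raid}")

# A hypothetical 12-disk shelf of 400 GB SATA drives (4,800 GB raw):
for level in ("RAID 5", "RAID 6", "RAID 10"):
    usable = round(12 * 400 * usable_fraction(level, 12))
    print(f"{level}: {usable} GB usable")
```

On this 12-disk example, RAID 6 gives up one more drive than RAID 5 (4,000 GB vs. 4,400 GB usable) but still keeps far more capacity than mirroring (2,400 GB), which is the point the vendors are making about pairing RAID 6 with cheap SATA disks.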
Cache and ports
There's a significant variance in the amount of cache in midrange arrays vs. their monolithic counterparts. While cache support varies from no cache on Xiotech Magnitude 3D systems to 80 GB on a fully configured 3PAR InServ S800 Storage Server, the average cache amount on midrange arrays is 8 GB vs. 64 GB or greater on monolithic arrays.
Midrange arrays need less cache for two reasons. First, the I/O of apps running on Unix and Windows OSes tends to be more random than sequential, which generates more queries to disk. As a result, installing more cache in a midrange system yields only a marginal performance increase because most queries still need to go directly to disk.
The second reason for the reduction in cache is that the I/O block sizes generated by Unix and Windows applications tend to be either 4 KB or 8 KB. Unlike some monolithic arrays that carve out their cache sizes in 32 KB blocks, midrange arrays break their cache into either 4 KB or 8 KB blocks. This allows the smaller cache sizes on midrange arrays to act as efficiently as the larger cache sizes on the monolithic arrays, because all of the cache in each block of the midrange array is used.
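The arithmetic behind that claim can be sketched with a simplified model that assumes every cached I/O occupies exactly one cache block, so any block larger than the I/O wastes the remainder (real cache managers are more sophisticated, and the 8 GB/64 GB figures are the averages cited above, not a specific product comparison):

```python
def effective_cache_gb(cache_gb, cache_block_kb, io_kb):
    """GB of cache holding useful data when each cached I/O occupies
    one whole cache block but only fills io_kb of it (toy model)."""
    return cache_gb * min(io_kb / cache_block_kb, 1.0)

# An 8 GB midrange cache carved into 4 KB blocks vs. a 64 GB monolithic
# cache carved into 32 KB blocks, both servicing 4 KB I/Os:
midrange = effective_cache_gb(8, 4, 4)     # every byte of cache is useful
monolith = effective_cache_gb(64, 32, 4)   # only 4 KB of each 32 KB block used
```

Under these assumptions both caches end up holding the same 8 GB of useful 4 KB blocks, which is why the smaller midrange cache can perform comparably for typical Unix and Windows workloads.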
The number of front-end FC connections supported by midrange arrays ranges from one to eight, although the majority of array vendors say four ports are sufficient for most applications. Assuming a 2 Gbps (gigabits per second) FC connection, throughput only becomes an issue for the most performance-intensive apps or when a large number of servers (more than 10) access the same port on the array.
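Those rule-of-thumb numbers follow from simple arithmetic; the sketch below assumes a protocol-efficiency factor of roughly 0.8 (an illustrative assumption, since real usable throughput varies with workload and protocol overhead):

```python
def usable_mb_per_sec(gbps, efficiency=0.8):
    """Rough usable throughput of an FC link: line rate in gigabits/s,
    converted to megabytes/s, times an assumed efficiency factor."""
    return gbps * 1000 / 8 * efficiency

per_port = usable_mb_per_sec(2)   # ~200 MB/s usable on a 2 Gbps port
per_server = per_port / 10        # ~20 MB/s each with 10 servers sharing
```

At roughly 20 MB/s per server when 10 servers share one 2 Gbps port, the link is fine for ordinary workloads but can become the bottleneck for throughput-hungry apps, which is exactly the threshold the vendors describe.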
While having the option to mix and match disk types on the same array sounds appealing, storage admins need to be aware of some of the downsides of this approach. For instance, a batch job that archives old e-mails from FC to SATA disks may start at the same time that a highly visible production OLTP database needs to execute reads and writes to disk. With the data potentially spread across multiple disks on different controllers, and the applications sharing the storage processor and cache, the production OLTP application can slow down as both jobs contend for the same resources. It's also important to keep an eye on which servers are using which array ports, so backup jobs running at the same time don't overwhelm the same port with too much traffic.
Sun Microsystems Inc.'s StorEdge 6920 and other midrange arrays address these issues through logical partitioning (LPAR). LPAR lets storage admins carve up the storage array's memory and processing power and then assign it to specific servers. This way, even if an e-mail archiving batch job kicks off in the middle of the day, it can use only the memory and processing power allocated to it.
Hibernia National Bank minimizes contention issues by deploying different arrays. The bank's Windows/Novell group uses only Xiotech arrays for its file and print services, while the Unix group uses an IBM FAStT 900 (now the DS4500) that it finds is better suited for its applications' performance requirements. This approach also helped to isolate technical problems and alleviate political problems.
Sophisticated volume management software on midrange arrays is approaching the functionality found with monolithic arrays. In addition, every midrange array comes with software that lets administrators monitor, analyze, manage or tune the performance of the array with varying levels of granularity. For example, the base module of StorageTek's SANtricity software suite lets users update controller firmware non-disruptively, migrate RAID levels dynamically, add and configure new drive modules, manage a system with mixed FC and SATA disks, and monitor and tune performance.
It's important to determine how many arrays a particular vendor's software will manage. Not having to learn how to use different management programs for all the different arrays in the data center saves considerable time. HDS, for example, extends the same level of software support it offers on its arrays to other vendors' midrange arrays by licensing and rebranding AppIQ Inc.'s StorageAuthority Suite as its HiCommand Storage Services Manager.
For any midrange array that will present virtual volumes or logical unit numbers (LUNs) to multiple servers on the same front-end FC port, volume management software is a must. While every array vendor offers this functionality in some capacity, there are differing degrees of management flexibility. The following volume management features should be considered essential:
The ability to group two or more volumes and present them as one large volume makes the most sense for offloading volume management from the server to the array. Offloading the volume management to the array lets storage admins working with heterogeneous server OS environments learn only one volume management interface. It can also eliminate the need to buy third-party, server-level volume management software. Nearly every midrange array can accomplish the offloading of this task, but each array uses different methods; users need to be cautious about changing existing volume configurations.
For example, EMC's Navisphere Management Suite allows a user to create volume groups, or what EMC calls metaLUNs, on its Clariion array. These metaLUNs can be created in either a striped or concatenated format from existing LUNs. Each option presents benefits and drawbacks. The striped feature provides better performance because data is striped across all of the LUNs in the metaLUN, although all of the LUNs in a striped metaLUN must be of the same size, RAID level and disk type. A concatenated LUN must also be composed of disks of the same type (FC or SATA) and RAID level, but concatenation allows individual LUNs of different sizes to be joined.
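The difference between the two layouts can be sketched as an address-mapping rule. This is a simplified model, not EMC's actual on-array implementation; the stripe size and LUN sizes below are hypothetical parameters:

```python
def striped_lookup(lba, lun_count, stripe_blocks):
    """Map a metaLUN logical block to (lun, offset) when data is striped
    round-robin across equal-size member LUNs (simplified model)."""
    stripe = lba // stripe_blocks
    lun = stripe % lun_count
    offset = (stripe // lun_count) * stripe_blocks + lba % stripe_blocks
    return lun, offset

def concatenated_lookup(lba, lun_sizes):
    """Map a metaLUN logical block to (lun, offset) when member LUNs
    (possibly of different sizes) are appended end to end."""
    for lun, size in enumerate(lun_sizes):
        if lba < size:
            return lun, lba
        lba -= size
    raise IndexError("LBA past end of metaLUN")

# Striping spreads neighboring stripes across LUNs; concatenation fills
# each LUN completely before moving to the next.
assert striped_lookup(256, lun_count=4, stripe_blocks=64) == (0, 64)
assert concatenated_lookup(150, lun_sizes=[100, 200]) == (1, 50)
```

The mapping makes the trade-off concrete: striping requires identical member LUNs so the round-robin math works, and that is also why sequential I/O gets spread across all spindles, while concatenation tolerates mismatched LUN sizes but keeps each region of data on a single member LUN.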
In addition, the maximum size of a metaLUN is restricted by the size of the individual LUNs and the type of Clariion array. For example, metaLUNs on the CX600 can comprise up to 16 LUNs, while only eight LUNs can be used for metaLUNs on the CX400 and CX200 models.
Shops that opt to offload volume management from the server to the array will also need the ability to extend or grow these volume groups as they fill to capacity. Midrange arrays from 3PAR, HP, EMC and other vendors provide dynamic volume group growth, but each handles it differently. 3PAR allows administrators to grow a volume by increasing it to the precise size desired. HP's EVA can start a volume at any size, ranging from 1 GB to 2 TB, and then grow the volume in 1 GB increments.
EMC permits the dynamic growth of metaLUNs, but the process differs depending on how the metaLUN was created. If a LUN is added to a metaLUN that was created in a striped manner, the Clariion re-stripes the existing data across all of the LUNs now in the metaLUN. When a LUN is added to a concatenated metaLUN, it's appended to the end of the existing string of LUNs, with new data placed on the newly added LUN; the existing data isn't automatically redistributed across the new configuration.
The final step in growing virtual volumes is to configure the host server OS to discover the new size of the virtual volume. Some OSes, such as Windows Server 2003, can do so dynamically, but users should exercise extreme caution by testing this functionality first and assuming any volume expansion will necessitate a reboot or, minimally, a rescan of the expanded volume to discover the additional capacity. Administrators should also check with the array vendor to see how their OS handles the dynamic growth of volumes. Some array vendors report that data loss can occur if an OS doesn't recognize dynamic volume expansions.
Snaps and mirrors
Driven by SATA drives, shrinking backup windows and the need to create in-house disaster recovery procedures, array-based snapshot and mirroring are becoming common. Hibernia's Berthaut uses the synchronous mirroring capabilities of Xiotech's Magnitude Geo-Replication Services software between sites in New Orleans and Shreveport, La., with a high degree of success.
Berthaut began using the synchronous mirroring feature as a stopgap measure because he was unsure how successful this approach would be due to the 700-mile roundtrip distance between the two sites.
"The Windows and Novell servers are extremely tolerant of the latency, but I'm still looking for a more acceptable asynchronous solution," says Berthaut. "I am currently evaluating Xiotech's TimeScale rapid restore appliance, an asynchronous mirroring product, and I plan to use it to replace the current synchronous mirroring process."

As midrange arrays take on more monolithic attributes, users should look to deploy midrange arrays for more mission-critical storage applications. They offer low-cost disk, high levels of performance and availability, easy-to-use software and effective replication technologies.
About the author: Jerome M. Wendt (email@example.com) is a storage analyst specializing in the fields of open-systems storage and SANs. He has managed storage for small- and large-sized organizations in this capacity.
Alex Barrett, Rich Castagna, Jo Maitland and Alan Radding also contributed to this Special Report.