Feature

Spotlight on midrange arrays


Monolithic vs. midrange: What's really different?
With midrange arrays rapidly closing the gaps that exist between them and monolithic arrays, certain key differences remain:
  • Mainframe support. Midrange arrays usually don't support mainframe ESCON or FICON connectivity, although this is changing: EMC and IBM now offer FICON connectivity on their upper-end midrange arrays.
  • Skill sets. Administrators generally need to be more technically proficient to manage a monolithic array.
  • Replication services. Monolithic arrays support highly specialized replication features, such as consistency groups, which ensure that a consistent point-in-time copy of application data exists before any further writes are applied to the source volumes (see the sketch after this list).
  • Availability and reliability. Monolithic arrays provide higher availability and reliability for the most demanding applications.
  • Known issues. Microcode bugs appear even in the best arrays, especially those used in heterogeneous environments. Bugs in monolithic arrays tend to be better documented, easier to identify and faster to correct than those in midrange code.
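
Conceptually, a consistency group works like the sketch below: writes to every member volume are held while point-in-time copies are taken, so the copies together capture a single moment across the whole application. The Volume class and its methods are hypothetical illustrations, not any vendor's replication API.

```
class Volume:
    """Hypothetical source volume; not any vendor's actual API."""
    def __init__(self, name):
        self.name = name
        self.writes_held = False

    def hold_writes(self):
        # Queue incoming writes instead of applying them.
        self.writes_held = True

    def snapshot(self):
        # Take a point-in-time copy while writes are held.
        return f"snap-of-{self.name}"

    def release_writes(self):
        # Apply queued writes and resume normal I/O.
        self.writes_held = False


def consistent_copy(volumes):
    """Snapshot a group of volumes as one consistent point in time."""
    for v in volumes:                        # 1. quiesce every member first
        v.hold_writes()
    snaps = [v.snapshot() for v in volumes]  # 2. copy while frozen
    for v in volumes:                        # 3. resume normal I/O
        v.release_writes()
    return snaps


print(consistent_copy([Volume("db-data"), Volume("db-log")]))
# ['snap-of-db-data', 'snap-of-db-log']
```
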
RAID controllers
The type of RAID controller will determine three major array functions:

  • Performance
  • Disk drive management
  • RAID levels

Storage Technology Corp. (StorageTek), for example, uses different controllers on various models depending on application requirements. If performance isn't a major concern, users should consider StorageTek's FlexLine FLA200 model that uses Fibre Channel arbitrated loop (FC-AL) controllers to connect to the disk drives. Conversely, if performance is the primary driver, the FlexLine FLA300 model enables a point-to-point or switched connection to back-end disk using a switched bunch of disk (SBOD) architecture.

The SBOD architecture also provides benefits beyond enhanced performance. In Hewlett-Packard (HP) Co.'s EVA5000, SBOD allows both controllers to connect to both ports on all of the disk drives for improved redundancy. It also provides fan-out and isolation between the controllers and disk drives, which makes fault isolation, repair and expansion easier. IBM's TotalStorage DS6800 takes SBOD even further, providing four data paths to every disk drive. The point-to-point connection also lets the array identify when an individual disk drive starts to fail, something that's more difficult to do in an FC-AL implementation, and it means the failure of one RAID controller doesn't affect server or data availability.

The RAID controller also determines the RAID levels the array will support. With nearly every array on the market supporting RAID 1 and RAID 5 configurations, the importance of this feature comes into play for shops that need a specific RAID level to support a particular application. For instance, using RAID 10 in conjunction with a high-performance database should further enhance performance. Similarly, using either RAID 10 or 50 with SATA disk drives will improve performance and provide a higher level of protection in the event of a disk failure even though these configurations impose a large capacity usage penalty.

Vendors like Nexsan Technologies and Xiotech Corp., which support a large number of SATA configurations, are looking forward to the formal introduction of RAID 6 later this year. RAID 6 resembles RAID 5, but it uses two disks for parity. This new RAID configuration suits SATA disk drives particularly well because it allows two disks to fail without any data loss and incurs less of a capacity penalty than a mirrored disk configuration; it also provides a higher level of protection than RAID 5.
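
The capacity trade-offs behind these RAID choices are easy to put in numbers. Here's a back-of-envelope sketch, assuming a hypothetical shelf of twelve 400GB SATA drives (the drive count and size are illustrative, not from the article):

```
DRIVES, SIZE_GB = 12, 400   # illustrative shelf of SATA drives

layouts = {
    # level: (drives holding data, failures survived)
    "RAID 5":  (DRIVES - 1,  "any single drive"),
    "RAID 6":  (DRIVES - 2,  "any two drives"),
    "RAID 10": (DRIVES // 2, "one drive per mirrored pair"),
}

for level, (data_drives, tolerance) in layouts.items():
    usable = data_drives * SIZE_GB
    efficiency = 100 * data_drives / DRIVES
    print(f"{level:8s} {usable:5d}GB usable ({efficiency:.0f}%), survives {tolerance}")

# RAID 5    4400GB usable (92%), survives any single drive
# RAID 6    4000GB usable (83%), survives any two drives
# RAID 10   2400GB usable (50%), survives one drive per mirrored pair
```

RAID 6 gives up one more drive than RAID 5 but tolerates any two failures, while still keeping far more usable capacity than a mirrored layout, which is exactly why it pairs well with large, inexpensive SATA drives.
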

Cache and ports
There's a significant variance in the amount of cache in midrange arrays vs. their monolithic counterparts. While cache support varies from no cache on Xiotech Magnitude 3D systems to 80GB on a fully configured 3PAR InServ S800 Storage Server, the average cache amount on midrange arrays is 8GB vs. 64GB or greater on monolithic arrays.

Midrange arrays need less cache for two reasons. First, the I/O of apps running on Unix and Windows OSes tends to be more random than sequential, which sends more requests to disk. As a result, installing more cache in a midrange system yields only a marginal performance increase because most requests still have to go directly to disk.
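
A rough model shows why. If the average I/O time is a blend of cache and disk access times weighted by the hit ratio, a random workload's low hit ratio keeps the disk term dominant no matter how much cache is added. The latencies and hit ratios below are illustrative assumptions, not measured figures:

```
CACHE_MS, DISK_MS = 0.1, 8.0   # assumed cache and disk access times

def avg_service_ms(hit_ratio):
    """Average I/O time given the fraction of requests served from cache."""
    return hit_ratio * CACHE_MS + (1 - hit_ratio) * DISK_MS

# A random workload keeps the hit ratio low; even doubling the cache
# might move it only a few points, so the average barely improves.
for hits in (0.20, 0.30, 0.90):
    print(f"hit ratio {hits:.0%}: average I/O time {avg_service_ms(hits):.2f}ms")

# hit ratio 20%: average I/O time 6.42ms
# hit ratio 30%: average I/O time 5.63ms
# hit ratio 90%: average I/O time 0.89ms
```
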

The second reason is that the I/O block sizes generated by Unix and Windows applications tend to be either 4KB or 8KB. Unlike some monolithic arrays that carve their cache into 32KB blocks, midrange arrays break their cache into 4KB or 8KB blocks. This lets the smaller caches on midrange arrays work as efficiently as the larger caches on monolithic arrays because each cache block is fully used by the I/O it holds.
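
To put the efficiency argument in numbers, assume the worst case for random I/O: each cached 4KB block occupies a whole cache granule, so the number of distinct blocks a cache can hold is simply its size divided by its granule size. The cache sizes are the averages cited above; the no-sharing assumption is ours:

```
KB, GB = 1024, 1024**3

def blocks_held(cache_bytes, granule_bytes):
    """Distinct random 4KB I/Os the cache holds if each occupies one granule."""
    return cache_bytes // granule_bytes

midrange   = blocks_held(8 * GB, 4 * KB)    # 8GB cache, 4KB granules
monolithic = blocks_held(64 * GB, 32 * KB)  # 64GB cache, 32KB granules

print(f"8GB cache, 4KB granules:   {midrange:,} blocks")
print(f"64GB cache, 32KB granules: {monolithic:,} blocks")
# Both hold 2,097,152 blocks: for this worst-case random 4KB workload,
# the smaller cache is every bit as effective as the larger one.
```
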

The number of front-end FC connections supported by midrange arrays ranges from one to eight, although the majority of array vendors say four ports are sufficient for most applications. Assuming a 2Gb/sec FC connection, throughput only becomes an issue for the most performance-intensive apps or when a large number of servers (more than 10) access the same port on the array.
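
The arithmetic behind that rule of thumb is straightforward. A 2Gb/sec FC port delivers roughly 200MB/sec of usable throughput after encoding overhead; dividing that evenly among the servers sharing the port (an idealized assumption) shows how quickly per-server bandwidth shrinks:

```
PORT_MBPS = 200   # ~usable data rate of one 2Gb/sec FC port after encoding

for servers in (4, 10, 20):
    print(f"{servers:2d} servers on one port: ~{PORT_MBPS / servers:.0f}MB/sec each")

#  4 servers on one port: ~50MB/sec each
# 10 servers on one port: ~20MB/sec each
# 20 servers on one port: ~10MB/sec each
```
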

While having the option to mix and match disk types in the same array sounds appealing, storage admins need to be aware of some downsides. For instance, a batch job that archives old e-mails from FC to SATA disks may start at the same time that a highly visible production OLTP database needs to execute reads and writes. With the data potentially spread across multiple disks on different controllers, and with the applications sharing the storage processor and cache, the production OLTP application can slow down as both jobs contend for the same resources. It's also important to keep an eye on which servers are using which array ports, so backup jobs running at the same time don't overwhelm a single port with too much traffic.
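
One low-tech way to keep that eye on port usage is to tally servers per port and flag the crowded ones. The mapping and threshold below are hypothetical; real data would come from the array's management tool or the SAN's zoning records:

```
from collections import Counter

# Hypothetical server-to-port mapping for illustration only.
server_to_port = {
    "mail01": "port-1", "mail02": "port-1", "backup01": "port-1",
    "oltp-db": "port-2",
    "web01": "port-3", "web02": "port-3",
}

MAX_SERVERS_PER_PORT = 2   # illustrative threshold; tune per workload

for port, count in sorted(Counter(server_to_port.values()).items()):
    flag = "  <-- possible contention" if count > MAX_SERVERS_PER_PORT else ""
    print(f"{port}: {count} server(s){flag}")
```
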

Sun Microsystems Inc.'s StorEdge 6920 and other midrange arrays address these issues through logical partitioning (LPAR). LPAR lets storage admins carve up the storage array's memory and processing power and then assign it to specific servers. This way, even if an e-mail archiving batch job kicks off in the middle of the day, it can use only the memory and processing power allocated to it.
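
A minimal sketch of the idea, with hypothetical names and numbers rather than Sun's actual LPAR interface: the array's cache and processing power are carved into per-workload allocations, and an over-committed configuration is rejected up front.

```
class ArrayPartition:
    """One logical slice of the array's cache and processing power."""
    def __init__(self, name, cache_gb, cpu_share):
        self.name = name
        self.cache_gb = cache_gb      # cache reserved for this workload
        self.cpu_share = cpu_share    # fraction of processing power

def carve(total_cache_gb, partitions):
    """Check that the allocations fit within the array before applying."""
    used = sum(p.cache_gb for p in partitions)
    if used > total_cache_gb:
        raise ValueError(f"over-committed: {used}GB requested, "
                         f"{total_cache_gb}GB available")
    return {p.name: p for p in partitions}

partitions = carve(8, [
    ArrayPartition("oltp-db",       cache_gb=5, cpu_share=0.6),  # production
    ArrayPartition("email-archive", cache_gb=1, cpu_share=0.1),  # batch job
    ArrayPartition("file-print",    cache_gb=2, cpu_share=0.3),
])
for name, p in partitions.items():
    print(f"{name}: {p.cache_gb}GB cache, {p.cpu_share:.0%} of processing power")
```
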

Hibernia National Bank minimizes contention issues by deploying different arrays. The bank's Windows/Novell group uses only Xiotech arrays for its file and print services, while the Unix group uses an IBM FAStT 900 (now the DS4500), which it finds better suited to its applications' performance requirements. This approach has also helped isolate technical problems and defuse political ones.

This was first published in March 2005
