DS6000 - The specs


They might not be called sharks, but IBM Corp.'s two new arrays will make the storage waters a bit more perilous for its competitors. Big Blue has eschewed the shark moniker for its new TotalStorage DS6000 and DS8000 arrays, but make no mistake: their underlying architecture and scalability are definitely predatory.

The two new systems, while different in many respects, have enough software in common to provide a powerful one-two tiered storage punch. In developing successors to its ESS storage line, IBM borrowed heavily from its experience building high-performance servers. Both new storage boxes are powered by proven server processors--the PowerPC 750GX for the midrange TotalStorage DS6000, and the Power5 processor in the enterprise-class TotalStorage DS8000. Key benefits include:


  • Shared software and compatibility with current Sharks to facilitate data movement among tiers
  • A modular approach to scalability
  • The DS6000's small footprint
  • Processor-based partitioning in the DS8000

A cursory look at the spec sheets for these systems won't blow you away. Storage capacities, connectivity and other tale-of-the-tape numbers are healthy, but can't compare to the stratospheric stats of Hitachi Data Systems' (HDS) new TagmaStore boxes. But IBM's new systems are the first leg of a product roadmap that leverages new chips, cache algorithms and advanced server partitioning to deliver enhanced scalability for capacity, performance and manageability. "The new box is going to take them forward because they're using the new technology chips," says Joe Furmanski, technical project director at the University of Pittsburgh Medical Center (UPMC), "and that's going to make an order of magnitude difference."

IBM touts the DS6000 as offering enterprise-level performance at a midrange price. The modular design allows the array to scale up to 224 disks and a maximum capacity of 67.2TB. It has eight host and eight storage ports, and can attach to open systems and mainframe hosts. The ability to connect to mainframes is unusual for a system positioned as a midrange product. The new enterprise entry, the DS8000, can house up to 192TB of storage. Initially, two models will be offered: the DS8100 featuring a dual two-way processor configuration and a maximum capacity of 115TB, and the DS8300 with a dual four-way configuration and support for up to 192TB of storage.
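
A quick back-of-the-envelope check on those DS6000 numbers shows what they imply about drive size (the per-drive figure below is derived from the published maximums, not quoted from IBM):

    # Back-of-the-envelope check of the DS6000 maximums quoted above.
    # The per-drive size is inferred from 67.2TB / 224 disks, not an
    # IBM-stated drive configuration.
    max_disks = 224
    max_capacity_tb = 67.2
    per_drive_gb = max_capacity_tb / max_disks * 1000   # TB -> GB
    print(f"{per_drive_gb:.0f}GB per drive")             # -> 300GB per drive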

Leveraging server technology
The DS6000 and DS8000 are both built around IBM server technology, and while the processors and controllers are different, both systems are highly modular and are easy to scale upward. They will be available in December, with full production in the first quarter of 2005.

IBM also implemented its LightPath Diagnostics system on both arrays. LightPath appeared several years ago on IBM's xSeries servers; it's an onboard diagnostic system with a series of LED indicators that are used to monitor performance and predict component failures. This marks the first time IBM has used LightPath in a storage system. The DS8000 takes diagnostics a step further by exploiting the Power5 chip's self-healing capabilities that can make adjustments to avoid operational failures.
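
LightPath itself is hardware and microcode, but the predictive idea translates into a few lines of monitoring logic. The sketch below is purely illustrative -- the component names, thresholds and "LED" states are hypothetical, not IBM's actual LightPath interfaces:

    from dataclasses import dataclass

    @dataclass
    class ComponentHealth:
        # Hypothetical telemetry -- not LightPath's real sensor interfaces.
        name: str
        error_count: int       # recoverable errors logged so far
        temperature_c: float   # current operating temperature

    def led_status(comp, warn_errors=5, fail_errors=20, warn_temp_c=70.0):
        """Map raw telemetry to an LED-style indicator (thresholds are made up).

        The point of predictive diagnostics is the middle state: escalate to
        "amber" while the part still works, so it can be swapped before it
        fails outright and takes the system down.
        """
        if comp.error_count >= fail_errors:
            return "red"      # failed or imminent failure: service now
        if comp.error_count >= warn_errors or comp.temperature_c >= warn_temp_c:
            return "amber"    # degrading: schedule a proactive replacement
        return "green"        # healthy

    # A fan that keeps logging correctable errors lights amber before it dies.
    print(led_status(ComponentHealth("fan-2", error_count=7, temperature_c=55.0)))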

Management of the two arrays is consistent because of the high degree of software commonality. The code base that runs the systems is nearly identical, with the DS6000 using about 97% of the code that powers the DS8000, providing easy interoperability between the two new models. Interoperability also extends backward to the older ESS arrays that the new DS units will coexist with and ultimately replace. "About 75% of the code that runs on the DS8000 is the same as that on the ESS800 today," says Mike Hartung, an IBM fellow and an architect of the DS systems.

By maintaining considerable code compatibility with each other and with ESS boxes, the DS models can use IBM data-moving tools such as XRC, FlashCopy, Global Copy and Metro/Global Mirror. This allows data to be copied among any mix of DS6000s, DS8000s and ESS models.

Nancy Hurley, senior analyst at the Enterprise Strategy Group (ESG), says this "very cohesive software family allows you to smoothly integrate between the two, and to use them in a tiered storage environment."

A new caching algorithm developed at IBM's Almaden Research Center is used in both boxes. Adaptive Replacement Cache (ARC) builds on the Least Recently Used (LRU) cache algorithm prevalent in most storage systems and used for years in server architectures. ARC effectively melds LRU with another caching technique, Least Frequently Used (LFU), to dynamically balance the "recently" and "frequently" criteria to improve cache hit ratios. Hartung says the ARC algorithm is so effective that under some test workloads it made the cache appear to be twice its actual size. ARC also does a better job of cache management, especially in segregating random and sequential reads.

"The ARC cache management notices a sequential process and keeps it from flooding all the memory," says Bob Venable, manager of enterprise systems at BlueCross BlueShield (BCBS) of Tennessee in Chattanooga.

Another shared trait is host support. Both systems can attach to mainframes or open-systems servers. Mainframe support is to be expected in the high-end DS8000, but it's a bit unusual in an array positioned as a midrange box like the DS6000, although EMC's DMX800 offers it, too.

This was first published in November 2004
