HDS reinvents high-end arrays

Storage pools
The virtualization part of the TagmaStore story is just as compelling, and in the future it may prove even more important than the spec sheets for the USP boxes. Any of the three models can virtualize up to 32PB of external storage. While 32PB may seem like a wildly outlandish number, HDS insists it's within the realm of reality. Still, 32PB is a lot of storage to virtualize, and with support for about 16,000 addresses, Mikkelsen says, "You'll run out of addressing capability before you run up against the 32PB limitation." IBM's SAN Volume Controller (SVC) offers similar virtualization features and currently supports storage from other vendors, something HDS says the TagmaStore boxes will do in subsequent releases. SVC running in a Cisco MDS 9000 series director has an upper limit of 2PB of storage that it can pool, but in a recent webcast, a Cisco spokesman said that 2PB is more a theoretical than a practical limit.
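To put Mikkelsen's point in concrete terms, a quick back-of-the-envelope calculation shows how large the average LUN would have to be before capacity, rather than addressing, becomes the constraint (taking the round figure of 16,000 addresses at face value):

    # How big would the average LUN need to be to exhaust 32PB
    # across roughly 16,000 addresses?
    PB = 2**50                     # bytes per petabyte (binary)
    TB = 2**40                     # bytes per terabyte (binary)

    pool_limit = 32 * PB           # USP's stated virtualization ceiling
    address_limit = 16_000         # approximate LUN address count

    avg_lun = pool_limit / address_limit
    print(f"{avg_lun / TB:.1f}TB per LUN")   # roughly 2TB, enormous by 2004 standards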

In HDS' virtualization scheme, all back-end (virtualized) storage would be represented as USP LUNs, and all connected storage would be managed from a single HiCommand console.
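Conceptually, that amounts to a mapping table: each externally attached LUN is surfaced to hosts under a USP LUN address. A minimal Python sketch of the idea (the class and field names are hypothetical, not HDS' actual data structures):

    from dataclasses import dataclass

    @dataclass
    class ExternalLun:
        array_id: str      # e.g., serial number of the back-end array
        lun_id: int        # LUN number on that array
        size_gb: int

    @dataclass
    class VirtualLun:
        usp_lun_id: int    # the LUN number hosts actually see
        backing: ExternalLun

    # External storage appears to hosts as ordinary USP LUNs;
    # hosts never address the back-end array directly.
    pool = [
        VirtualLun(usp_lun_id=0x0100,
                   backing=ExternalLun("9980V-12345", lun_id=42, size_gb=500)),
    ]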

While the USP systems can pick up configuration and volume information from external storage, HDS says the process will be more of a volume migration between the external storage and USP platforms.

For more efficient administration of large installations, up to 32 Virtual Private Storage Machines can be created, carving the pooled storage into more manageable chunks so that multiple system administrators can each oversee a segment of the pool. Overall administration can still be centralized, but each administrator of a virtual machine would be able to manage physical disk space, cache and front-end ports. This capability may be especially useful when consolidating storage across divisions or from acquired companies.
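In rough terms, each Virtual Private Storage Machine is a named slice of the physical resources. A hypothetical sketch, assuming a simple resource-assignment model (none of these names come from HDS' actual interface):

    from dataclasses import dataclass, field

    @dataclass
    class VirtualStorageMachine:
        name: str
        admin: str
        disk_tb: int                    # physical disk capacity assigned
        cache_gb: int                   # cache slice reserved for this partition
        front_end_ports: list = field(default_factory=list)

    MAX_MACHINES = 32                   # per-USP limit cited by HDS

    machines = [
        VirtualStorageMachine("finance", "alice", disk_tb=50,
                              cache_gb=16, front_end_ports=[0, 1]),
        VirtualStorageMachine("acquired-co", "bob", disk_tb=120,
                              cache_gb=32, front_end_ports=[2, 3]),
    ]
    assert len(machines) <= MAX_MACHINES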

"Having external virtualization provided by a storage system allows for another tier of storage, and a system that can act as storage and a virtualization platform allows for even greater consolidation," says ESG's Asaro.

Because all of the back-end storage connects to the USP system, the latest HDS software tools would be available to manage that external storage, effectively upgrading the management capabilities of the attached boxes. All writes, whether directed at the USP storage or external storage, are cached in the USP array. Because the USP system will likely be the best performer among the pooled systems in this scenario, HDS says that write-access performance to the external storage should actually improve, especially for SATA/ATA disks, which could approach the performance of Fibre Channel (FC) drives. "If you're using [SATA drives] for lifecycle management, probably 98% of the I/O that goes to the SATA drives are writes, and writes go to the [USP] cache," says Mikkelsen. If the back-end box already uses write caching, it will work in conjunction with the USP system.
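The performance claim rests on ordinary write-back caching: the host's write is acknowledged as soon as it lands in USP cache, and the destage to the slower back-end disk happens off the host's I/O path. A toy illustration of the principle, not the USP's actual cache logic:

    import queue
    import threading
    import time

    cache = {}
    destage_q = queue.Queue()

    def host_write(lun, block, data):
        cache[(lun, block)] = data        # acknowledged at memory speed
        destage_q.put((lun, block, data))

    def destager():
        while True:
            lun, block, data = destage_q.get()
            time.sleep(0.01)              # stand-in for slow SATA back-end I/O
            destage_q.task_done()

    threading.Thread(target=destager, daemon=True).start()
    host_write(0x0100, 7, b"payload")     # returns immediately
    destage_q.join()                      # destage completes in the background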

As more external storage is added to the USP's virtual pool, the USP's performance will be affected. The type of storage will have a bearing on performance as well. "If you start putting a lot of active storage out there, let's say you backend it with 9980Vs or DMXs," says Mikkelsen, "that's going to be eating into the bandwidth of the [USP]."

An upcoming enhancement to HDS' Data Retention Utility, formerly called Open LDEV Guard, will make it possible to lock down data under retention without having to format the disk it resides on as write once, read many (WORM). Data Retention Utility makes WORM an attribute of the data, so the retention policies set for the data will follow it wherever it's moved within the USP-managed pool of storage.
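The notion of WORM as a data attribute, rather than a property of the underlying disk, can be sketched as metadata that migrates with the volume. The names below are illustrative, not the Data Retention Utility's actual API:

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class Volume:
        lun_id: int
        retain_until: Optional[date] = None   # WORM attribute rides with the volume

        def write(self, data: bytes) -> None:
            if self.retain_until and date.today() < self.retain_until:
                raise PermissionError("volume is under WORM retention")
            # ...normal write path...

    def migrate(vol: Volume, new_lun_id: int) -> Volume:
        # Retention follows the data to its new home; the target
        # disk never needs to be formatted as WORM.
        return Volume(lun_id=new_lun_id, retain_until=vol.retain_until)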

Reads to all connected storage are cached, using read-ahead technology to enhance performance throughout the pool. While initial read requests still have to wait for disk access, read-ahead anticipates subsequent requests and fetches the appropriate blocks of data, which can then be served at memory speed. With this method, connected SATA/ATA disks may see a performance improvement; for FC disks, HDS expects any performance hit to be minimal. Read-ahead technology is commonly used in many vendors' storage arrays.
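Read-ahead itself is simple to illustrate: when a sequential access pattern is detected, fetch the next few blocks before they're requested. A generic toy version of the technique (in a real array the prefetch would run asynchronously rather than inline):

    PREFETCH_DEPTH = 4
    cache = {}
    last_block = {}

    def read(lun, block, disk_read):
        if (lun, block) not in cache:
            cache[(lun, block)] = disk_read(lun, block)   # initial miss waits on disk
        if last_block.get(lun) == block - 1:              # sequential pattern detected
            for b in range(block + 1, block + 1 + PREFETCH_DEPTH):
                if (lun, b) not in cache:
                    cache[(lun, b)] = disk_read(lun, b)   # warm the cache ahead of time
        last_block[lun] = block
        return cache[(lun, block)]                        # later hits come from memory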

The TagmaStore arrays also add cache partitioning, which was not a feature of the Lightning line. Partitioning cache helps ensure that unruly applications don't overrun cache allocated to other users and adversely affect their performance. HDS says that cache partitioning will make it possible to guarantee quality of service for each partition's share of the cache.
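A sketch of the principle, assuming simple per-partition LRU quotas (not HDS' actual partitioning scheme): each partition evicts only from its own slots, so one application's working set can never push out another's.

    from collections import OrderedDict

    class CachePartition:
        def __init__(self, quota_slots):
            self.quota = quota_slots
            self.slots = OrderedDict()          # LRU order within this partition only

        def put(self, key, value):
            if key in self.slots:
                self.slots.move_to_end(key)     # refresh recency
            elif len(self.slots) >= self.quota:
                self.slots.popitem(last=False)  # evict from *this* partition only
            self.slots[key] = value

    partitions = {"oltp": CachePartition(10_000),
                  "batch": CachePartition(2_000)}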

This was first published in September 2004
