Primary storage technologies: HP and HDS ponder dedupe, compression

Vendors such as HP and HDS are in no hurry to offer dedupe and compression for their primary storage technologies, citing data reduction complexity and overhead.

Two major storage vendors -- Hewlett-Packard (HP) Co. and Hitachi Data Systems (HDS) -- appear in no great hurry to offer data reduction for their main primary storage technologies.

HP is expanding its StoreOnce backup deduplication software to other uses, and HDS OEM partner BlueArc Corp. said it will add primary storage deduplication technology from Permabit Technology Corp. But data reduction in the two vendors' SAN arrays appears less imminent.

Company execs said HP StoreOnce will eventually extend to the StorageWorks X9000 scale-out NAS product. But Lee Johns, director of product marketing for HP StorageWorks, said making dedupe available for primary storage is low on the priority list. He said HP will first make StoreOnce available on a scalable, multinode backup appliance, with HP Data Protector backup and recovery software, and on virtual machines for remote site deployment.

"It's a technology that inherently injects more complexity into managing your primary storage, and that's why we haven't done it yet," Johns said. "You really want to make technologies like deduplication invisible to the customer."

Johns said the deduplication technology will extend to virtual machines next year, allowing users to allocate a small amount of storage for dedupe purposes and then replicate the data to an HP StoreOnce appliance without having to worry about rehydrating the data.

"It's not primary storage, but it is closer to the source of the data," Johns said. "We're committed in 2011 to deliver deduplication closer to the data."

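The replication scenario Johns describes works because deduplicated data can travel as fingerprints: the source offers chunk digests and ships only the chunks the target lacks, so nothing has to be rehydrated in transit. Here is a minimal Python sketch of that idea; the fixed 4 KB chunks and SHA-256 digests are simplifying assumptions for illustration, not a description of StoreOnce internals.

```python
import hashlib

CHUNK = 4096  # fixed-size chunking keeps the sketch simple; real products vary

def fingerprints(data: bytes) -> list[tuple[str, bytes]]:
    """Pair each chunk with its SHA-256 digest."""
    return [(hashlib.sha256(data[i:i + CHUNK]).hexdigest(), data[i:i + CHUNK])
            for i in range(0, len(data), CHUNK)]

def replicate(data: bytes, target: dict) -> int:
    """Ship only chunks the target lacks; return the bytes actually sent."""
    sent = 0
    for digest, chunk in fingerprints(data):
        if digest not in target:   # the target already holds everything else
            target[digest] = chunk
            sent += len(chunk)
    return sent

target: dict = {}
print(replicate(b"A" * 8192 + b"B" * 4096, target))  # 8192: two unique chunks ship
print(replicate(b"A" * 8192 + b"C" * 4096, target))  # 4096: only the changed chunk ships
```
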
Claus Mikkelsen, chief scientist at HDS, said he prefers to focus on the larger subject of "capacity efficiencies," of which dedupe is just one piece, alongside technologies such as thin provisioning and RAID configurations.

"North of 95% of our arrays are formatted in a RAID 5 or RAID 6 configuration. You see very little RAID 10 in a Hitachi array because of the way we do distributed offset parity," Mikkelsen said. "It's a big difference in capacity in those two techniques."

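The capacity gap Mikkelsen refers to is simple arithmetic. The comparison below assumes a generic eight-drive group of 2 TB disks for illustration, not any particular Hitachi layout:

```python
def usable_fraction(level: str, drives: int) -> float:
    """Usable share of raw capacity for a single RAID group."""
    if level == "RAID 5":
        return (drives - 1) / drives   # one drive's worth of parity
    if level == "RAID 6":
        return (drives - 2) / drives   # two drives' worth of parity
    if level == "RAID 10":
        return 0.5                     # every block mirrored
    raise ValueError(level)

raw_tb = 8 * 2  # eight 2 TB drives
for level in ("RAID 5", "RAID 6", "RAID 10"):
    print(f"{level}: {usable_fraction(level, 8) * raw_tb:.0f} TB usable of {raw_tb} TB raw")
# RAID 5: 14 TB, RAID 6: 12 TB, RAID 10: 8 TB -- the gap Mikkelsen is pointing to
```
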
Deduplication is great for backups and archives, he added, noting that HDS plays in both spaces. The Hitachi Content Platform, for instance, offers compression and single-instance storage. But dedupe and compression for primary storage systems present a greater challenge.

"The problem with compression is it has to be decompressed, and there's a lot of overhead with compression," Mikkelsen continued. "There's a couple of different ways to do dedupe. One of them is a background task at rest. The other one is inline."

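Here is a minimal sketch of the two approaches Mikkelsen names, assuming fixed-size blocks and SHA-256 fingerprints: inline dedupe pays the hashing cost on the write path, while the background variant lands data at full size and reduces it later, off the I/O path.

```python
import hashlib

def write_inline(store: dict, index: list, block: bytes) -> None:
    """Inline dedupe: fingerprint on the write path; keep each unique block once."""
    digest = hashlib.sha256(block).hexdigest()
    if digest not in store:
        store[digest] = block
    index.append(digest)              # the logical volume is a list of fingerprints

def write_raw(raw_log: list, block: bytes) -> None:
    """Post-process path: land the block at full size, defer reduction."""
    raw_log.append(block)             # fast write, no hashing overhead

def background_dedupe(raw_log: list, store: dict, index: list) -> None:
    """The 'background task at rest': reduce blocks already written."""
    for block in raw_log:
        write_inline(store, index, block)
    raw_log.clear()

store, index, raw_log = {}, [], []
write_inline(store, index, b"x" * 4096)   # deduped immediately
write_raw(raw_log, b"x" * 4096)           # duplicate lands at full size first
background_dedupe(raw_log, store, index)  # reclaimed later
print(len(store), "unique block(s) for", len(index), "logical writes")
```
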
Hu Yoshida, HDS' chief technology officer, said with the way dedupe and compression are handled, "you have to have a little bit more knowledge than you have on a block device; that takes file type of knowledge.

"It's really a function of a file system," he continued. "And until those types of file systems or applications give information to the storage device -- which I think is going to be a trend -- the block storage device is not able to handle that, because one day you do a compression or a dedupe and it's this size record. The next day, if it's an active record, it's another size and it doesn't fit."
This was first published in November 2010
