Storwize adds HA, Compression Accelerator to primary storage data reduction device

Storwize's compression appliances, which sit inline in front of primary network-attached storage, can now be paired for HA and compress existing data on legacy systems.

Storwize Inc. today rolled out updates to its STN series of primary file storage data reduction products, adding high-availability (HA) features, reporting on capacity and compression ratios, and the ability to reclaim space by compressing customer data already stored on legacy systems.

Storwize now lets customers set up an active-active high-availability configuration between two devices, with automated failover and synchronized mirroring between the nodes of the compression configuration. Previously, Storwize offered only active-passive failover options.
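
Conceptually, an active-active pair behaves like the minimal Python sketch below: a toy model of synchronized mirroring with automatic failover, not Storwize's actual replication protocol (node names are invented for illustration).

```python
class Node:
    def __init__(self, name: str):
        self.name, self.alive, self.store = name, True, {}

class MirroredPair:
    """Toy active-active pair: every write is mirrored to all live
    nodes, and reads fail over automatically if a node dies."""

    def __init__(self, a: Node, b: Node):
        self.nodes = [a, b]

    def write(self, key: str, value: bytes) -> None:
        # Synchronized mirroring: apply the write to every live node.
        for n in self.nodes:
            if n.alive:
                n.store[key] = value

    def read(self, key: str) -> bytes:
        # Automated failover: any surviving node can serve the request.
        for n in self.nodes:
            if n.alive:
                return n.store[key]
        raise RuntimeError("both nodes down")

pair = MirroredPair(Node("stn-a"), Node("stn-b"))
pair.write("/share/file.dat", b"compressed payload")
pair.nodes[0].alive = False                 # simulate a node failure
assert pair.read("/share/file.dat") == b"compressed payload"
```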

The new Storwize Capacity Analyzer is a storage resource management (SRM) utility that reports on storage consumption and access, historical and predicted data growth, and data reduction trends for the entire storage environment, or at the file, share or directory level.

Previously, Storwize would only apply the vendor's compression algorithms -- based on standard Lempel-Ziv (LZ) compression -- to new data passing through the device on its way to back-end storage. With the new Compression Accelerator, the same Storwize appliance used for new data can also compress existing data in the environment, according to Storwize VP of technical strategy Steve Kenniston. Storwize claims customers can compress 10 TB of existing data per day per device, or 20 TB per day in an HA configuration.
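
Storwize describes its codec only as LZ-based. As a rough illustration, the Python sketch below uses zlib's DEFLATE (itself built on LZ77) as a stand-in to show the lossless round trip an inline compression appliance performs on data headed to back-end storage:

```python
import zlib

def compress_block(data: bytes, level: int = 6) -> bytes:
    """DEFLATE (LZ77 plus Huffman coding) as a stand-in codec."""
    return zlib.compress(data, level)

def decompress_block(blob: bytes) -> bytes:
    return zlib.decompress(blob)

# Inline path: new writes are compressed on their way to disk.
payload = b"name,dept,location\n" * 4096     # repetitive file data compresses well
stored = compress_block(payload)
print(f"compression ratio: {len(payload) / len(stored):.1f}:1")
assert decompress_block(stored) == payload   # lossless round trip
```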

Taneja Group founder and consulting analyst Arun Taneja said because Storwize devices sit in the data path, active-active HA will be a key feature to attract enterprise customers. "As an IT guy, I will not let you in if you're inline and have a performance impact, and you better give me the HA capability, too," he said.

Storwize claims more than 100 customers for its devices; the blue-chip references listed in its press briefing materials include Chevron, Polycom, PMC-Sierra, GE, Texas Instruments and Mazda.

"This topic is hot and getting hotter," Taneja said. "It's clear to me that you want to reduce the size of your data at the earliest opportunity and if you do that, all downstream stuff becomes more capacity-efficient."

Storwize claims its Random Access Compression Engine (RACE), which reserves free space within a compressed file to hold modified blocks when the file is changed, not only allows it to maintain random I/O performance, but also helps it better integrate with downstream data deduplication devices for backup or primary storage, like NetApp's deduplication for FAS filers.
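
Storwize has not published RACE's on-disk layout, but the reserved-space idea can be modeled as fixed-size slots with headroom, as in this hypothetical Python sketch (chunk and slot sizes are invented):

```python
import zlib

CHUNK = 4096   # logical (uncompressed) chunk size
SLOT = 2048    # fixed on-disk allocation: compressed bytes plus headroom

class CompressedFile:
    """Toy model: each chunk compresses into a fixed slot with spare
    room, so a rewrite lands in place without shifting later chunks."""

    def __init__(self, data: bytes):
        self.slots = [self._pack(data[i:i + CHUNK])
                      for i in range(0, len(data), CHUNK)]

    @staticmethod
    def _pack(chunk: bytes) -> bytes:
        blob = zlib.compress(chunk)
        # A real system would relocate the chunk if headroom ran out.
        assert len(blob) <= SLOT, "headroom exhausted"
        return blob

    def read(self, idx: int) -> bytes:
        # Random read: decompress one slot, never the whole file.
        return zlib.decompress(self.slots[idx])

    def write(self, idx: int, chunk: bytes) -> None:
        # Random write: recompress just this chunk into its own slot.
        self.slots[idx] = self._pack(chunk)

f = CompressedFile(b"A" * 4 * CHUNK)
f.write(1, b"B" * CHUNK)                # overwrite one 4 KB chunk in place
assert f.read(1) == b"B" * CHUNK
assert f.read(2) == b"A" * CHUNK        # neighboring chunks are untouched
```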

Kenniston said this is because Storwize compresses the data and inserts modified blocks the same way every time. That means edits to a file won't make it look new to a block-level deduplication system, speeding processing time as data is sent to the backup layer of the infrastructure. Block deduplication devices can also provide additional capacity reduction by deleting redundant blocks.
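
That reasoning can be demonstrated with a toy model: if compression is applied deterministically to fixed-size chunks (an assumption standing in for Storwize's actual method), editing one chunk changes only that chunk's compressed bytes, so a hash-based block deduplicator still recognizes the rest.

```python
import hashlib
import zlib

CHUNK = 4096

def compressed_chunks(data: bytes) -> list[bytes]:
    # Deterministic per-chunk compression: same input chunk -> same bytes.
    return [zlib.compress(data[i:i + CHUNK])
            for i in range(0, len(data), CHUNK)]

def fingerprints(chunks: list[bytes]) -> list[str]:
    # What a hash-based block deduplicator computes downstream.
    return [hashlib.sha256(c).hexdigest() for c in chunks]

original = bytes(4 * CHUNK)                  # four zero-filled chunks
edited = bytearray(original)
edited[CHUNK:2 * CHUNK] = b"y" * CHUNK       # edit only the second chunk

before = fingerprints(compressed_chunks(original))
after = fingerprints(compressed_chunks(bytes(edited)))
print(sum(a != b for a, b in zip(before, after)), "of", len(before),
      "chunks look new to dedup")            # prints: 1 of 4
```

Whole-file streaming compression, by contrast, would let a single edit ripple through every later compressed byte, making most of the file look new to downstream deduplication.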

Taneja said some customers will still prefer out-of-band approaches to data reduction given that it's a relatively new technology. However, the tradeoff is that out-of-band devices such as Ocarina's perform data compression post-process, meaning the full disk storage capacity still has to be available as a landing area for unreduced data.

Right now, data deduplication is still fragmented within the data center infrastructure, with different companies and products providing the feature for different types of backup and primary storage. Taneja said the ultimate vision could be a single product that can handle data reduction from one end of the lifecycle to another.

"I'm not sure if there's a technology that could straddle both, but technologists I talk to tell me that there is," he said.

However, Kenniston said Storwize does not take that stance. "We never say never, but it gets back to the fact that data deduplication is better for backup because there's a lot of repetitive data, and it can impact performance," he said.
