Storwize Inc. today rolled out updates to its STN series of primary file storage data reduction products, adding high-availability (HA) features, reporting on capacity and compression ratios, and the ability to reclaim space by compressing customer data stored on legacy systems.
Storwize now lets customers set up a high-availability configuration between devices, with automated failover and synchronized mirroring between the two nodes of the compression configuration. Previously, Storwize offered only active-passive failover.
The new Storwize Capacity Analyzer is a storage resource management (SRM) utility that reports on storage consumption and access, historical and predicted data growth, and data reduction trends for the entire storage environment, or at the file, share or directory level.
Taneja Group founder and consulting analyst Arun Taneja said because Storwize devices sit in the data path, active-active HA will be a key feature to attract enterprise customers. "As an IT guy, I will not let you in if you're inline and have a performance impact, and you better give me the HA capability, too," he said.
Storwize claims more than 100 customers for its devices, including blue-chip references listed in its press briefing materials: Chevron, Polycom, PMC-Sierra, GE, Texas Instruments and Mazda.
"This topic is hot and getting hotter," Taneja said. "It's clear to me that you want to reduce the size of your data at the earliest opportunity and if you do that, all downstream stuff becomes more capacity-efficient."
Storwize claims its Random Access Compression Engine (RACE), which reserves free space within a compressed file to hold modified blocks when the file changes, not only maintains random I/O performance but also integrates better with downstream data deduplication devices for backup or primary storage, such as NetApp's deduplication for FAS filers.
Kenniston said this is because Storwize compresses the data and inserts modified blocks the same way every time. That means edits to a file won't make it look new to a block-level deduplication system, speeding processing time as data is sent to the backup layer of the infrastructure. Block deduplication devices can then provide additional capacity reduction by deleting redundant blocks.
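The mechanism described above can be sketched in a few lines. This is not Storwize's actual RACE implementation; it is a minimal illustration, with assumed chunk and slot sizes, of why compressing in fixed-size chunks and padding each compressed chunk into a reserved on-disk slot means an edit to one chunk leaves every other slot byte-identical, so a downstream block-level deduplicator still recognizes the unchanged blocks:

```python
import zlib

CHUNK = 4096   # logical chunk size (assumed for illustration)
SLOT = 2048    # reserved on-disk slot per compressed chunk (assumed)

def compress_chunks(data: bytes) -> list[bytes]:
    """Compress data chunk by chunk, padding each result to a fixed slot."""
    slots = []
    for i in range(0, len(data), CHUNK):
        comp = zlib.compress(data[i:i + CHUNK])
        # Pad to the fixed slot size; a real system would handle the
        # overflow case where compressed output exceeds the slot.
        slots.append(comp.ljust(SLOT, b"\0"))
    return slots

original = b"A" * CHUNK + b"B" * CHUNK + b"C" * CHUNK
edited   = b"A" * CHUNK + b"X" * CHUNK + b"C" * CHUNK  # modify only chunk 1

before = compress_chunks(original)
after = compress_chunks(edited)

# Only the modified chunk's slot changes; slots 0 and 2 are byte-identical,
# so a block-level deduplicator treats them as duplicates it already holds.
changed = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
print(changed)  # -> [1]
```

Without the reserved padding, a changed chunk whose compressed size grew would shift every subsequent byte, making all following blocks look new to the deduplicator.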
Taneja said some customers will still prefer out-of-band approaches to data reduction given that it's a relatively new technology. However, the tradeoff is that out-of-band devices such as Ocarina's perform data compression post-process, meaning the full disk storage capacity still has to be available as a landing area for undeduplicated data.
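The tradeoff Taneja describes comes down to simple capacity math. Using hypothetical numbers (a 10 TB data set and a 3:1 reduction ratio, both assumed for illustration), an inline device needs only the reduced footprint, while a post-process device must first provision the full landing area:

```python
# Hypothetical capacity planning for inline vs. post-process data reduction.
dataset_tb = 10   # assumed raw data set size
ratio = 3         # assumed 3:1 reduction ratio

inline_required = dataset_tb / ratio   # data lands already reduced
post_process_required = dataset_tb     # full landing area needed first

# Inline needs ~3.3 TB; post-process still needs the full 10 TB up front.
print(inline_required, post_process_required)
```

The post-process system eventually frees the difference, but the peak capacity requirement remains the full data set size.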
Right now, data deduplication is still fragmented within the data center infrastructure, with different companies and products providing the feature for different types of backup and primary storage. Taneja said the ultimate vision could be a single product that can handle data reduction from one end of the lifecycle to another.
"I'm not sure if there's a technology that could straddle both, but technologists I talk to tell me that there is," he said.
However, Kenniston said Storwize does not take that stance. "We never say never, but it gets back to the fact that data deduplication is better for backup because there's a lot of repetitive data, and it can impact performance," he said.