Storage magazine
Vol. 9 Num. 6 September 2010

Primary storage dedupe: Requisite for the future

Tools like automated tiering and thin provisioning help users cope with capacity demands, but more drastic measures, such as primary storage data reduction, are needed.

Ten years ago, 10 TB was considered a large storage environment. Today it's common to see hundreds of terabytes, and some environments have reached the double-digit petabyte range. It's safe to assume that data storage capacity growth will continue over the next 10 years as storage environments measured in exabytes begin to emerge and, over time, become mainstream. One customer I spoke with claimed they would have an exabyte of data within the next three years. Housing that much physical storage in the data center is ultimately untenable.

So how do we solve the problem? A big part of the answer will come from a combination of technologies. Hard disk drives will continue to get denser, and higher-capacity drives store more data in the same physical space. However, fatter disk drives impact application ...
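To make the data reduction idea concrete, here is a minimal sketch of how block-level deduplication works in principle: identical blocks are detected by content hash and stored only once, with duplicates kept as references. The function names and chunking are illustrative assumptions, not any vendor's actual implementation.

```python
import hashlib

def dedupe_store(blocks):
    """Toy block-level dedupe: store one physical copy per unique
    block, and record the logical layout as a list of content hashes."""
    store = {}   # content hash -> unique block data (physical copies)
    index = []   # logical layout: one hash per logical block
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:      # first time we see this content
            store[digest] = block
        index.append(digest)         # duplicates become references
    return store, index

# Six logical blocks but only three unique ones -> 2:1 reduction.
blocks = [b"alpha", b"beta", b"alpha", b"gamma", b"beta", b"alpha"]
store, index = dedupe_store(blocks)
ratio = len(index) / len(store)   # logical blocks per physical block
```

Reading the data back is just a lookup of each hash in `index` against `store`, which is why dedupe is transparent to applications but sensitive to hash-index performance on primary (latency-critical) storage.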

Features in this issue

  • Virtualizing NAS

    Companies of all sizes are being inundated with unstructured data that's straining the limits of traditional file storage. File virtualization can pool those strained resources and provide for future growth.

Columns in this issue

  • ILM lives again!

    Information lifecycle management faded into oblivion without getting serious notice. But it's back now, with a new name and more realistic goals.

