Storage magazine

February 2014, Vol. 12 No. 12

External storage might make sense for Hadoop

Using Hadoop to drive big data analytics doesn't necessarily mean building clusters of distributed storage -- good old external storage might be a better choice.

Hadoop's original architecture used relatively cheap commodity servers and their local storage in a scale-out fashion, with the goal of making it cost-effective to exploit data that was previously too expensive to analyze. We've all heard about big data volume, variety, velocity and a dozen other "v" words used to describe these previously hard-to-handle data sets. With a target defined that broadly, most businesses can point to some kind of big data they'd like to exploit.

Big data grows bigger every day, and storage vendors with their relatively expensive SAN and network-attached storage (NAS) systems are starting to work their way into the big data party. They can't simply leave all that data to server vendors filling boxes with commodity disk drives. Even if Hadoop adoption is still in its early stages, the competition and confusing ...
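To make the local-versus-external storage choice concrete, here is a minimal sketch (not from the article) of the two HDFS settings where that choice shows up. The property names dfs.datanode.data.dir and dfs.replication are standard Hadoop configuration; the mount points shown are hypothetical examples.

    <!-- hdfs-site.xml: classic scale-out layout on local commodity disks.
         Directory paths are hypothetical. -->
    <configuration>
      <property>
        <name>dfs.datanode.data.dir</name>
        <!-- One directory per local JBOD disk; the DataNode spreads
             block replicas across all of them. -->
        <value>/disk1/hdfs/data,/disk2/hdfs/data,/disk3/hdfs/data</value>
      </property>
      <property>
        <name>dfs.replication</name>
        <!-- Triple replication is how HDFS gets durability out of
             cheap, individually unreliable drives. -->
        <value>3</value>
      </property>
    </configuration>

Pointing dfs.datanode.data.dir at a mounted external array instead is mechanically the same one-line change; the trade-off the article weighs is whether the array's own RAID protection justifies reducing HDFS replication (and its 3x capacity overhead) at SAN or NAS prices.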

