Complete guide to Hadoop technology and storage

Last updated: October 2013

Essential Guide


Editor's note

Hadoop technology has been discussed hand in hand with big data for some time now, but many IT professionals still don't know the full extent of what the technology can do or how to use it.

The open source Hadoop framework is based on Google's MapReduce programming model and can process very large data sets at a granular level. It offers analytics at a low cost and high speed that some analysts say can't be achieved any other way. Essential to the effectiveness of Hadoop is the Hadoop Distributed File System (HDFS), which enables parallel processing by distributing data across the nodes of a cluster and provides fault tolerance.
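To make the MapReduce model concrete, here is a minimal sketch of the canonical word-count job written against Hadoop's Java MapReduce API. The class name and input/output paths are illustrative; a real job would be packaged as a JAR and submitted with the hadoop command, with HDFS paths supplied as arguments.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);  // combine locally to cut shuffle traffic
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // Paths are illustrative; pass real HDFS paths on the command line.
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}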

However, HDFS is also the source of one of the main complaints users have about Hadoop technology: expanded capacity requirements, because HDFS stores three copies of each piece of data by default so that blocks remain available if a DataNode fails or is taken offline. A separate weakness is that the NameNode, which tracks where data lives and coordinates its replication and distribution, is a single point of failure. Other complaints point to the complexity of the Java framework that Hadoop is built on.
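The capacity overhead of triple replication can be tuned. The sketch below, written against the standard Hadoop Java client API with an illustrative file path, lowers the default replication factor in the client configuration and changes it for a single file already stored in HDFS; a lower factor trades storage overhead for reduced fault tolerance.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Cluster default comes from dfs.replication (3 unless overridden);
    // this setting applies to files created through this client configuration.
    conf.setInt("dfs.replication", 2);

    FileSystem fs = FileSystem.get(conf);
    // Request a new replication factor for one existing file (path is illustrative).
    Path file = new Path("/user/example/dataset.csv");
    boolean requested = fs.setReplication(file, (short) 2);
    System.out.println("Replication change requested: " + requested);
    fs.close();
  }
}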

Despite the hurdles with Hadoop technology, analysts and users say the benefits are worth it. To help you determine that for yourself, this guide will walk you through the basics of what Hadoop technology can achieve, lay out the main concerns about the technology, and outline how it works with storage and the cloud.

1. Understanding Hadoop technology and storage

Because Hadoop stores three copies of each piece of data by default, storage in a Hadoop cluster must be able to accommodate a large volume of data, and traditional storage systems are not always a good fit for the Hadoop architecture. The links below explain how Hadoop clusters and HDFS work with various storage systems, including network-attached storage (NAS), SANs and object storage.
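One reason HDFS can coexist with other back ends is that Hadoop applications address storage through the FileSystem abstraction, so the same client code can read from HDFS or from another supported store simply by changing the URI scheme. The sketch below is a minimal illustration under that assumption; the host, bucket and file paths are made up, and an object-store scheme such as s3a:// only works when the corresponding Hadoop connector module is on the classpath.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class StorageAgnosticRead {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // The same API works against HDFS or another supported store;
    // only the URI scheme changes (both URIs here are illustrative).
    URI hdfsUri = URI.create("hdfs://namenode:8020/");
    // URI objectStoreUri = URI.create("s3a://example-bucket/");  // requires the hadoop-aws connector

    FileSystem fs = FileSystem.get(hdfsUri, conf);
    Path path = new Path("/user/example/input.txt");
    try (BufferedReader reader =
             new BufferedReader(new InputStreamReader(fs.open(path)))) {
      String line;
      while ((line = reader.readLine()) != null) {
        System.out.println(line);
      }
    }
    fs.close();
  }
}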
