Over the last few years, big data analytics has become all the rage. Even so, many organizations are discovering that their existing mining and analysis techniques simply are not up to the task of handling big data. One possible solution to this problem is to build Hadoop clusters, but they are not suitable for every situation. Let's examine some of the pros and cons of using Hadoop clusters.
What are Hadoop clusters?
A Hadoop cluster is a type of computational cluster designed specifically for storing and analyzing huge amounts of unstructured data. It distributes the data analysis workload across multiple cluster nodes, which process the data in parallel.
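The distribution idea can be sketched on a single machine. In this minimal Python sketch, each "piece" of the dataset is handed to its own worker, the way a cluster assigns splits to nodes; the `analyze` function is a hypothetical stand-in for whatever per-node analysis a real job would run, not a Hadoop API:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze(piece):
    """Stand-in for the per-node analysis step: count records in one piece."""
    return len(piece)

# The full dataset is split into pieces; each piece is processed by its own
# worker, the way a Hadoop cluster assigns data splits to cluster nodes.
dataset = [["a", "b"], ["c"], ["d", "e", "f"]]
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(analyze, dataset))

print(sum(results))  # 6
```

The combined answer is assembled from the partial results, which is the same divide-process-combine shape a real cluster job follows, just without the network and the distributed file system.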
Benefits of building Hadoop clusters
The primary benefit of using Hadoop clusters is that they are ideally suited to analyzing big data, which tends to be widely distributed and largely unstructured. Hadoop handles this type of data well because it breaks the data into pieces and assigns each "piece" to a specific cluster node for analysis. The data does not have to be uniform, because each piece is handled by a separate process on a separate cluster node.
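The break-apart-and-reassemble pattern described above is classically expressed as MapReduce, Hadoop's original processing model. This is a minimal single-machine sketch of the three phases using word counting as the example job; the phase names are illustrative, not Hadoop's actual API:

```python
from collections import defaultdict

def map_phase(chunk):
    """Map: each node turns its piece of raw text into (key, value) pairs."""
    return [(word, 1) for word in chunk.split()]

def shuffle_phase(mapped):
    """Shuffle: group all values emitted for the same key together."""
    groups = defaultdict(list)
    for pairs in mapped:
        for key, value in pairs:
            groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values into a final result."""
    return {key: sum(values) for key, values in groups.items()}

# Two "nodes" each process one piece of the input independently.
chunks = ["big data big analysis", "big cluster"]
counts = reduce_phase(shuffle_phase([map_phase(c) for c in chunks]))
print(counts["big"])  # 3
```

Note that the two chunks are different sizes and shapes; nothing in the map phase cares, which is exactly why the input data does not have to be uniform.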
Another benefit of Hadoop clusters is scalability. One of the problems with big data analysis is that, just like any other type of data, big data is always growing. Furthermore, big data is most useful when it is analyzed in real time, or as close to real time as possible. A Hadoop cluster's parallel processing capabilities certainly help with the speed of the analysis, but as the volume of data to be analyzed grows, the cluster's processing power may become inadequate. Thankfully, it is possible to scale the cluster by adding additional cluster nodes.
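To make the scaling argument concrete, here is a deliberately naive capacity model. It assumes perfectly parallel work and ignores coordination overhead, and the per-node throughput figure is a made-up illustration, not a benchmark:

```python
def estimated_hours(terabytes, nodes, tb_per_node_per_hour=0.5):
    """Naive linear model: total data volume / aggregate cluster throughput."""
    return terabytes / (nodes * tb_per_node_per_hour)

# Under this toy model, quadrupling the node count quarters the runtime.
print(estimated_hours(100, 10))  # 20.0 hours on 10 nodes
print(estimated_hours(100, 40))  # 5.0 hours after scaling out to 40
```

Real clusters scale sublinearly because of shuffle traffic, stragglers and scheduling overhead, but the basic lever is the same: more nodes, more aggregate throughput.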
A third benefit to Hadoop clusters is cost. This might sound strange when you consider that big data analysis is an enterprise IT function, and historically speaking, few things in enterprise IT have ever been cheap. However, Hadoop clusters can prove to be a very cost-effective solution.
There are two main reasons why Hadoop clusters tend to be inexpensive. First, the required software is open source; you can download the Apache Hadoop distribution for free. Second, Hadoop runs on commodity hardware, so it is possible to build a powerful cluster without spending a fortune on server hardware.
One more benefit of Hadoop clusters is that they are resilient to failure. When a piece of data is sent to a node for analysis, the data is also replicated to other cluster nodes. That way, if a node fails, additional copies of the node's data exist elsewhere in the cluster, and the data can still be analyzed.
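The replication idea can be sketched as follows. HDFS keeps three copies of each block by default; this toy round-robin placement (with hypothetical block and node names) shows why losing a single node loses no data:

```python
REPLICATION_FACTOR = 3  # HDFS's default number of copies per block

def place_blocks(blocks, nodes, copies=REPLICATION_FACTOR):
    """Place each block on `copies` distinct nodes, round-robin style."""
    placement = {}
    for i, block in enumerate(blocks):
        placement[block] = {nodes[(i + j) % len(nodes)] for j in range(copies)}
    return placement

nodes = ["node1", "node2", "node3", "node4"]
placement = place_blocks(["blk-1", "blk-2", "blk-3"], nodes)

# Simulate losing node2: every block still has surviving copies elsewhere.
survivors = {b: holders - {"node2"} for b, holders in placement.items()}
print(all(survivors.values()))  # True
```

A real NameNode uses rack-aware placement rather than round-robin, and re-replicates under-replicated blocks after a failure, but the availability argument is the same: no block lives on only one machine.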
Deciding against Hadoop clusters
In spite of their many benefits, Hadoop clusters are not a good solution for every organization's data analysis needs. An organization with relatively little data, for example, might not benefit from a Hadoop cluster even if that data required intense analysis.
Another disadvantage to using a Hadoop cluster is that the clustering solution is based on the idea that data can be "taken apart" and analyzed by parallel processes running on separate cluster nodes. If the analysis cannot be adapted for use in a parallel processing environment, then a Hadoop cluster simply is not the right tool for the job.
Probably the most significant drawback to using a Hadoop cluster is that there is a significant learning curve associated with building, operating and supporting the cluster. Unless you happen to have a Hadoop expert in your IT department, it is going to take some time to learn how to build the cluster and perform the required data analysis.
So should you consider building a Hadoop cluster? The answer depends on whether your data analysis needs are well suited to a Hadoop cluster's capabilities. If you aren't sure whether a Hadoop cluster could benefit your organization, you can always download a free copy of Apache Hadoop, install it on some spare hardware and see how it works before you commit to building a large-scale cluster.