What is Hadoop – The Components, Use Cases, and Importance

What is Hadoop?

Apache Hadoop is an open-source framework used to efficiently store and process large datasets, ranging in size from gigabytes to petabytes.

Instead of using one large computer to store and process the data, Hadoop allows clustering multiple computers to analyze massive datasets in parallel more quickly.

Components of Hadoop

Hadoop consists of four main modules:

1. Hadoop Distributed File System (HDFS)

A distributed file system that runs on standard or low-end hardware. HDFS provides better data throughput than traditional file systems, in addition to high fault tolerance and native support of large datasets.

2. Yet Another Resource Negotiator (YARN)

Manages and monitors cluster nodes and resource usage. It schedules jobs and tasks.
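
As a rough illustration of how an application can ask YARN about cluster state, the sketch below uses the YarnClient API to list running nodes. The ResourceManager address is a placeholder, not a real endpoint:

    import java.util.List;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.yarn.api.records.NodeReport;
    import org.apache.hadoop.yarn.api.records.NodeState;
    import org.apache.hadoop.yarn.client.api.YarnClient;

    public class ClusterStatus {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Placeholder ResourceManager address; replace with your cluster's.
            conf.set("yarn.resourcemanager.address", "resourcemanager.example.com:8032");

            YarnClient yarnClient = YarnClient.createYarnClient();
            yarnClient.init(conf);
            yarnClient.start();

            // Ask the ResourceManager for a report on every running node.
            List<NodeReport> nodes = yarnClient.getNodeReports(NodeState.RUNNING);
            for (NodeReport node : nodes) {
                System.out.printf("%s: %d containers, %s used%n",
                        node.getNodeId(), node.getNumContainers(), node.getUsed());
            }
            yarnClient.stop();
        }
    }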

3. MapReduce

A framework that lets programs perform parallel computation on data. The map task takes input data and converts it into a dataset that can be computed on as key-value pairs. The output of the map tasks is consumed by reduce tasks, which aggregate it to produce the desired result.
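
To make the two phases concrete, here is the canonical word-count example, a minimal sketch using Hadoop's MapReduce API (the class names are ours, not part of Hadoop):

    import java.io.IOException;

    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Map task: turn each input line into (word, 1) key-value pairs.
    public class WordCountMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            for (String token : line.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce task: sum the counts emitted for each word.
    class WordCountReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable count : counts) {
                sum += count.get();
            }
            context.write(word, new IntWritable(sum));
        }
    }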

4. Hadoop Common

Provides common Java libraries that can be used across all modules.
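
One such shared library is the Configuration class, which every module uses to read cluster settings. A minimal sketch, with a placeholder NameNode address:

    import org.apache.hadoop.conf.Configuration;

    public class CommonConfigDemo {
        public static void main(String[] args) {
            // Configuration lives in Hadoop Common and is shared by HDFS, YARN,
            // and MapReduce alike. It reads *-site.xml files from the classpath
            // and also accepts programmatic overrides like the one below.
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:9000"); // placeholder

            System.out.println("Default file system: " + conf.get("fs.defaultFS"));
        }
    }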

Hadoop makes it easier to use all the storage and processing capacity in cluster servers and to execute distributed processes against huge amounts of data.

Hadoop provides the building blocks on which other services and applications can be built.

Applications that collect data in various formats can place data into the Hadoop cluster by using an API operation to connect to the NameNode. The NameNode tracks the file directory structure and placement of “chunks” for each file, replicated across DataNodes.
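
In practice, that API operation is usually Hadoop's FileSystem interface. The sketch below, with placeholder paths and address, copies a local file into HDFS; behind the scenes, the NameNode assigns each replicated block to DataNodes:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsIngest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:9000"); // placeholder

            // FileSystem.get() connects to the NameNode; the NameNode decides
            // which DataNodes receive each replicated block ("chunk").
            try (FileSystem fs = FileSystem.get(conf)) {
                Path local = new Path("/tmp/events.log");       // placeholder local file
                Path remote = new Path("/data/raw/events.log"); // placeholder HDFS path
                fs.copyFromLocalFile(local, remote);
                System.out.println("Replication factor: "
                        + fs.getFileStatus(remote).getReplication());
            }
        }
    }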

To query the data, submit a MapReduce job made up of many map and reduce tasks that run against the data in HDFS spread across the DataNodes. Map tasks run on each node against the input files supplied, and reducers run to aggregate and organize the final output.
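
Submitting such a job is done through a small driver class. This sketch wires together the WordCountMapper and WordCountReducer from the earlier example; the input and output HDFS paths come from the command line:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class WordCountDriver {
        public static void main(String[] args) throws Exception {
            Job job = Job.getInstance(new Configuration(), "word count");
            job.setJarByClass(WordCountDriver.class);

            // Map tasks run where the HDFS blocks live; reducers aggregate.
            job.setMapperClass(WordCountMapper.class);
            job.setCombinerClass(WordCountReducer.class); // local pre-aggregation
            job.setReducerClass(WordCountReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input dir
            FileOutputFormat.setOutputPath(job, new Path(args[1])); // must not exist yet
            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }

Packaged into a jar, it would typically be launched with a command like: hadoop jar wordcount.jar WordCountDriver /data/raw /data/out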

Hadoop Use Cases

1. Hadoop in the Telecom Industry

Let’s look at a couple of examples from the telecom industry where Big Data and Hadoop solved a critical business problem.

China Mobile Guangdong has an extensive network of customers. Not only must it maintain billions of mobile call records, but the business also depends on real-time access to customers' call records and billing information.

Traditional database management systems couldn't scale in this scenario, so the company built a cost-effective solution on Hadoop. With the help of Intel technology, it used HBase to store billions of call detail records; by one estimate, nearly 30 terabytes of data are added to the database every month.
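
As an illustration only (not China Mobile's actual schema), writing a call detail record into HBase might look like the following; the table name, column family, and row-key design are assumptions:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.TableName;
    import org.apache.hadoop.hbase.client.Connection;
    import org.apache.hadoop.hbase.client.ConnectionFactory;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Table;
    import org.apache.hadoop.hbase.util.Bytes;

    public class CdrWriter {
        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();

            try (Connection connection = ConnectionFactory.createConnection(conf);
                 Table cdrTable = connection.getTable(TableName.valueOf("cdr"))) {
                // Hypothetical row key: subscriber id + timestamp, so one
                // subscriber's records sit together for fast range scans.
                Put record = new Put(Bytes.toBytes("subscriber123#20240101T120000"));
                record.addColumn(Bytes.toBytes("call"), Bytes.toBytes("callee"),
                        Bytes.toBytes("subscriber456"));
                record.addColumn(Bytes.toBytes("call"), Bytes.toBytes("durationSec"),
                        Bytes.toBytes(182L));
                cdrTable.put(record);
            }
        }
    }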

Another example of Big Data management in the telecom industry comes from Nokia, which stores and analyzes massive volumes of data generated by the mobile phones it manufactures.

To give a sense of the scale of Nokia's Big Data: the company manages 100 TB of structured data along with 500+ TB of semi-structured data. Cloudera's Hadoop distribution handles this variety of data and processes it at petabyte scale.

2. Hadoop in the Retail Sector

With ever-increasing interaction across social media and retail channels, customers compare products, services, and prices across multiple online and brick-and-mortar retailers.

Given this behavior, consumers can quickly shift from one retailer to another, which makes it essential for retail companies to tap this information and track the leaks in their sales funnels.

Retailers therefore turn to big data analytics on Hadoop to understand customers' purchase behavior.

Big Data and Hadoop are being used in the retail industry for the following use cases:

  • Inventory forecasting.
  • Dynamic pricing of products.
  • Supply chain efficiency.
  • Targeted and customized promotion and marketing.
  • Fraud detection and prevention.

3. Hadoop in the Healthcare Sector

Few industries have benefited from the use of Hadoop as much as healthcare has.

The Healthcare industry leverages Big Data for curing diseases, reducing medical costs, predicting and managing epidemics, and maintaining the quality of human life by keeping track of large-scale health indexes and metrics.

The big data generated in the healthcare sector is mainly due to patient record keeping and regulatory requirements.

McKinsey has projected that the efficient use of Big Data could reduce healthcare spending by $300-$500 billion globally.

The data generated by electronic health devices is difficult to analyze with traditional database management systems; the complexity and volume of healthcare data are the primary forces driving the industry's transition from legacy systems to Hadoop. At this scale, Hadoop enables straightforward data representation, database design, clinical decision analytics, and data querying, with fault tolerance built in.

4. Hadoop in the Financial Sector

After the economic recession, many financial institutions and national monetary authorities began maintaining Hadoop clusters holding petabytes of financial data aggregated from multiple enterprise and legacy database systems.

Alongside that aggregation, banks and financial institutions began pulling in other data sources, such as customer call records, chat and web logs, and email correspondence.

When data at this unprecedented scale is analyzed with Hadoop, MapReduce, and techniques such as sentiment analysis, text processing, pattern matching, and graph analysis, banks can identify the same customer across different sources and make more accurate risk assessments.
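
As a hedged sketch of the "same customer across sources" idea, a MapReduce job can key every record on a normalized customer identifier and let the reducer collect the systems each customer appears in; the CSV layout and field positions here are assumptions:

    import java.io.IOException;
    import java.util.Locale;
    import java.util.TreeSet;

    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;
    import org.apache.hadoop.mapreduce.Reducer;

    // Assumed input: CSV lines of the form "source,customerEmail,payload".
    public class CustomerLinkMapper extends Mapper<LongWritable, Text, Text, Text> {
        @Override
        protected void map(LongWritable offset, Text line, Context context)
                throws IOException, InterruptedException {
            String[] fields = line.toString().split(",", 3);
            if (fields.length == 3) {
                // Normalize the identifier so "Jane@Bank.com" in the call logs
                // matches "jane@bank.com" in the web logs.
                String customer = fields[1].trim().toLowerCase(Locale.ROOT);
                context.write(new Text(customer), new Text(fields[0]));
            }
        }
    }

    class CustomerLinkReducer extends Reducer<Text, Text, Text, Text> {
        @Override
        protected void reduce(Text customer, Iterable<Text> sources, Context context)
                throws IOException, InterruptedException {
            TreeSet<String> distinct = new TreeSet<>();
            for (Text source : sources) {
                distinct.add(source.toString());
            }
            // One output line per customer listing every system they appear in.
            context.write(customer, new Text(String.join(",", distinct)));
        }
    }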

Predicting market behavior is another classic problem in the financial sector. Different market sources, such as stock exchanges, banks, revenue and proceeds departments, and securities markets, each hold massive volumes of data, yet their performance is interdependent.

Hadoop provides the technological backbone: a platform where data from such sources can be compiled and processed in real time.

Morgan Stanley, with over $350 billion in assets, is one of the world's largest financial services organizations. It relies on the Hadoop framework to make industry-critical investment decisions, because Hadoop scales to petabytes of data in a way that traditional database systems cannot.


If you want to learn more about Big Data, check out our completely free Big Data course and get a step closer to becoming a certified Big Data engineer with our online Big Data and Hadoop Spark course. Discover market trends, unknown correlations, customer preferences, and hidden patterns that help businesses make data-driven decisions, and learn how to systematically extract and analyze data sets that are too large and complex for traditional tools using Hadoop and Spark.