HADOOP - Big Data Overview and Solutions
• Greeting with ‘Namaste’ by joining hands together, followed by a 2-3 minute happy session, celebrating the birthday of any student of the respective class, and the National Anthem.
The lecture starts with answer writing on quotations.
• Review of the previous session: Introduction
Topic to be discussed today: Hadoop and Big Data
Due to the advent of new technologies, devices, and communication means like social
networking sites, the amount of data produced by mankind is growing rapidly every year. The
amount of data produced by us from the beginning of time till 2003 was 5 billion gigabytes. If you pile up this data in the form of disks, it may fill an entire football field. The same amount was created every two days in 2011, and every ten minutes in 2013, and this rate is still growing enormously. Though all the information produced is meaningful and can be useful when processed, it is being neglected.
Big Data is a collection of large datasets that cannot be processed using traditional computing
techniques. It is not a single technique or a tool, rather it involves many areas of business and
technology.
Big data involves the data produced by different devices and applications. Given below are some
of the fields that come under the umbrella of Big Data.
• Black Box Data: It is a component of helicopters, airplanes, jets, etc. It captures the voices of the flight crew, recordings of microphones and earphones, and the performance information of the aircraft.
• Social Media Data: Social media such as Facebook and Twitter hold information and the views posted by millions of people across the globe.
• Stock Exchange Data: The stock exchange data holds information about the ‘buy’ and ‘sell’ decisions made by customers on shares of different companies.
• Power Grid Data: The power grid data holds information about the power consumed by a particular node with respect to a base station.
• Transport Data: Transport data includes model, capacity, distance and availability of a
vehicle.
• Search Engine Data: Search engines retrieve lots of data from different databases.
Thus, Big Data includes huge volume, high velocity, and an extensible variety of data. The data in it will be of three types: structured data (such as relational data), semi-structured data (such as XML data), and unstructured data (such as Word and PDF documents, text, and media logs).
Benefits of Big Data
• Using the information kept in social networks like Facebook, marketing agencies are learning about the response to their campaigns, promotions, and other advertising media.
• Using information in social media such as the preferences and product perception of their consumers, product companies and retail organizations are planning their production.
• Using the data regarding the previous medical history of patients, hospitals are providing better and quicker service.
Big Data Technologies
Big data technologies are important in providing more accurate analysis, which may lead to more
concrete decision-making resulting in greater operational efficiencies, cost reductions, and
reduced risks for the business.
To harness the power of big data, you would require an infrastructure that can manage and
process huge volumes of structured and unstructured data in real-time and can protect data
privacy and security.
There are various technologies in the market from different vendors including Amazon, IBM,
Microsoft, etc., to handle big data. While looking into the technologies that handle big data, we
examine the following two classes of technology:
Operational Big Data
These include systems like MongoDB that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored.
NoSQL Big Data systems are designed to take advantage of new cloud computing architectures
that have emerged over the past decade to allow massive computations to be run inexpensively
and efficiently. This makes operational big data workloads much easier to manage, cheaper, and
faster to implement.
Some NoSQL systems can provide insights into patterns and trends based on real-time data with
minimal coding and without the need for data scientists and additional infrastructure.
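As an illustration of an operational workload, below is a minimal sketch using the MongoDB Java driver: a document is written and immediately read back, the kind of real-time, interactive access these systems are built for. The connection URI, database, and collection names are placeholders chosen for this example, not values from any particular deployment.

import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoDatabase;
import org.bson.Document;
import static com.mongodb.client.model.Filters.eq;

public class OperationalExample {
    public static void main(String[] args) {
        // Connect to a local MongoDB instance (the URI is a placeholder for illustration).
        try (MongoClient client = MongoClients.create("mongodb://localhost:27017")) {
            MongoDatabase db = client.getDatabase("shop");
            MongoCollection<Document> orders = db.getCollection("orders");

            // Capture and store a record as part of an interactive workload...
            orders.insertOne(new Document("customer", "alice").append("amount", 42.0));

            // ...and read it back immediately, as an operational system would.
            Document found = orders.find(eq("customer", "alice")).first();
            System.out.println(found.toJson());
        }
    }
}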
Analytical Big Data
These include systems like Massively Parallel Processing (MPP) database systems and MapReduce that provide analytical capabilities for retrospective and complex analysis that may touch most or all of the data.
MapReduce provides a new method of analyzing data that is complementary to the capabilities provided by SQL, and a system based on MapReduce can be scaled up from single servers to thousands of high-end and low-end machines.
These two classes of technology are complementary and frequently deployed together.
Big Data Challenges
The major challenges associated with big data are as follows:
• Capturing data
• Curation
• Storage
• Searching
• Sharing
• Transfer
• Analysis
• Presentation
To fulfill the above challenges, organizations normally take the help of enterprise servers.
HADOOP ─ BIG DATA SOLUTIONS
Traditional Enterprise Approach
In this approach, an enterprise will have a computer to store and process big data. For storage purposes, the programmers will take the help of database vendors of their choice, such as Oracle, IBM, etc. The user interacts with the application, which in turn handles the data storage and analysis.
Limitation
This approach works fine with applications that process less voluminous data that can be accommodated by standard database servers, or up to the limit of the processor that is processing the data. But when it comes to dealing with huge amounts of scalable data, pushing all of it through a single database server becomes a bottleneck.
Google’s Solution
Google solved this problem using an algorithm called MapReduce. This algorithm divides the task into small parts, assigns them to many computers, and collects the results from them, which, when integrated, form the result dataset.
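To make this divide-process-collect idea concrete, here is a minimal, self-contained Java sketch (not Hadoop itself) that counts words in the MapReduce style: the map step processes each line independently, as if on separate machines, and the reduce step merges the per-word counts into the result dataset.

import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class MiniMapReduce {
    public static void main(String[] args) {
        List<String> lines = Arrays.asList("to be or not to be", "to do is to be");

        // "Map": each line is split into words independently (and could be handled
        // by a different machine), emitting one occurrence per word.
        // "Reduce": occurrences of the same word are collected and summed into the
        // final result dataset.
        Map<String, Long> wordCounts = lines.parallelStream()
                .flatMap(line -> Arrays.stream(line.split("\\s+")))
                .collect(Collectors.groupingBy(word -> word, Collectors.counting()));

        // Prints something like {to=4, be=3, or=1, not=1, do=1, is=1} (order may vary).
        System.out.println(wordCounts);
    }
}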
Hadoop
Using the solution provided by Google, Doug Cutting and his team developed an Open-Source
Project called HADOOP.
Hadoop runs applications using the MapReduce algorithm, where the data is processed in parallel across different nodes. In short, Hadoop is used to develop applications that can perform complete statistical analysis on huge amounts of data.
Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models. The Hadoop framework application works in an environment that provides distributed storage and computation across clusters of computers. Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
Hadoop Architecture
The Hadoop framework has two core components: MapReduce, the processing/computation layer, and the Hadoop Distributed File System (HDFS), the storage layer.
MapReduce
MapReduce is a parallel programming model, originally devised at Google, for writing distributed applications that efficiently process large amounts of data on clusters of commodity hardware in a reliable, fault-tolerant manner.
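As a sketch of what a MapReduce program looks like in practice, below is the classic word-count example written against the Hadoop MapReduce Java API (org.apache.hadoop.mapreduce). The input and output paths are supplied as command-line arguments when the job is submitted; for instance, after packaging it into a jar, it could be run with: hadoop jar wordcount.jar WordCount /input /output (the jar name and paths are illustrative).

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map stage: split each input line into words and emit (word, 1) pairs.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    // Reduce stage: sum the counts collected for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory in HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}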
Hadoop Distributed File System (HDFS)
The Hadoop Distributed File System (HDFS) is based on the Google File System (GFS) and provides a distributed file system that is designed to run on commodity hardware. It has many similarities with existing distributed file systems; however, the differences from other distributed file systems are significant. It is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications having large datasets.
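To show how an application talks to HDFS, the following is a minimal sketch using the Hadoop FileSystem Java API that writes a small file and reads it back. The NameNode URI and file path are assumptions made for illustration; a real cluster normally supplies fs.defaultFS through its configuration files (core-site.xml).

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // NameNode address is a placeholder; a real cluster sets this in core-site.xml.
        conf.set("fs.defaultFS", "hdfs://localhost:9000");

        try (FileSystem fs = FileSystem.get(conf)) {
            Path file = new Path("/user/demo/hello.txt"); // illustrative path

            // Write a small file into the distributed file system.
            try (FSDataOutputStream out = fs.create(file, true)) {
                out.write("hello, hdfs".getBytes(StandardCharsets.UTF_8));
            }

            // Read it back through the same API.
            try (BufferedReader in = new BufferedReader(
                    new InputStreamReader(fs.open(file), StandardCharsets.UTF_8))) {
                System.out.println(in.readLine());
            }
        }
    }
}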
Apart from the above-mentioned two core components, Hadoop framework also includes the
following two modules:
• Hadoop Common: These are Java libraries and utilities required by other Hadoop
modules.
• Hadoop YARN: This is a framework for job scheduling and cluster resource
management.
It is quite expensive to build bigger servers with heavy configurations that handle large-scale processing. As an alternative, you can tie together many commodity single-CPU computers as a single functional distributed system; practically, the clustered machines can read the dataset in parallel and provide much higher throughput, and this is cheaper than one high-end server. So the first motivational factor behind using Hadoop is that it runs across clustered, low-cost machines.
Hadoop runs code across a cluster of computers. This process includes the following core tasks
that Hadoop performs:
• Data is initially divided into directories and files. Files are divided into uniform-sized blocks of 128 MB or 64 MB (preferably 128 MB), as illustrated in the sketch after this list.
• These files are then distributed across various cluster nodes for further processing.
• HDFS, being on top of the local file system, supervises the processing.
• Performing the sort that takes place between the map and reduce stages.
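The sketch below (referenced in the first item of the list above) uses the Hadoop FileSystem Java API to print the block size of a file already stored in HDFS and the cluster nodes that hold each of its blocks. The NameNode URI is a placeholder, and the file path is taken from the command line.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockInfo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("fs.defaultFS", "hdfs://localhost:9000"); // placeholder NameNode address

        try (FileSystem fs = FileSystem.get(conf)) {
            // args[0] is the HDFS path of an existing file, e.g. /user/demo/data.txt
            FileStatus status = fs.getFileStatus(new Path(args[0]));
            System.out.println("Block size: " + status.getBlockSize() + " bytes");

            // List where each block of the file is stored across the cluster nodes.
            for (BlockLocation loc : fs.getFileBlockLocations(status, 0, status.getLen())) {
                System.out.println("Offset " + loc.getOffset() + " -> "
                        + String.join(", ", loc.getHosts()));
            }
        }
    }
}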
Reference-
1. Online: https://www.tutorialspoint.com/hadoop/
QUESTIONS: -
Q1. Give an overview of Big Data.
Q2. What are the possible solutions to manage such big data?