
Hadoop


“90% of the world’s data was generated in the last few years.”
• Due to the advent of new technologies, devices, and communication means such as social networking sites, the amount of data produced by mankind is growing rapidly every year.
• The amount of data produced by us from the beginning of time until 2003 was 5 billion gigabytes.
• If you piled up that data in the form of disks, it would fill an entire football field. The same amount was created every two days in 2011, and every ten minutes in 2013.
• This rate is still growing enormously. Though all this information is meaningful and can be useful when processed, much of it is neglected.
What is Big Data?
• Big data is a collection of large datasets that cannot be processed using traditional computing techniques. It is not a single technique or tool; rather, it has become a complete subject, which involves various tools, techniques, and frameworks.
What Comes Under Big Data?

• Big data involves the data produced by different devices and applications. Given below are some of the fields that come under the umbrella of Big Data.
What Comes Under Big Data?

• Black Box Data − The black box is a component of helicopters, airplanes, jets, etc. It captures the voices of the flight crew, recordings of microphones and earphones, and the performance information of the aircraft.
• Social Media Data − Social media such as Facebook and Twitter hold information
and the views posted by millions of people across the globe.
• Stock Exchange Data − Stock exchange data holds information about the ‘buy’ and ‘sell’ decisions that customers make on the shares of different companies.
• Power Grid Data − Power grid data holds information about the power consumed by a particular node with respect to a base station.
• Transport Data − Transport data includes model, capacity, distance and
availability of a vehicle.
• Search Engine Data − Search engines retrieve lots of data from different
databases.
Benefits of Big Data
• Using the information kept in social networks like Facebook, marketing agencies learn about the response to their campaigns, promotions, and other advertising media.
• Using information from social media, such as the preferences and product perceptions of their consumers, product companies and retail organizations plan their production.
• Using data on the previous medical history of patients, hospitals provide better and quicker service.
• Thus Big Data includes huge volume, high velocity, and an extensible variety of data. The data in it will be of three types:
• Structured data − relational data.
• Semi-structured data − XML data.
• Unstructured data − Word, PDF, text, media logs.
Big Data Technologies

• Big data technologies are important in providing more accurate analysis, which may lead to more concrete decision-making resulting in greater operational efficiencies, cost reductions, and reduced risks for the business.
• To harness the power of big data, you would require an infrastructure that
can manage and process huge volumes of structured and unstructured
data in realtime and can protect data privacy and security.
• There are various technologies in the market from different vendors
including Amazon, IBM, Microsoft, etc., to handle big data. While looking
into the technologies that handle big data, we examine the following two
classes of technology −
Operational Big Data
• These include systems like MongoDB that provide operational capabilities for real-time, interactive workloads where data is primarily captured and stored.
• NoSQL Big Data systems are designed to take advantage of new cloud
computing architectures that have emerged over the past decade to
allow massive computations to be run inexpensively and efficiently.
This makes operational big data workloads much easier to manage,
cheaper, and faster to implement.
• Some NoSQL systems can provide insights into patterns and trends
based on real-time data with minimal coding and without the need
for data scientists and additional infrastructure.
Analytical Big Data
• These include systems like Massively Parallel Processing (MPP) database systems and MapReduce that provide analytical capabilities for retrospective, complex analysis that may touch most or all of the data.
• MapReduce provides a new method of analyzing data that is complementary to the capabilities provided by SQL. A system based on MapReduce can be scaled up from single servers to thousands of high- and low-end machines.
• These two classes of technology are complementary and frequently
deployed together.
Operational vs. Analytical Systems

                 Operational         Analytical
Latency          1 ms - 100 ms       1 min - 100 min
Concurrency      1,000 - 100,000     1 - 10
Access Pattern   Writes and Reads    Reads
Queries          Selective           Unselective
Data Scope       Operational         Retrospective
End User         Customer            Data Scientist
Technology       NoSQL               MapReduce, MPP Database
Big Data Challenges
• The major challenges associated with big data are as follows −
• Capturing data
• Curation
• Storage
• Searching
• Sharing
• Transfer
• Analysis
• Presentation
• To meet the above challenges, organizations normally rely on enterprise servers.
Introduction to Hadoop
• Hadoop is an open-source framework that allows users to store and process big data in a distributed environment across clusters of computers using simple programming models.
• It is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
Introduction to Hadoop
• Hadoop is named after a toy elephant belonging to developer Doug Cutting’s son.
• It is derived from a web-indexing software called Nutch. (Nutch is an open-source web search engine that can be used at global, local, and even personal scale.)
• Google released its two white papers, namely the Google File System and MapReduce, in 2003 and 2004.
• In 2011, Yahoo! was running its search engine across 42,000 nodes.
Introduction to Hadoop
• Hadoop is an Apache open-source framework written in Java that allows distributed processing of large datasets across clusters of computers using simple programming models.
• The Hadoop framework application works in an environment that provides distributed storage and computation across clusters of computers.
• Hadoop is designed to scale up from a single server to thousands of machines, each offering local computation and storage.
Hadoop Architecture
• At its core, Hadoop has two major layers, namely:
• the computation layer / data processing engine (MapReduce), and
• the storage layer / storage engine (the Hadoop Distributed File System, HDFS).
MapReduce
• MapReduce is a parallel programming model for writing distributed applications, devised at Google for efficient processing of large amounts of data (multi-terabyte datasets) on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner. The MapReduce program runs on Hadoop, which is an Apache open-source framework.
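The model above can be sketched in plain Java with the canonical word-count example. This is a simulation of the map and reduce phases in a single process, not the Hadoop API; the class and method names are illustrative only.

```java
import java.util.AbstractMap.SimpleEntry;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// A plain-Java sketch of the MapReduce model (no Hadoop dependencies):
// word count, the canonical example.
public class WordCountSketch {

    // Map phase: emit a (word, 1) pair for every word in one line of input.
    static List<SimpleEntry<String, Integer>> map(String line) {
        List<SimpleEntry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.toLowerCase().split("\\s+")) {
            if (!word.isEmpty()) {
                pairs.add(new SimpleEntry<>(word, 1));
            }
        }
        return pairs;
    }

    // Shuffle + reduce phase: group the pairs by key and sum the counts.
    static Map<String, Integer> reduce(List<SimpleEntry<String, Integer>> pairs) {
        Map<String, Integer> counts = new TreeMap<>();
        for (SimpleEntry<String, Integer> pair : pairs) {
            counts.merge(pair.getKey(), pair.getValue(), Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] lines = { "big data big clusters", "big data" };
        List<SimpleEntry<String, Integer>> intermediate = new ArrayList<>();
        for (String line : lines) {
            intermediate.addAll(map(line)); // in Hadoop, mappers run in parallel across nodes
        }
        System.out.println(reduce(intermediate)); // {big=3, clusters=1, data=2}
    }
}
```

In real Hadoop, the map calls run in parallel on the nodes holding the input blocks, and the framework performs the shuffle before the reducers run; the data flow, however, is exactly the one sketched here.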
Hadoop Distributed File System
• The Hadoop Distributed File System (HDFS) is based on the Google
File System (GFS) and provides a distributed file system that is
designed to run on commodity hardware.
• It has many similarities with existing distributed file systems; however, the differences are significant. HDFS is highly fault-tolerant and is designed to be deployed on low-cost hardware. It provides high-throughput access to application data and is suitable for applications with large datasets.
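To illustrate how HDFS lays data out, the sketch below (plain Java, not the Hadoop API) computes how many fixed-size blocks a file occupies and the raw storage it consumes once each block is replicated. The 128 MB block size and replication factor of 3 used here are the common HDFS defaults; both are configurable in a real cluster.

```java
// A conceptual sketch of HDFS block storage: a file is split into
// fixed-size blocks, and each block is replicated across DataNodes.
public class HdfsBlockSketch {

    // Common HDFS defaults (both configurable in a real deployment).
    static final long BLOCK_SIZE = 128L * 1024 * 1024; // 128 MB, in bytes
    static final int REPLICATION = 3;

    // Number of blocks needed to store a file of the given size (ceiling division).
    static long blockCount(long fileSizeBytes) {
        return (fileSizeBytes + BLOCK_SIZE - 1) / BLOCK_SIZE;
    }

    // Raw cluster storage consumed once every block is replicated.
    // (Simplified: the last block of a real file may be smaller than BLOCK_SIZE.)
    static long rawStorage(long fileSizeBytes) {
        return blockCount(fileSizeBytes) * BLOCK_SIZE * REPLICATION;
    }

    public static void main(String[] args) {
        long oneGb = 1024L * 1024 * 1024; // a 1 GB file
        System.out.println(blockCount(oneGb));                   // 8 blocks of 128 MB
        System.out.println(rawStorage(oneGb) / (1024 * 1024));   // 3072 MB raw
    }
}
```

Replication is what lets HDFS survive the loss of a DataNode: as long as one replica of each block remains, the NameNode can re-replicate it elsewhere and the file stays readable.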
Hadoop Distributed File System
• Apart from the above-mentioned two core components, Hadoop
framework also includes the following two modules −
• Hadoop Common − These are Java libraries and utilities required by
other Hadoop modules.
• Hadoop YARN − This is a framework for job scheduling and cluster
resource management.
Advantages of Hadoop
• The Hadoop framework allows the user to quickly write and test distributed systems. It is efficient, and it automatically distributes the data and work across the machines, in turn utilizing the underlying parallelism of the CPU cores.

• Hadoop does not rely on hardware to provide fault tolerance and high availability (FTHA); rather, the Hadoop library itself has been designed to detect and handle failures at the application layer.
Advantages of Hadoop
• Servers can be added or removed from the cluster dynamically and
Hadoop continues to operate without interruption.

• Another big advantage of Hadoop is that, apart from being open source, it is compatible with all platforms since it is Java based.
