The document provides an overview of Hadoop and the Hadoop ecosystem. It discusses the history of Hadoop and how big data is defined in terms of volume, velocity, variety and veracity. It then explains what Hadoop is, the core components HDFS and MapReduce, how Hadoop is used for distributed processing of large datasets, and how Hadoop compares to a traditional RDBMS. The document also outlines other tools in the Hadoop ecosystem, such as Pig, Hive and HBase, and gives a brief demo.
A Brief History in Time
In pioneer days they used oxen for heavy pulling, and when one ox couldn’t
budge a log, they didn’t try to grow a larger ox. We shouldn’t be trying for
bigger computers, but more systems of computers.
—Grace Hopper, American Computer Scientist
Challenges in managing Big Data – the four Vs
• Volume – Big Data comes at large scale, in terabytes and even petabytes: records, transactions, tables, files.
• Velocity – Data flows continuously; it is time-sensitive and streaming: batch, real time, streams, historic.
• Variety – Big Data extends beyond structured data to semi-structured and unstructured data of every variety: text, logs, XML, audio, video, streams, flat files.
• Veracity – The quality, consistency, reliability and provenance of the data: good, bad, undefined, inconsistent, incomplete.
To overcome these Big Data challenges, Hadoop evolved:
• Cost effective – commodity hardware
• Big cluster – (1,000 nodes) provides both storage and processing
• Parallel processing – MapReduce
• Big storage – storage per node × number of nodes / replication factor (RF); see the worked example after this list
• Failover mechanism – automatic failover
• Data distribution
• Moving code to the data
• Heterogeneous hardware (IBM, HP, AIX, Oracle machines of any memory and CPU configuration)
• Scalable
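For example (illustrative numbers, not from the slides): a cluster of 1,000 nodes, each with 10 TB of disk and a replication factor of 3, gives roughly (1,000 × 10 TB) / 3 ≈ 3.3 PB of usable storage.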
Stop and Ponder
• Is Hadoop an alternative to RDBMS?
• Hadoop is not replacing the traditional data systems used for building analytic applications – the RDBMS, EDW and MPP systems – but rather complements them, and works fine together with an RDBMS.
• Hadoop is being used to distill large quantities of data into something more manageable.
Stop and Ponder
• But don't we know Coherence to be distributed too? Why Hadoop?
• Coherence is the market-leading in-memory data grid. Hadoop works fine for large processing operations – many terabytes of data that can be processed in a batch-like way – but for use cases where the processing requirements are more real-time and the data volumes are smaller, Coherence is a better choice than HDFS for storing the data.
Hadoop vs. RDBMS
            RDBMS                   MapReduce
Data size   Gigabytes               Petabytes
Access      Interactive and batch   Batch
Structure   Fixed schema            Unstructured schema
Language    SQL                     Procedural (Java, C++, Ruby, etc.)
Integrity   High                    Low
Scaling     Nonlinear               Linear
Updates     Read and write          Write once, read many times
Latency     Low                     High
Hadoop Architecture
• Hadoop Distributed File System (HDFS™): A distributed file system that
provides high-throughput access to application data.
• Hadoop MapReduce: A software framework for distributed processing of
large data sets on compute clusters.
(Diagram: Hadoop comprises HDFS for storage plus MapReduce for processing.)
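To make the HDFS part concrete, here is a minimal Java sketch (not from the slides) that reads a file from HDFS with the standard FileSystem API; the path /demo/temperatures.txt is a hypothetical example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.IOUtils;

    public class HdfsCat {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();            // picks up core-site.xml / hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);                // connects to the configured default file system (HDFS)
            Path file = new Path("/demo/temperatures.txt");      // hypothetical input file
            try (java.io.InputStream in = fs.open(file)) {       // fs.open returns an FSDataInputStream
                IOUtils.copyBytes(in, System.out, 4096, false);  // stream the file contents to stdout
            }
        }
    }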
Let’s do it again…
• Map/Reduce has three stages: Map, Shuffle and Reduce.
• The Shuffle part is done automatically by Hadoop; you just need to implement the Map and Reduce parts.
• You get the input data as <Key, Value> pairs for the Map part.
• In this example, the Key is the city name, and the Value is the set of attributes: the state and the city's yearly average temperature.
• Since you want to regroup the temperatures by state, you get rid of the city name: the state becomes the Key and the temperature becomes the Value (see the Mapper sketch below).
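A minimal sketch of such a Mapper with the standard Hadoop Java API (assuming, purely for illustration, that each input line has the form "City,State,AvgTemp"):

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    // Emits (state, cityYearlyAvgTemp) for every input record "City,State,AvgTemp".
    public class StateTempMapper extends Mapper<LongWritable, Text, Text, DoubleWritable> {
        private final Text state = new Text();
        private final DoubleWritable temp = new DoubleWritable();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");       // assumed format: City,State,AvgTemp
            if (fields.length == 3) {
                state.set(fields[1].trim());                      // the state becomes the output Key
                temp.set(Double.parseDouble(fields[2].trim()));   // the temperature becomes the output Value
                context.write(state, temp);                       // the city name is dropped
            }
        }
    }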
Shuffle
• Now the shuffle task runs on the output of the Map task. It groups all the values by Key, so for each Key you get a List<Value> (illustrated below).
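As a small illustration (hypothetical values), map output such as (NY, 22.0), (CA, 30.0), (NY, 25.0) is grouped by the shuffle into NY → [22.0, 25.0] and CA → [30.0], which is exactly the form the Reduce task receives.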
Reduce
• The Reduce task is the one that applies the logic to the data; in our case that is the calculation of the state's yearly average temperature (see the Reducer sketch below).
• And that is what we get as the final output.
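A matching Reducer sketch (again an assumption of how the slides' example could be written, not the original code), averaging the grouped temperatures per state:

    import java.io.IOException;
    import org.apache.hadoop.io.DoubleWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    // Receives (state, [temp, temp, ...]) from the shuffle and emits (state, yearly average temperature).
    public class StateAvgTempReducer extends Reducer<Text, DoubleWritable, Text, DoubleWritable> {
        private final DoubleWritable average = new DoubleWritable();

        @Override
        protected void reduce(Text state, Iterable<DoubleWritable> temps, Context context)
                throws IOException, InterruptedException {
            double sum = 0.0;
            int count = 0;
            for (DoubleWritable t : temps) {    // all temperatures shuffled to this state
                sum += t.get();
                count++;
            }
            if (count > 0) {
                average.set(sum / count);       // state yearly average temperature
                context.write(state, average);  // final output record
            }
        }
    }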