MapReduce
by examples
The code is available on:
https://github.com/andreaiacono/MapReduce
Take a look at my blog:
https://andreaiacono.blogspot.com/
MapReduce by examples
MapReduce is a programming model for
processing large data sets with a parallel,
distributed algorithm on a cluster
[src: http://en.wikipedia.org/wiki/MapReduce]
What is MapReduce?
Originally published in 2004 by Google
engineers Jeffrey Dean and Sanjay Ghemawat
MapReduce by examples
Hadoop is the open-source implementation of
the model by the Apache Software Foundation
The main project is composed of:
- HDFS
- YARN
- MapReduce
Its ecosystem includes:
- Pig
- HBase
- Hive
- Impala
- Mahout
- a lot of other tools
MapReduce by examples
Hadoop 2.x
- YARN: the resource manager, now called YARN, is
detached from the MapReduce framework
- Java packages are under org.apache.hadoop.mapreduce.*
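All the Java code samples in the following slides use classes from these packages; a typical import block for them (a sketch, not shown in the original slides) looks like:

import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;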

MapReduce by examples
MapReduce inspiration
The name MapReduce comes from functional programming:
- map is the name of a higher-order function that applies a given function
to each element of a list. Sample in Scala:
val numbers = List(1,2,3,4,5)
numbers.map(x => x * x) == List(1,4,9,16,25)
- reduce is the name of a higher-order function that analyzes a recursive
data structure and recombines, through use of a given combining
operation, the results of recursively processing its constituent parts,
building up a return value. Sample in Scala:
val numbers = List(1,2,3,4,5)
numbers.reduce(_ + _) == 15
MapReduce takes an input, splits it into smaller parts, executes the code of
the mapper on every part, and then gives all the results to one or more reducers
that merge them into one final result.
src: http://en.wikipedia.org/wiki/Map_(higher-order_function)
http://en.wikipedia.org/wiki/Fold_(higher-order_function)
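The same two higher-order functions exist in Java 8 streams; a minimal sketch of the Scala samples above, in Java (the language used for the rest of the deck's code):

import java.util.Arrays;
import java.util.List;
import java.util.stream.Collectors;

public class FunctionalSample {
    public static void main(String[] args) {
        List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);
        // map: applies the squaring function to every element -> [1, 4, 9, 16, 25]
        List<Integer> squares = numbers.stream().map(x -> x * x).collect(Collectors.toList());
        // reduce: combines all elements with '+' -> 15
        int sum = numbers.stream().reduce(0, Integer::sum);
        System.out.println(squares + " " + sum);
    }
}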
MapReduce by examples
Overall view
MapReduce by examples
How does Hadoop work?
Init
- Hadoop divides the input file stored on HDFS into splits (typically of the size
of an HDFS block) and assigns every split to a different mapper, trying to
assign every split to the mapper where the split physically resides
Mapper
- locally, Hadoop reads the split of the mapper line by line
- locally, Hadoop calls the map() method of the mapper for every line, passing
it as the key/value parameters
- the mapper computes its application logic and emits other key/value pairs
Shuffle and sort
- locally, Hadoop's partitioner divides the emitted output of the mapper into
partitions, each of which is sent to a different reducer (a sketch of a
partitioner follows this list)
- locally, Hadoop collects all the different partitions received from the
mappers and sorts them by key
Reducer
- locally, Hadoop reads the aggregated partitions line by line
- locally, Hadoop calls the reduce() method of the reducer for every line of
the input
- the reducer computes its application logic and emits other key/value pairs
- locally, Hadoop writes the emitted key/value pairs to HDFS
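As an illustration of the partitioning step, a minimal custom Partitioner might look like this (a sketch: by default Hadoop uses HashPartitioner, which does essentially the same thing):

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

// Decides which reducer receives a given key: every pair with the same key
// gets the same partition number, so it ends up on the same reducer.
public class WordPartitioner extends Partitioner<Text, IntWritable> {
    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        return (key.hashCode() & Integer.MAX_VALUE) % numPartitions;
    }
}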
MapReduce by examples
Simplified flow (for developers)

MapReduce by examples
Serializable vs Writable
- Serializable stores the class name and the object representation to the
stream; other instances of the class are referred to by a handle to the
class name: this approach is not usable with random access
- For the same reason, the sorting needed for the shuffle and sort phase
cannot be used with Serializable
- The deserialization process creates a new instance of the object, while
Hadoop needs to reuse objects to minimize computation
- Hadoop introduces the two interfaces Writable and WritableComparable
that solve these problems
MapReduce by examples
Writable wrappers
Java primitive    Writable implementation
boolean           BooleanWritable
byte              ByteWritable
short             ShortWritable
int               IntWritable, VIntWritable
float             FloatWritable
long              LongWritable, VLongWritable
double            DoubleWritable

Java class        Writable implementation
String            Text
byte[]            BytesWritable
Object            ObjectWritable
null              NullWritable

Java collection   Writable implementation
array             ArrayWritable, ArrayPrimitiveWritable, TwoDArrayWritable
Map               MapWritable
SortedMap         SortedMapWritable
enum              EnumSetWritable
MapReduce by examples
Implementing Writable: the SumCount class
public class SumCount implements WritableComparable<SumCount> {
DoubleWritable sum;
IntWritable count;
public SumCount() {
set(new DoubleWritable(0), new IntWritable(0));
}
public SumCount(Double sum, Integer count) {
set(new DoubleWritable(sum), new IntWritable(count));
}
@Override
public void write(DataOutput dataOutput) throws IOException {
sum.write(dataOutput);
count.write(dataOutput);
}
@Override
public void readFields(DataInput dataInput) throws IOException {
sum.readFields(dataInput);
count.readFields(dataInput);
}
// getters, setters and Comparable overridden methods are omitted
}
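For reference, a sketch of how the omitted methods might look, inferred from how SumCount is used in the Mean example later (the actual code is in the linked repository):

public void set(DoubleWritable sum, IntWritable count) {
    this.sum = sum;
    this.count = count;
}

public DoubleWritable getSum() { return sum; }

public IntWritable getCount() { return count; }

// merges another SumCount into this one (used by the Mean reducer)
public void addSumCount(SumCount other) {
    set(new DoubleWritable(sum.get() + other.getSum().get()),
        new IntWritable(count.get() + other.getCount().get()));
}

@Override
public int compareTo(SumCount other) {
    // orders by sum first, then by count
    int result = sum.compareTo(other.getSum());
    return (result != 0) ? result : count.compareTo(other.getCount());
}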
MapReduce by examples
Glossary
- Job: the whole process to execute: the input data, the mapper and
reducer execution and the output data
- Task: every job is divided among the several mappers and reducers; a
task is the portion of the job that goes to a single mapper or reducer
- Split: the input file is split into several splits (the suggested size
is the HDFS block size, 64 MB)
- Record: the split is read by the mapper, by default one line at a time:
each line is a record. Using a class extending FileInputFormat, a record
can be composed of more than one line
- Partition: the set of all the key-value pairs that will be sent to a
single reducer. The default partitioner uses a hash function on the key
to determine to which reducer to send the data

MapReduce by examples
Let's start coding!
MapReduce by examples
WordCount
(the Hello World! for MapReduce, available in Hadoop sources)
Input Data:
The text of the book ”Flatland”
By Edwin Abbott.
Source:
http://www.gutenberg.org/cache/epub/201/pg201.txt
We want to count the occurrences of every word
of a text file
MapReduce by examples
public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
StringTokenizer itr = new StringTokenizer(value.toString());
while (itr.hasMoreTokens()) {
word.set(itr.nextToken().trim());
context.write(word, one);
}
}
}
WordCount mapper
MapReduce by examples
public static class IntSumReducer extends Reducer<Text,IntWritable,Text,IntWritable>{
private IntWritable result = new IntWritable();
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
result.set(sum);
context.write(key, result);
}
}
WordCount reducer
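The WordCount driver is not shown in the deck; a minimal sketch of how the mapper and reducer above are typically wired together in a main() (argument handling is simplified, following the standard Hadoop WordCount example):

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    // key/value types emitted by the reducer (and, here, also by the mapper)
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}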

MapReduce by examples
WordCount
a 936
ab 6
abbot 3
abbott 2
abbreviated 1
abide 1
ability 1
able 9
ablest 2
abolished 1
abolition 1
about 40
above 22
abroad 1
abrogation 1
abrupt 1
abruptly 1
absence 4
absent 1
absolute 2
...
Results:
MapReduce by examples
MapReduce testing and debugging
- MRUnit is a testing framework based on JUnit for unit
testing mappers, reducers, combiners (we'll see later what
they are) and the combination of the three
- Mocking frameworks can be used to mock Context or
other Hadoop objects
- LocalJobRunner is a class included in Hadoop that lets us
run a complete Hadoop environment locally, in a single
JVM, that can be attached to a debugger; LocalJobRunner
can run at most one reducer (a configuration sketch follows
this list)
- Hadoop allows the creation of in-process mini clusters
programmatically thanks to the MiniDFSCluster and
MiniMRCluster testing classes; debugging is more difficult
than with LocalJobRunner because it is multi-threaded and
spread over different VMs. Mini clusters are used for testing
the Hadoop sources.
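A sketch of how LocalJobRunner can be enabled through configuration (assuming the standard Hadoop 2.x property names):

Configuration conf = new Configuration();
// run the whole job in a single local JVM (LocalJobRunner) so that
// mappers and reducers can be stepped through with a debugger
conf.set("mapreduce.framework.name", "local");
// use the local filesystem instead of HDFS for input and output
conf.set("fs.defaultFS", "file:///");
Job job = Job.getInstance(conf, "wordcount-debug");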
MapReduce by examples
MRUnit test for WordCount
@Test
public void testMapper() throws Exception {
new MapDriver<Object, Text, Text, IntWritable>()
.withMapper(new WordCount.TokenizerMapper())
.withInput(NullWritable.get(), new Text("foo bar foo"))
.withOutput(new Text("foo"), new IntWritable(1))
.withOutput(new Text("bar"), new IntWritable(1))
.withOutput(new Text("foo"), new IntWritable(1))
.runTest();
}
@Test
public void testReducer() throws Exception {
List<IntWritable> fooValues = new ArrayList<>();
fooValues.add(new IntWritable(1));
fooValues.add(new IntWritable(1));
List<IntWritable> barValue = new ArrayList<>();
barValue.add(new IntWritable(1));
new ReduceDriver<Text, IntWritable, Text, IntWritable>()
.withReducer(new WordCount.IntSumReducer())
.withInput(new Text("foo"), fooValues)
.withInput(new Text("bar"), barValue)
.withOutput(new Text("foo"), new IntWritable(2))
.withOutput(new Text("bar"), new IntWritable(1))
.runTest();
}
MapReduce by examples
@Test
public void testMapReduce() throws Exception {
new MapReduceDriver<Object, Text, Text, IntWritable, Text, IntWritable>()
.withMapper(new WordCount.TokenizerMapper())
.withInput(NullWritable.get(), new Text("foo bar foo"))
.withReducer(new WordCount.IntSumReducer())
.withOutput(new Text("bar"), new IntWritable(1))
.withOutput(new Text("foo"), new IntWritable(2))
.runTest();
}
MRUnit test for WordCount

MapReduce by examples
TopN
Input Data:
The text of the book ”Flatland”
By E. Abbott.
Source:
http://www.gutenberg.org/cache/epub/201/pg201.txt
We want to find the top-n used words of a text file
MapReduce by examples
public static class TopNMapper extends Mapper<Object, Text, Text, IntWritable> {
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
private String tokens = "[_|$#<>^=\\[\\]*/,;.\\-:()?!\"']";
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
String cleanLine = value.toString().toLowerCase().replaceAll(tokens, " ");
StringTokenizer itr = new StringTokenizer(cleanLine);
while (itr.hasMoreTokens()) {
word.set(itr.nextToken().trim());
context.write(word, one);
}
}
}
TopN mapper
MapReduce by examples
public static class TopNReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
private Map<Text, IntWritable> countMap = new HashMap<>();
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
countMap.put(new Text(key), new IntWritable(sum));
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
Map<Text, IntWritable> sortedMap = sortByValues(countMap);
int counter = 0;
for (Text key: sortedMap.keySet()) {
if (counter ++ == 20) {
break;
}
context.write(key, sortedMap.get(key));
}
}
}
TopN reducer
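The sortByValues() helper called in cleanup() is not shown on the slide; a sketch of what it is assumed to do (order the entries by descending count, so that the first 20 keys are the most frequent words):

private static <K, V extends Comparable<V>> Map<K, V> sortByValues(Map<K, V> map) {
    List<Map.Entry<K, V>> entries = new ArrayList<>(map.entrySet());
    // highest counts first
    entries.sort((e1, e2) -> e2.getValue().compareTo(e1.getValue()));
    Map<K, V> sorted = new LinkedHashMap<>();   // keeps the sorted iteration order
    for (Map.Entry<K, V> entry : entries) {
        sorted.put(entry.getKey(), entry.getValue());
    }
    return sorted;
}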
MapReduce by examples
TopN
the 2286
of 1634
and 1098
to 1088
a 936
i 735
in 713
that 499
is 429
you 419
my 334
it 330
as 322
by 317
not 317
or 299
but 279
with 273
for 267
be 252
...
Results:

MapReduce by examples
TopN
In the shuffle and sort phase, the partitioner will send
every single word (the key) with the value "1" to the
reducers.
All these network transmissions can be minimized if
we locally reduce the data that the mapper emits.
This is achieved with a Combiner.
MapReduce by examples
public static class Combiner extends Reducer<Text, IntWritable, Text, IntWritable> {
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
context.write(key, new IntWritable(sum));
}
}
TopN combiner
MapReduce by examples
TopN - Hadoop output
Without combiner
Map input records=4239
Map output records=37817
Map output bytes=359621
Input split bytes=118
Combine input records=0
Combine output records=0
Reduce input groups=4987
Reduce shuffle bytes=435261
Reduce input records=37817
Reduce output records=20
With combiner
Map input records=4239
Map output records=37817
Map output bytes=359621
Input split bytes=116
Combine input records=37817
Combine output records=20
Reduce input groups=20
Reduce shuffle bytes=194
Reduce input records=20
Reduce output records=20
MapReduce by examples
Combiners
If the function computed is
- commutative [a + b = b + a]
- associative [a + (b + c) = (a + b) + c]
we can reuse the reducer as a combiner!
Max function works:
max (max(a,b), max(c,d,e)) = max (a,b,c,d,e)
Mean function does not work:
mean(mean(a,b), mean(c,d,e)) != mean(a,b,c,d,e)
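For a commutative and associative function such as WordCount's sum, the reducer can be registered as the combiner with one line in the driver; a sketch:

job.setMapperClass(TokenizerMapper.class);
job.setCombinerClass(IntSumReducer.class); // the reducer class doubles as the combiner
job.setReducerClass(IntSumReducer.class);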

MapReduce by examples
Combiners
Advantages of using combiners
- Network transmissions are minimized
Disadvantages of using combiners
- Hadoop does not guarantee the execution of a combiner:
it can be executed 0, 1 or multiple times on the same input
- Key-value pairs emitted by the mapper are stored in the local
filesystem, and the execution of the combiner could cause
expensive I/O operations
MapReduce by examples
private Map<String, Integer> countMap = new HashMap<>();
private String tokens = "[_|$#<>^=\\[\\]*/,;.\\-:()?!\"']";
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
String cleanLine = value.toString().toLowerCase().replaceAll(tokens, " ");
StringTokenizer itr = new StringTokenizer(cleanLine);
while (itr.hasMoreTokens()) {
String word = itr.nextToken().trim();
if (countMap.containsKey(word)) {
countMap.put(word, countMap.get(word)+1);
}
else {
countMap.put(word, 1);
}
}
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
for (String key: countMap.keySet()) {
context.write(new Text(key), new IntWritable(countMap.get(key)));
}
}
TopN in-mapper combiner
MapReduce by examples
private Map<Text, IntWritable> countMap = new HashMap<>();
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int sum = 0;
for (IntWritable val : values) {
sum += val.get();
}
countMap.put(new Text(key), new IntWritable(sum));
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
Map<Text, IntWritable> sortedMap = sortByValues(countMap);
int counter = 0;
for (Text key: sortedMap.keySet()) {
if (counter ++ == 20) {
break;
}
context.write(key, sortedMap.get(key));
}
}
TopN in-mapper reducer
MapReduce by examples
Combiners - output
Without combiner
Map input records=4239
Map output records=37817
Map output bytes=359621
Input split bytes=118
Combine input records=0
Combine output records=0
Reduce input groups=4987
Reduce shuffle bytes=435261
Reduce input records=37817
Reduce output records=20
With combiner
Map input records=4239
Map output records=37817
Map output bytes=359621
Input split bytes=116
Combine input records=37817
Combine output records=20
Reduce input groups=20
Reduce shuffle bytes=194
Reduce input records=20
Reduce output records=20
With in-mapper
Map input records=4239
Map output records=4987
Map output bytes=61522
Input split bytes=118
Combine input records=0
Combine output records=0
Reduce input groups=4987
Reduce shuffle bytes=71502
Reduce input records=4987
Reduce output records=20
With in-mapper and combiner
Map input records=4239
Map output records=4987
Map output bytes=61522
Input split bytes=116
Combine input records=4987
Combine output records=20
Reduce input groups=20
Reduce shuffle bytes=194
Reduce input records=20
Reduce output records=20

MapReduce by examples
Mean
Input Data:
Temperature in Milan
(DDMMYYYY, MIN, MAX)
01012000, -4.0, 5.0
02012000, -5.0, 5.1
03012000, -5.0, 7.7
…
29122013, 3.0, 9.0
30122013, 0.0, 9.8
31122013, 0.0, 9.0
We want to find the mean max temperature for every month
Data source:
http://archivio-meteo.distile.it/tabelle-dati-archivio-meteo/
MapReduce by examples
Mean mapper
private Map<Text, List<Double>> maxMap = new HashMap<>();
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
String[] values = value.toString().split((","));
if (values.length != 3) return;
String date = values[DATE];
Text month = new Text(date.substring(2));
Double max = Double.parseDouble(values[MAX]);
if (!maxMap.containsKey(month)) {
maxMap.put(month, new ArrayList<Double>());
}
maxMap.get(month).add(max);
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
for (Text month: maxMap.keySet()) {
List<Double> temperatures = maxMap.get(month);
Double sum = 0d;
for (Double max: temperatures) {
sum += max;
}
context.write(month, new DoubleWritable(sum));
}
}
MapReduce by examples
Is this correct?
Mean mapper
private Map<Text, List<Double>> maxMap = new HashMap<>();
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
String[] values = value.toString().split((","));
if (values.length != 3) return;
String date = values[DATE];
Text month = new Text(date.substring(2));
Double max = Double.parseDouble(values[MAX]);
if (!maxMap.containsKey(month)) {
maxMap.put(month, new ArrayList<Double>());
}
maxMap.get(month).add(max);
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
for (Text month: maxMap.keySet()) {
List<Double> temperatures = maxMap.get(month);
Double sum = 0d;
for (Double max: temperatures) {
sum += max;
}
context.write(month, new DoubleWritable(sum));
}
}
MapReduce by examples
Mean
Mapper #1: lines 1, 2
Mapper #2: lines 3, 4, 5
Mapper#1: mean = (10.0 + 20.0) / 2 = 15.0
Mapper#2: mean = (2.0 + 4.0 + 3.0) / 3 = 3.0
Reducer mean = (15.0 + 3.0) / 2 = 9.0
But the correct mean is:
(10.0 + 20.0 + 2.0 + 4.0 + 3.0) / 5 = 7.8
Sample input data:
01012000, 0.0, 10.0
02012000, 0.0, 20.0
03012000, 0.0, 2.0
04012000, 0.0, 4.0
05012000, 0.0, 3.0
Not correct!

MapReduce by examples
private Map<Text, List<Double>> maxMap = new HashMap<>();
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
String[] values = value.toString().split((","));
if (values.length != 3) return;
String date = values[DATE];
Text month = new Text(date.substring(2));
Double max = Double.parseDouble(values[MAX]);
if (!maxMap.containsKey(month)) {
maxMap.put(month, new ArrayList<Double>());
}
maxMap.get(month).add(max);
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
for (Text month: maxMap.keySet()) {
List<Double> temperatures = maxMap.get(month);
Double sum = 0d;
for (Double max: temperatures) sum += max;
context.write(month, new SumCount(sum, temperatures.size()));
}
}
Mean mapper
This is correct!
MapReduce by examples
private Map<Text, SumCount> sumCountMap = new HashMap<>();
@Override
public void reduce(Text key, Iterable<SumCount> values, Context context)
throws IOException, InterruptedException {
SumCount totalSumCount = new SumCount();
for (SumCount sumCount : values) {
totalSumCount.addSumCount(sumCount);
}
sumCountMap.put(new Text(key), totalSumCount);
}
@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
for (Text month: sumCountMap.keySet()) {
double sum = sumCountMap.get(month).getSum().get();
int count = sumCountMap.get(month).getCount().get();
context.write(month, new DoubleWritable(sum/count));
}
}
Mean reducer
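Since the mapper now emits SumCount values while the reducer emits DoubleWritable values, the driver has to declare the map output types separately from the job output types; a sketch (MeanMapper and MeanReducer are illustrative names):

job.setMapperClass(MeanMapper.class);
job.setReducerClass(MeanReducer.class);
// types emitted by the mapper: (Text, SumCount)
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(SumCount.class);
// types emitted by the reducer: (Text, DoubleWritable)
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(DoubleWritable.class);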
MapReduce by examples
Mean
022012 7.230769230769231
022013 7.2
022010 7.851851851851852
022011 9.785714285714286
032013 10.741935483870968
032010 13.133333333333333
032012 18.548387096774192
032011 13.741935483870968
022003 9.278571428571428
022004 10.41034482758621
022005 9.146428571428572
022006 8.903571428571428
022000 12.344444444444441
022001 12.164285714285715
022002 11.839285714285717
...
Results:
MapReduce by examples
Mean
Result:
R code to plot data:
library(zoo); library(ggplot2)
temp <- read.csv(file="results.txt", sep="\t", header=0)
names(temp) <- c("date","temperature")
ym <- as.yearmon(temp$date, format = "%m-%Y");
year <- format(ym, "%Y")
month <- format(ym, "%m")
ggplot(temp, aes(x=month, y=temperature, group=year)) + geom_line(aes(colour = year))

MapReduce by examples
Join
Input Data - Users file:
"user_ptr_id" "reputation" "gold" "silver" "bronze"
"100006402" "18" "0" "0" "0"
"100022094" "6354" "4" "12" "50"
"100018705" "76" "0" "3" "4"
…
Input Data - Posts file:
"id" "title" "tagnames" "author_id" "body" "node_type" "parent_id" "abs_parent_id" "added_at" "score" …
"5339" "Whether pdf of Unit and Homework is available?" "cs101 pdf" "100000458" "" "question" "N" "N"
"2012-02-25 08:09:06.787181+00" "1"
"2312" "Feedback on Audio Quality" "cs101 production audio" "100005361" "<p>We are looking for feedback on
the audio in our videos. Tell us what you think and try to be as <em>specific</em> as possible.</p>" "question"
"N" "N" "2012-02-23 00:28:02.321344+00" "2"
"2741" "where is the sample page for homework?" "cs101 missing_info homework" "100001178" "<p>I am sorry if I
am being a nob ... but I do not seem to find any information regarding the sample page reffered to on the 1
question of homework 1." "question" "N" "N" "2012-02-23 09:15:02.270861+00" "0"
...
We want to combine information from the users file with
information from the posts file (a join)
Data source:
http://content.udacity-data.com/course/hadoop/forum_data.tar.gz
MapReduce by examples
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
FileSplit fileSplit = (FileSplit) context.getInputSplit();
String filename = fileSplit.getPath().getName();
String[] fields = value.toString().split("\t");
if (filename.equals("forum_nodes_no_lf.tsv")) {
if (fields.length > 5) {
String authorId = fields[3].substring(1, fields[3].length() - 1);
String type = fields[5].substring(1, fields[5].length() - 1);
if (type.equals("question")) {
context.write(new Text(authorId), one);
}
}
}
else {
String authorId = fields[0].substring(1, fields[0].length() - 1);
String reputation = fields[1].substring(1, fields[1].length() - 1);
try {
int reputationValue = Integer.parseInt(reputation) + 2;
context.write(new Text(authorId),new IntWritable(reputationValue));
}
catch (NumberFormatException nfe) {
// just skips this record
}
}
}
Join mapper
MapReduce by examples
@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context)
throws IOException, InterruptedException {
int postsNumber = 0;
int reputation = 0;
String authorId = key.toString();
for (IntWritable value : values) {
int intValue = value.get();
if (intValue == 1) {
postsNumber ++;
}
else {
reputation = intValue -2;
}
}
context.write(new Text(authorId), new Text(reputation + "\t" + postsNumber));
}
Join reducer
MapReduce by examples
Join
USER_ID REPUTATION POSTS
00081537 1019 3
100011949 12 1
100105405 36 1
100000628 60 2
100011948 231 1
100000629 2090 1
100000623 1 2
100011945 457 4
100000624 167 1
100011944 114 3
100000625 1 1
100000626 93 1
100011942 11 1
100000620 1 1
100011940 35 1
100000621 2 1
100080016 11 2
100080017 53 1
100081549 1 1
...
Results:

MapReduce by examples
Join
Result:
R code to plot data:
users <- read.csv(file="part-r-00000", sep='\t', header=0)
users$V2[which(users$V2 > 10000,)] <- 0
plot(users$V2, users$V3, xlab="Reputation", ylab="Number of posts", pch=19, cex=0.4)
MapReduce by examples
K-means
Input Data:
A random set of points
2.2705 0.9178
1.8600 2.1002
2.0915 1.3679
-0.1612 0.8481
-1.2006 -1.0423
1.0622 0.3034
0.5138 2.5542
...
We want to aggregate 2D points into clusters using
the K-means algorithm
R code to generate dataset:
N <- 100
x <- rnorm(N)+1; y <- rnorm(N)+1; dat <- data.frame(x, y)
x <- rnorm(N)+5; y <- rnorm(N)+1; dat <- rbind(dat, data.frame(x, y))
x <- rnorm(N)+1; y <- rnorm(N)+5; dat <- rbind(dat, data.frame(x, y))
K-means algorithm
MapReduce by examples
K-means mapper
@Override
protected void setup(Context context) throws IOException, InterruptedException {
URI[] cacheFiles = context.getCacheFiles();
centroids = Utils.readCentroids(cacheFiles[0].toString());
}
@Override
public void map(Object key, Text value, Context context)
throws IOException, InterruptedException {
String[] xy = value.toString().split(" ");
double x = Double.parseDouble(xy[0]);
double y = Double.parseDouble(xy[1]);
int index = 0;
double minDistance = Double.MAX_VALUE;
for (int j = 0; j < centroids.size(); j++) {
double cx = centroids.get(j)[0];
double cy = centroids.get(j)[1];
double distance = Utils.euclideanDistance(cx, cy, x, y);
if (distance < minDistance) {
index = j;
minDistance = distance;
}
}
context.write(new IntWritable(index), value);
}
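The Utils helpers are not shown in the deck; as an example, euclideanDistance() is assumed to be the plain 2D distance used above to find the closest centroid (readCentroids/writeCentroids just serialize and deserialize the centroid list to the cache file):

public static double euclideanDistance(double x1, double y1, double x2, double y2) {
    // straight-line distance between (x1, y1) and (x2, y2)
    return Math.sqrt(Math.pow(x1 - x2, 2) + Math.pow(y1 - y2, 2));
}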

MapReduce by examples
K-means reducer
public class KMeansReducer extends Reducer<IntWritable, Text, Text, IntWritable> {
@Override
protected void reduce(IntWritable key, Iterable<Text> values, Context context)
throws IOException, InterruptedException {
Double mx = 0d;
Double my = 0d;
int counter = 0;
for (Text value: values) {
String[] temp = value.toString().split(" ");
mx += Double.parseDouble(temp[0]);
my += Double.parseDouble(temp[1]);
counter ++;
}
mx = mx / counter;
my = my / counter;
String centroid = mx + " " + my;
context.write(new Text(centroid), key);
}
}
MapReduce by examples
K-means driver - 1
public static void main(String[] args) throws Exception {
Configuration configuration = new Configuration();
String[] otherArgs = new GenericOptionsParser(configuration, args).getRemainingArgs();
if (otherArgs.length != 3) {
System.err.println("Usage: KMeans <in> <out> <clusters_number>");
System.exit(2);
}
int centroidsNumber = Integer.parseInt(otherArgs[2]);
configuration.setInt(Constants.CENTROID_NUMBER_ARG, centroidsNumber);
configuration.set(Constants.INPUT_FILE, otherArgs[0]);
List<Double[]> centroids = Utils.createRandomCentroids(centroidsNumber);
String centroidsFile = Utils.getFormattedCentroids(centroids);
Utils.writeCentroids(configuration, centroidsFile);
boolean hasConverged = false;
int iteration = 0;
do {
configuration.set(Constants.OUTPUT_FILE, otherArgs[1] + "-" + iteration);
if (!launchJob(configuration)) {
System.exit(1);
}
String newCentroids = Utils.readReducerOutput(configuration);
if (centroidsFile.equals(newCentroids)) {
hasConverged = true;
}
else {
Utils.writeCentroids(configuration, newCentroids);
}
centroidsFile = newCentroids;
iteration++;
} while (!hasConverged);
writeFinalData(configuration, Utils.getCentroids(centroidsFile));
}
MapReduce by examples
private static boolean launchJob(Configuration config) throws Exception {
Job job = Job.getInstance(config);
job.setJobName("KMeans");
job.setJarByClass(KMeans.class);
job.setMapperClass(KMeansMapper.class);
job.setReducerClass(KMeansReducer.class);
job.setMapOutputKeyClass(IntWritable.class);
job.setMapOutputValueClass(Text.class);
job.setNumReduceTasks(1);
job.addCacheFile(new Path(Constants.CENTROIDS_FILE).toUri());
FileInputFormat.addInputPath(job, new Path(config.get(Constants.INPUT_FILE)));
FileOutputFormat.setOutputPath(job, new Path(config.get(Constants.OUTPUT_FILE)));
return job.waitForCompletion(true);
}
K-means driver - 2
MapReduce by examples
K-means
Results:
4.5700 0.5510 2
4.5179 0.6120 2
4.1978 1.5706 2
5.2358 1.7982 2
1.747 3.9052 0
1.0445 5.0108 0
-0.6105 4.7576 0
0.7108 2.8032 1
1.3450 3.9558 0
1.2272 4.9238 0
...
R code to plot data:
points <- read.csv(file="final-data", sep="\t", header=0)
colnames(points)[1] <- "x"
colnames(points)[2] <- "y"
plot(points$x, points$y, col= points$V3+2)

MapReduce by examples
Hints
- Use MapReduce only if you have really big data: SQL or scripting
are less expensive in terms of time needed to obtain the same
results
- Use a lot of defensive checks: when we have a lot of data, we don't
want the computation to be stopped by a trivial NPE :-)
- Testing can save a lot of time!
Thanks!
The code is available on:
https://github.com/andreaiacono/MapReduce
Take a look at my blog:
https://andreaiacono.blogspot.com/

Mapreduce by examples

  • 5. MapReduce by examples MapReduce inspiration The name MapReduce comes from functional programming: - map is the name of a higher-order function that applies a given function to each element of a list. Sample in Scala: val numbers = List(1,2,3,4,5) numbers.map(x => x * x) == List(1,4,9,16,25) - reduce is the name of a higher-order function that analyze a recursive data structure and recombine through use of a given combining operation the results of recursively processing its constituent parts, building up a return value. Sample in Scala: val numbers = List(1,2,3,4,5) numbers.reduce(_ + _) == 15 MapReduce takes an input, splits it into smaller parts, execute the code of the mapper on every part, then gives all the results to one or more reducers that merge all the results into one. src: http://en.wikipedia.org/wiki/Map_(higher-order_function) http://en.wikipedia.org/wiki/Fold_(higher-order_function)
  • 7. MapReduce by examples How does Hadoop work?
Init
- Hadoop divides the input file stored on HDFS into splits (typically of the size of an HDFS block) and assigns every split to a different mapper, trying to assign every split to the mapper where the split physically resides
Mapper
- locally, Hadoop reads the split of the mapper line by line
- locally, Hadoop calls the map() method of the mapper for every line, passing it as the key/value parameters
- the mapper computes its application logic and emits other key/value pairs
Shuffle and sort
- locally, Hadoop's partitioner divides the emitted output of the mapper into partitions, each of which is sent to a different reducer
- locally, Hadoop collects all the different partitions received from the mappers and sorts them by key
Reducer
- locally, Hadoop reads the aggregated partitions line by line
- locally, Hadoop calls the reduce() method of the reducer for every line of the input
- the reducer computes its application logic and emits other key/value pairs
- locally, Hadoop writes the emitted output pairs to HDFS
  • 8. MapReduce by examples Simplified flow (for developers)
  • 9. MapReduce by examples Serializable vs Writable
- Serializable stores the class name and the object representation to the stream; other instances of the class are referred to by a handle to the class name: this approach is not usable with random access
- For the same reason, the sorting needed for the shuffle and sort phase cannot be used with Serializable
- The deserialization process creates a new instance of the object, while Hadoop needs to reuse objects to minimize computation
- Hadoop introduces the two interfaces Writable and WritableComparable that solve these problems
  • 10. MapReduce by examples Writable wrappers
Java primitive: Writable implementation
- boolean: BooleanWritable
- byte: ByteWritable
- short: ShortWritable
- int: IntWritable, VIntWritable
- float: FloatWritable
- long: LongWritable, VLongWritable
- double: DoubleWritable
Java class: Writable implementation
- String: Text
- byte[]: BytesWritable
- Object: ObjectWritable
- null: NullWritable
Java collection: Writable implementation
- array: ArrayWritable, ArrayPrimitiveWritable, TwoDArrayWritable
- Map: MapWritable
- SortedMap: SortedMapWritable
- enum: EnumSetWritable
  • 11. MapReduce by examples Implementing Writable: the SumCount class

public class SumCount implements WritableComparable<SumCount> {

    DoubleWritable sum;
    IntWritable count;

    public SumCount() {
        set(new DoubleWritable(0), new IntWritable(0));
    }

    public SumCount(Double sum, Integer count) {
        set(new DoubleWritable(sum), new IntWritable(count));
    }

    @Override
    public void write(DataOutput dataOutput) throws IOException {
        sum.write(dataOutput);
        count.write(dataOutput);
    }

    @Override
    public void readFields(DataInput dataInput) throws IOException {
        sum.readFields(dataInput);
        count.readFields(dataInput);
    }

    // getters, setters and Comparable overridden methods are omitted
}
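The omitted methods are simple; here is a sketch of what they could look like. The names set(), getSum(), getCount() and addSumCount() come from how the class is used in the following slides, while the bodies below are an assumption; they go inside the SumCount class:

public void set(DoubleWritable sum, IntWritable count) {
    this.sum = sum;
    this.count = count;
}

public DoubleWritable getSum() {
    return sum;
}

public IntWritable getCount() {
    return count;
}

// merges another partial result into this one (used by the Mean reducer)
public void addSumCount(SumCount other) {
    set(new DoubleWritable(sum.get() + other.getSum().get()),
        new IntWritable(count.get() + other.getCount().get()));
}

@Override
public int compareTo(SumCount other) {
    // any consistent total ordering is fine for the shuffle and sort phase;
    // here we order by sum first, then by count
    int bySum = sum.compareTo(other.sum);
    return bySum != 0 ? bySum : count.compareTo(other.count);
}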
  • 12. MapReduce by examples Glossary
- Job: the whole process to execute: the input data, the mapper and reducer execution and the output data
- Task: every job is divided among the several mappers and reducers; a task is the portion of the job that goes to every single mapper or reducer
- Split: the input file is split into several splits (the suggested size is the HDFS block size, 64 MB)
- Record: by default the mapper reads the split a line at a time: each line is a record. Using a class extending FileInputFormat, a record can be composed of more than one line
- Partition: the set of all the key-value pairs that will be sent to a single reducer. The default partitioner uses a hash function on the key to determine which reducer the data is sent to
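The default partitioner is HashPartitioner, which hashes the key and takes the value modulo the number of reducers. It can be replaced with a custom one; as a purely illustrative example (class name and logic are made up), a partitioner that sends all the words starting with the same letter to the same reducer could look like this:

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Partitioner;

public class FirstLetterPartitioner extends Partitioner<Text, IntWritable> {

    @Override
    public int getPartition(Text key, IntWritable value, int numPartitions) {
        if (key.getLength() == 0) {
            return 0;
        }
        char first = Character.toLowerCase(key.toString().charAt(0));
        // non-negative value modulo the number of reducers, as the default partitioner does
        return (first & Integer.MAX_VALUE) % numPartitions;
    }
}

// it is activated in the driver with:
// job.setPartitionerClass(FirstLetterPartitioner.class);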
  • 14. MapReduce by examples WordCount (the Hello World! for MapReduce, available in Hadoop sources) Input Data: The text of the book ”Flatland” By Edwin Abbott. Source: http://www.gutenberg.org/cache/epub/201/pg201.txt We want to count the occurrences of every word of a text file
  • 15. MapReduce by examples WordCount mapper

public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken().trim());
            context.write(word, one);
        }
    }
}
  • 16. MapReduce by examples WordCount reducer

public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
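The driver that wires the mapper and the reducer together is not shown here; a minimal sketch, close to the WordCount example shipped with the Hadoop sources (imports omitted, as in the other slides):

public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}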
  • 17. MapReduce by examples WordCount a 936 ab 6 abbot 3 abbott 2 abbreviated 1 abide 1 ability 1 able 9 ablest 2 abolished 1 abolition 1 about 40 above 22 abroad 1 abrogation 1 abrupt 1 abruptly 1 absence 4 absent 1 absolute 2 ... Results:
  • 18. MapReduce by examples MapReduce testing and debugging
- MRUnit is a testing framework based on JUnit for unit testing mappers, reducers, combiners (we'll see later what they are) and the combination of the three
- Mocking frameworks can be used to mock Context or other Hadoop objects
- LocalJobRunner is a class included in Hadoop that lets us run a complete Hadoop environment locally, in a single JVM, that can be attached to a debugger. LocalJobRunner can run at most one reducer
- Hadoop allows the creation of in-process mini clusters programmatically thanks to the MiniDFSCluster and MiniMRCluster testing classes; debugging is more difficult than with LocalJobRunner because it is multi-threaded and spread over different JVMs. Mini clusters are used for testing the Hadoop sources.
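For example, running a job with LocalJobRunner usually just requires pointing the configuration to the local framework and the local filesystem before submitting it; a minimal sketch:

// forces the local runner and the local filesystem, so the job
// (and a debugger attached to it) runs inside a single JVM
Configuration conf = new Configuration();
conf.set("mapreduce.framework.name", "local");
conf.set("fs.defaultFS", "file:///");
Job job = Job.getInstance(conf, "wordcount-debug");
// ...same mapper/reducer/path configuration as usual...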
  • 19. MapReduce by examples MRUnit test for WordCount @Test public void testMapper() throws Exception { new MapDriver<Object, Text, Text, IntWritable>() .withMapper(new WordCount.TokenizerMapper()) .withInput(NullWritable.get(), new Text("foo bar foo")) .withOutput(new Text("foo"), new IntWritable(1)) .withOutput(new Text("bar"), new IntWritable(1)) .withOutput(new Text("foo"), new IntWritable(1)) .runTest(); } @Test public void testReducer() throws Exception { List<IntWritable> fooValues = new ArrayList<>(); fooValues.add(new IntWritable(1)); fooValues.add(new IntWritable(1)); List<IntWritable> barValue = new ArrayList<>(); barValue.add(new IntWritable(1)); new ReduceDriver<Text, IntWritable, Text, IntWritable>() .withReducer(new WordCount.IntSumReducer()) .withInput(new Text("foo"), fooValues) .withInput(new Text("bar"), barValue) .withOutput(new Text("foo"), new IntWritable(2)) .withOutput(new Text("bar"), new IntWritable(1)) .runTest(); }
  • 20. MapReduce by examples @Test public void testMapReduce() throws Exception { new MapReduceDriver<Object, Text, Text, IntWritable, Text, IntWritable>() .withMapper(new WordCount.TokenizerMapper()) .withInput(NullWritable.get(), new Text("foo bar foo")) .withReducer(new WordCount.IntSumReducer()) .withOutput(new Text("bar"), new IntWritable(1)) .withOutput(new Text("foo"), new IntWritable(2)) .runTest(); } MRUnit test for WordCount
  • 21. MapReduce by examples TopN Input Data: The text of the book ”Flatland” By E. Abbott. Source: http://www.gutenberg.org/cache/epub/201/pg201.txt We want to find the top-n used words of a text file
  • 22. MapReduce by examples public static class TopNMapper extends Mapper<Object, Text, Text, IntWritable> { private final static IntWritable one = new IntWritable(1); private Text word = new Text(); private String tokens = "[_|$#<>^=[]*/,;,.-:()?!"']"; @Override public void map(Object key, Text value, Context context) throws IOException, InterruptedException { String cleanLine = value.toString().toLowerCase().replaceAll(tokens, " "); StringTokenizer itr = new StringTokenizer(cleanLine); while (itr.hasMoreTokens()) { word.set(itr.nextToken().trim()); context.write(word, one); } } } TopN mapper
  • 23. MapReduce by examples public static class TopNReducer extends Reducer<Text, IntWritable, Text, IntWritable> { private Map<Text, IntWritable> countMap = new HashMap<>(); @Override public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { int sum = 0; for (IntWritable val : values) { sum += val.get(); } countMap.put(new Text(key), new IntWritable(sum)); } @Override protected void cleanup(Context context) throws IOException, InterruptedException { Map<Text, IntWritable> sortedMap = sortByValues(countMap); int counter = 0; for (Text key: sortedMap.keySet()) { if (counter ++ == 20) { break; } context.write(key, sortedMap.get(key)); } } } TopN reducer
  • 24. MapReduce by examples TopN the 2286 of 1634 and 1098 to 1088 a 936 i 735 in 713 that 499 is 429 you 419 my 334 it 330 as 322 by 317 not 317 or 299 but 279 with 273 for 267 be 252 ... Results:
  • 25. MapReduce by examples TopN
In the shuffle and sort phase, the partitioner will send every single word (the key) with the value "1" to the reducers. All these network transmissions can be minimized if we locally reduce the data that the mapper will emit. This is obtained by a Combiner.
  • 26. MapReduce by examples TopN combiner

public static class Combiner extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }
}
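To take effect, the combiner has to be registered in the driver; a sketch of the wiring, assuming the driver lives in the same TopN class as the mapper and reducer shown in the previous slides:

Job job = Job.getInstance(new Configuration(), "topn");
job.setJarByClass(TopN.class);
job.setMapperClass(TopNMapper.class);
// the combiner runs locally on each mapper's output before the shuffle and sort phase
job.setCombinerClass(Combiner.class);
job.setReducerClass(TopNReducer.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(IntWritable.class);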
  • 27. MapReduce by examples TopN Without combiner Map input records=4239 Map output records=37817 Map output bytes=359621 Input split bytes=118 Combine input records=0 Combine output records=0 Reduce input groups=4987 Reduce shuffle bytes=435261 Reduce input records=37817 Reduce output records=20 Hadoop output With combiner Map input records=4239 Map output records=37817 Map output bytes=359621 Input split bytes=116 Combine input records=37817 Combine output records=20 Reduce input groups=20 Reduce shuffle bytes=194 Reduce input records=20 Reduce output records=20
  • 28. MapReduce by examples Combiners If the function computed is - commutative [a + b = b + a] - associative [a + (b + c) = (a + b) + c] we can reuse the reducer as a combiner! Max function works: max (max(a,b), max(c,d,e)) = max (a,b,c,d,e) Mean function does not work: mean(mean(a,b), mean(c,d,e)) != mean(a,b,c,d,e)
  • 29. MapReduce by examples Combiners
Advantages of using combiners
- Network transmissions are minimized
Disadvantages of using combiners
- Hadoop does not guarantee the execution of a combiner: it can be executed 0, 1 or multiple times on the same input
- Key-value pairs emitted by the mapper are stored in the local filesystem, and the execution of the combiner could cause expensive IO operations
  • 30. MapReduce by examples private Map<String, Integer> countMap = new HashMap<>(); private String tokens = "[_|$#<>^=[]*/,;,.-:()?!"']"; @Override public void map(Object key, Text value, Context context) throws IOException, InterruptedException { String cleanLine = value.toString().toLowerCase().replaceAll(tokens, " "); StringTokenizer itr = new StringTokenizer(cleanLine); while (itr.hasMoreTokens()) { String word = itr.nextToken().trim(); if (countMap.containsKey(word)) { countMap.put(word, countMap.get(word)+1); } else { countMap.put(word, 1); } } } @Override protected void cleanup(Context context) throws InterruptedException { for (String key: countMap.keySet()) { context.write(new Text(key), new IntWritable(countMap.get(key))); } } TopN in-mapper combiner
  • 31. MapReduce by examples private Map<Text, IntWritable> countMap = new HashMap<>(); @Override public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException { int sum = 0; for (IntWritable val : values) { sum += val.get(); } countMap.put(new Text(key), new IntWritable(sum)); } @Override protected void cleanup(Context context) throws InterruptedException { Map<Text, IntWritable> sortedMap = sortByValues(countMap); int counter = 0; for (Text key: sortedMap.keySet()) { if (counter ++ == 20) { break; } context.write(key, sortedMap.get(key)); } } TopN in-mapper reducer
  • 32. MapReduce by examples Combiners - output Without combiner Map input records=4239 Map output records=37817 Map output bytes=359621 Input split bytes=118 Combine input records=0 Combine output records=0 Reduce input groups=4987 Reduce shuffle bytes=435261 Reduce input records=37817 Reduce output records=20 With combiner Map input records=4239 Map output records=37817 Map output bytes=359621 Input split bytes=116 Combine input records=37817 Combine output records=20 Reduce input groups=20 Reduce shuffle bytes=194 Reduce input records=20 Reduce output records=20 With in-mapper Map input records=4239 Map output records=4987 Map output bytes=61522 Input split bytes=118 Combine input records=0 Combine output records=0 Reduce input groups=4987 Reduce shuffle bytes=71502 Reduce input records=4987 Reduce output records=20 With in-mapper and combiner Map input records=4239 Map output records=4987 Map output bytes=61522 Input split bytes=116 Combine input records=4987 Combine output records=20 Reduce input groups=20 Reduce shuffle bytes=194 Reduce input records=20 Reduce output records=20
  • 33. MapReduce by examples Mean Input Data: Temperature in Milan (DDMMYYY, MIN, MAX) 01012000, -4.0, 5.0 02012000, -5.0, 5.1 03012000, -5.0, 7.7 … 29122013, 3.0, 9.0 30122013, 0.0, 9.8 31122013, 0.0, 9.0 We want to find the mean max temperature for every month Data source: http://archivio-meteo.distile.it/tabelle-dati-archivio-meteo/
  • 34. MapReduce by examples Mean mapper private Map<String, List<Double>> maxMap = new HashMap<>(); @Override public void map(Object key, Text value, Context context) throws IOException, InterruptedException { String[] values = value.toString().split((",")); if (values.length != 3) return; String date = values[DATE]; Text month = new Text(date.substring(2)); Double max = Double.parseDouble(values[MAX]); if (!maxMap.containsKey(month)) { maxMap.put(month, new ArrayList<Double>()); } maxMap.get(month).add(max); } @Override protected void cleanup(Mapper.Context context) throws InterruptedException { for (Text month: maxMap.keySet()) { List<Double> temperatures = maxMap.get(month); Double sum = 0d; for (Double max: temperatures) { sum += max; } context.write(month, new DoubleWritable(sum)); } }
  • 35. MapReduce by examples Is this correct? Mean mapper private Map<String, List<Double>> maxMap = new HashMap<>(); @Override public void map(Object key, Text value, Context context) throws IOException, InterruptedException { String[] values = value.toString().split((",")); if (values.length != 3) return; String date = values[DATE]; Text month = new Text(date.substring(2)); Double max = Double.parseDouble(values[MAX]); if (!maxMap.containsKey(month)) { maxMap.put(month, new ArrayList<Double>()); } maxMap.get(month).add(max); } @Override protected void cleanup(Mapper.Context context) throws InterruptedException { for (Text month: maxMap.keySet()) { List<Double> temperatures = maxMap.get(month); Double sum = 0d; for (Double max: temperatures) { sum += max; } context.write(month, new DoubleWritable(sum)); } }
  • 36. MapReduce by examples Mean Mapper #1: lines 1, 2 Mapper #2: lines 3, 4, 5 Mapper#1: mean = (10.0 + 20.0) / 2 = 15.0 Mapper#2: mean = (2.0 + 4.0 + 3.0) / 3 = 3.0 Reducer mean = (15.0 + 3.0) / 2 = 9.0 But the correct mean is: (10.0 + 20.0 + 2.0 + 4.0 + 3.0) / 5 = 7.8 Sample input data: 01012000, 0.0, 10.0 02012000, 0.0, 20.0 03012000, 0.0, 2.0 04012000, 0.0, 4.0 05012000, 0.0, 3.0 Not correct!
  • 37. MapReduce by examples private Map<Text, List<Double>> maxMap = new HashMap<>(); @Override public void map(Object key, Text value, Context context) throws IOException, InterruptedException { String[] values = value.toString().split((",")); if (values.length != 3) return; String date = values[DATE]; Text month = new Text(date.substring(2)); Double max = Double.parseDouble(values[MAX]); if (!maxMap.containsKey(month)) { maxMap.put(month, new ArrayList<Double>()); } maxMap.get(month).add(max); } @Override protected void cleanup(Context context) throws InterruptedException { for (Text month: maxMap.keySet()) { List<Double> temperatures = maxMap.get(month); Double sum = 0d; for (Double max: temperatures) sum += max; context.write(month, new SumCount(sum, temperatures.size())); } } Mean mapper This is correct!
  • 38. MapReduce by examples Mean reducer

private Map<Text, SumCount> sumCountMap = new HashMap<>();

@Override
public void reduce(Text key, Iterable<SumCount> values, Context context) throws IOException, InterruptedException {
    SumCount totalSumCount = new SumCount();
    for (SumCount sumCount : values) {
        totalSumCount.addSumCount(sumCount);
    }
    sumCountMap.put(new Text(key), totalSumCount);
}

@Override
protected void cleanup(Context context) throws IOException, InterruptedException {
    for (Text month : sumCountMap.keySet()) {
        double sum = sumCountMap.get(month).getSum().get();
        int count = sumCountMap.get(month).getCount().get();
        context.write(month, new DoubleWritable(sum / count));
    }
}
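In this job the mapper emits SumCount values while the reducer emits DoubleWritable values, so the driver (not shown in the slides) has to declare the map output types explicitly; a minimal sketch of the relevant lines, assuming the rest of the driver follows the WordCount pattern:

// when the map output types differ from the final output types,
// they must be declared explicitly (otherwise Hadoop assumes they are the same)
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(SumCount.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(DoubleWritable.class);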
  • 39. MapReduce by examples Mean 022012 7.230769230769231 022013 7.2 022010 7.851851851851852 022011 9.785714285714286 032013 10.741935483870968 032010 13.133333333333333 032012 18.548387096774192 032011 13.741935483870968 022003 9.278571428571428 022004 10.41034482758621 022005 9.146428571428572 022006 8.903571428571428 022000 12.344444444444441 022001 12.164285714285715 022002 11.839285714285717 ... Results:
  • 40. MapReduce by examples Mean Result: R code to plot data: temp <- read.csv(file="results.txt", sep="t", header=0) names(temp) <- c("date","temperature") ym <- as.yearmon(temp$date, format = "%m-%Y"); year <- format(ym, "%Y") month <- format(ym, "%m") ggplot(temp, aes(x=month, y=temperature, group=year)) + geom_line(aes(colour = year))
  • 41. MapReduce by examples Join Input Data - Users file: "user_ptr_id" "reputation" "gold" "silver" "bronze" "100006402" "18" "0" "0" "0" "100022094" "6354" "4" "12" "50" "100018705" "76" "0" "3" "4" … Input Data - Posts file: "id" "title" "tagnames" "author_id" "body" "node_type" "parent_id" "abs_parent_id" "added_at" "score" … "5339" "Whether pdf of Unit and Homework is available?" "cs101 pdf" "100000458" "" "question" "N" "N" "2012-02-25 08:09:06.787181+00" "1" "2312" "Feedback on Audio Quality" "cs101 production audio" "100005361" "<p>We are looking for feedback on the audio in our videos. Tell us what you think and try to be as <em>specific</em> as possible.</p>" "question" "N" "N" "2012-02-23 00:28:02.321344+00" "2" "2741" "where is the sample page for homework?" "cs101 missing_info homework" "100001178" "<p>I am sorry if I am being a nob ... but I do not seem to find any information regarding the sample page reffered to on the 1 question of homework 1." "question" "N" "N" "2012-02-23 09:15:02.270861+00" "0" ... We want to combine information from the users file with Information from the posts file (a join) Data source: http://content.udacity-data.com/course/hadoop/forum_data.tar.gz
  • 42. MapReduce by examples Join mapper

@Override
public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
    FileSplit fileSplit = (FileSplit) context.getInputSplit();
    String filename = fileSplit.getPath().getName();
    String[] fields = value.toString().split("\t");
    if (filename.equals("forum_nodes_no_lf.tsv")) {
        if (fields.length > 5) {
            String authorId = fields[3].substring(1, fields[3].length() - 1);
            String type = fields[5].substring(1, fields[5].length() - 1);
            if (type.equals("question")) {
                context.write(new Text(authorId), one);
            }
        }
    } else {
        String authorId = fields[0].substring(1, fields[0].length() - 1);
        String reputation = fields[1].substring(1, fields[1].length() - 1);
        try {
            int reputationValue = Integer.parseInt(reputation) + 2;
            context.write(new Text(authorId), new IntWritable(reputationValue));
        } catch (NumberFormatException nfe) {
            // just skips this record
        }
    }
}
  • 43. MapReduce by examples Join reducer

@Override
public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
    int postsNumber = 0;
    int reputation = 0;
    String authorId = key.toString();
    for (IntWritable value : values) {
        int intValue = value.get();
        if (intValue == 1) {
            postsNumber++;
        } else {
            reputation = intValue - 2;
        }
    }
    context.write(new Text(authorId), new Text(reputation + "\t" + postsNumber));
}
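Since the mapper distinguishes the two input files by their name, a single input directory containing both the users file and the posts file is enough; a sketch of what the driver could look like (the class names Join, JoinMapper and JoinReducer are assumptions):

Configuration conf = new Configuration();
Job job = Job.getInstance(conf, "reduce-side join");
job.setJarByClass(Join.class);
job.setMapperClass(JoinMapper.class);
job.setReducerClass(JoinReducer.class);
// the mapper emits IntWritable values, the reducer emits Text values
job.setMapOutputKeyClass(Text.class);
job.setMapOutputValueClass(IntWritable.class);
job.setOutputKeyClass(Text.class);
job.setOutputValueClass(Text.class);
// the input directory contains both the users and the posts file
FileInputFormat.addInputPath(job, new Path(args[0]));
FileOutputFormat.setOutputPath(job, new Path(args[1]));
System.exit(job.waitForCompletion(true) ? 0 : 1);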
  • 44. MapReduce by examples Join USER_ID REPUTATION SCORE 00081537 1019 3 100011949 12 1 100105405 36 1 100000628 60 2 100011948 231 1 100000629 2090 1 100000623 1 2 100011945 457 4 100000624 167 1 100011944 114 3 100000625 1 1 100000626 93 1 100011942 11 1 100000620 1 1 100011940 35 1 100000621 2 1 100080016 11 2 100080017 53 1 100081549 1 1 ... Results:
  • 45. MapReduce by examples Join Result: R code to plot data: users <- read.csv(file="part-r-00000",sep='t', header=0) users$V2[which(users$V2 > 10000,)] <- 0 plot(users$V2, users$V3, xlab="Reputation", ylab="Number of posts", pch=19, cex=0.4)
  • 46. MapReduce by examples K-means Input Data: A random set of points 2.2705 0.9178 1.8600 2.1002 2.0915 1.3679 -0.1612 0.8481 -1.2006 -1.0423 1.0622 0.3034 0.5138 2.5542 ... We want to aggregate 2D points in clusters using K-means algorithm R code to generate dataset: N <- 100 x <- rnorm(N)+1; y <- rnorm(N)+1; dat <- data.frame(x, y) x <- rnorm(N)+5; y <- rnorm(N)+1; dat <- rbind(dat, data.frame(x, y)) x <- rnorm(N)+1; y <- rnorm(N)+5; dat <- rbind(dat, data.frame(x, y))
  • 48. MapReduce by examples K-means mapper @Override protected void setup(Context context) throws IOException, InterruptedException { URI[] cacheFiles = context.getCacheFiles(); centroids = Utils.readCentroids(cacheFiles[0].toString()); } @Override public void map(Object key, Text value, Context context) throws IOException, InterruptedException { String[] xy = value.toString().split(" "); double x = Double.parseDouble(xy[0]); double y = Double.parseDouble(xy[1]); int index = 0; double minDistance = Double.MAX_VALUE; for (int j = 0; j < centroids.size(); j++) { double cx = centroids.get(j)[0]; double cy = centroids.get(j)[1]; double distance = Utils.euclideanDistance(cx, cy, x, y); if (distance < minDistance) { index = j; minDistance = distance; } } context.write(new IntWritable(index), value); }
  • 49. MapReduce by examples K-means reducer public class KMeansReducer extends Reducer<IntWritable, Text, Text, IntWritable> { @Override protected void reduce(IntWritable key, Iterable<Text> values, Context context) throws IOException, InterruptedException { Double mx = 0d; Double my = 0d; int counter = 0; for (Text value: values) { String[] temp = value.toString().split(" "); mx += Double.parseDouble(temp[0]); my += Double.parseDouble(temp[1]); counter ++; } mx = mx / counter; my = my / counter; String centroid = mx + " " + my; context.write(new Text(centroid), key); } }
  • 50. MapReduce by examples K-means driver - 1public static void main(String[] args) throws Exception { Configuration configuration = new Configuration(); String[] otherArgs = new GenericOptionsParser(configuration, args).getRemainingArgs(); if (otherArgs.length != 3) { System.err.println("Usage: KMeans <in> <out> <clusters_number>"); System.exit(2); } int centroidsNumber = Integer.parseInt(otherArgs[2]); configuration.setInt(Constants.CENTROID_NUMBER_ARG, centroidsNumber); configuration.set(Constants.INPUT_FILE, otherArgs[0]); List<Double[]> centroids = Utils.createRandomCentroids(centroidsNumber); String centroidsFile = Utils.getFormattedCentroids(centroids); Utils.writeCentroids(configuration, centroidsFile); boolean hasConverged = false; int iteration = 0; do { configuration.set(Constants.OUTPUT_FILE, otherArgs[1] + "-" + iteration); if (!launchJob(configuration)) { System.exit(1); } String newCentroids = Utils.readReducerOutput(configuration); if (centroidsFile.equals(newCentroids)) { hasConverged = true; } else { Utils.writeCentroids(configuration, newCentroids); } centroidsFile = newCentroids; iteration++; } while (!hasConverged); writeFinalData(configuration, Utils.getCentroids(centroidsFile)); }
  • 51. MapReduce by examples private static boolean launchJob(Configuration config) { Job job = Job.getInstance(config); job.setJobName("KMeans"); job.setJarByClass(KMeans.class); job.setMapperClass(KMeansMapper.class); job.setReducerClass(KMeansReducer.class); job.setMapOutputKeyClass(IntWritable.class); job.setMapOutputValueClass(Text.class); job.setNumReduceTasks(1); job.addCacheFile(new Path(Constants.CENTROIDS_FILE).toUri()); FileInputFormat.addInputPath(job, new Path(config.get(Constants.INPUT_FILE))); FileOutputFormat.setOutputPath(job, new Path(config.get(Constants.OUTPUT_FILE))); return job.waitForCompletion(true); } K-means driver - 2
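The Utils helper methods (readCentroids, euclideanDistance, createRandomCentroids, ...) are not shown in the slides; the actual code is in the repository, but the two used by the mapper are simple enough to sketch here as an assumption of how they could be implemented:

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class Utils {

    // straight-line distance between the points (x1, y1) and (x2, y2)
    public static double euclideanDistance(double x1, double y1, double x2, double y2) {
        return Math.sqrt(Math.pow(x1 - x2, 2) + Math.pow(y1 - y2, 2));
    }

    // k random 2D centroids used to bootstrap the first iteration
    public static List<Double[]> createRandomCentroids(int k) {
        List<Double[]> centroids = new ArrayList<>();
        Random random = new Random();
        for (int i = 0; i < k; i++) {
            centroids.add(new Double[] {random.nextDouble() * 10, random.nextDouble() * 10});
        }
        return centroids;
    }
}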
  • 52. MapReduce by examples K-means Results: 4.5700 0.5510 2 4.5179 0.6120 2 4.1978 1.5706 2 5.2358 1.7982 2 1.747 3.9052 0 1.0445 5.0108 0 -0.6105 4.7576 0 0.7108 2.8032 1 1.3450 3.9558 0 1.2272 4.9238 0 ... R code to plot data: points <- read.csv(file="final-data", sep="t", header=0) colnames(points)[1] <- "x" colnames(points)[2] <- "y" plot(points$x, points$y, col= points$V3+2)
  • 53. MapReduce by examples Hints
- Use MapReduce only if you have really big data: SQL or scripting are less expensive in terms of time needed to obtain the same results
- Use a lot of defensive checks: when we have a lot of data, we don't want the computation to be stopped by a trivial NPE :-)
- Testing can save a lot of time!
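As an example of such a defensive check, a mapper that parses numeric fields can simply skip malformed records instead of letting a single bad line kill the whole job. This is a sketch, not taken from the repository, modeled on the Mean mapper input format:

@Override
public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
    if (value == null) {
        return;
    }
    String[] fields = value.toString().split(",");
    if (fields.length != 3) {
        // malformed record: skip it instead of failing the whole job
        return;
    }
    try {
        Text month = new Text(fields[0].trim().substring(2));
        double max = Double.parseDouble(fields[2].trim());
        context.write(month, new DoubleWritable(max));
    } catch (NumberFormatException | StringIndexOutOfBoundsException e) {
        // bad number or too short a date: just skip this record
    }
}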
  • 54. Thanks! The code is available on: https://github.com/andreaiacono/MapReduce Take a look at my blog: https://andreaiacono.blogspot.com/