Spark API Master
Contents

1 Preface
2 Shell Configuration
  2.1 Adjusting the amount of memory
  2.2 Adjusting the number of worker threads
  2.3 Adding a Listener to the Logging System
3 The RDD API
  3.1 aggregate
  3.2 cartesian
  3.3 checkpoint
  3.4 coalesce, repartition
  3.5 cogroup[Pair], groupWith[Pair]
  3.6 collect, toArray
  3.7 collectAsMap[Pair]
  3.8 combineByKey[Pair]
  3.9 compute
  3.10 context, sparkContext
  3.11 count
  3.12 countApprox
  3.13 countByKey[Pair]
  3.14 countByKeyApprox[Pair]
  3.15 countByValue
  3.16 countByValueApprox
  3.17 countApproxDistinct
  3.18 countApproxDistinctByKey[Pair]
  3.19 dependencies
  3.20 distinct
  3.21 first
  3.22 filter
  3.23 filterWith
  3.24 flatMap
  3.25 flatMapValues[Pair]
  3.26 flatMapWith
  3.27 fold
  3.28 foldByKey[Pair]
  3.29 foreach
  3.30 foreachPartition
  3.31 foreachWith
  3.32 generator, setGenerator
  3.33 getCheckpointFile
  3.34 preferredLocations
  3.35 getStorageLevel
  3.36 glom
  3.37 groupBy
  3.38 groupByKey[Pair]
  3.39 histogram[Double]
  3.40 id
  3.41 isCheckpointed
  3.42 iterator
  3.43 join[Pair]
  3.44 keyBy
  3.45 keys[Pair]
  3.46 leftOuterJoin[Pair]
  3.47 lookup[Pair]
  3.48 map
  3.49 mapPartitions
  3.50 mapPartitionsWithContext
  3.51 mapPartitionsWithIndex
  3.52 mapPartitionsWithSplit
  3.53 mapValues[Pair]
  3.54 mapWith
  3.55 mean[Double], meanApprox[Double]
  3.56 name, setName
  3.57 partitionBy[Pair]
  3.58 partitioner
  3.59 partitions
  3.60 persist, cache
  3.61 pipe
  3.62 reduce
  3.63 reduceByKey[Pair], reduceByKeyLocally[Pair], reduceByKeyToDriver[Pair]
  3.64 rightOuterJoin[Pair]
  3.65 sample
  3.66 saveAsHadoopFile[Pair], saveAsHadoopDataset[Pair], saveAsNewAPIHadoopFile[Pair]
  3.67 saveAsObjectFile
  3.68 saveAsSequenceFile[SeqFile]
  3.69 saveAsTextFile
  3.70 stats[Double]
  3.71 sortByKey[Ordered]
  3.72 stdev[Double], sampleStdev[Double]
  3.73 subtract
  3.74 subtractByKey[Pair]
  3.75 sum[Double], sumApprox[Double]
  3.76 take
  3.77 takeOrdered
  3.78 takeSample
  3.79 toDebugString
  3.80 toJavaRDD
  3.81 top
  3.82 toString
  3.83 union, ++
  3.84 unpersist
  3.85 values[Pair]
  3.86 variance[Double], sampleVariance[Double]
  3.87 zip
  3.88 zipPartitions
4 Further Topics
  4.1 Reading from HDFS
1 Preface
Spark is an advanced open-source cluster computing system that is capable of handling extremely large data sets. It was first published by researchers at UC Berkeley, and its popularity has increased ever since. Due to its real-time properties and efficient usage of resources, Spark has become a very popular alternative to well-established computational software for big data.
Spark is still actively being maintained and further developed by its original creators from UC Berkeley. Hence, this command reference, including the code snippets and sample outputs shown, should be considered an overview of the status quo of this amazing piece of software technology. Specifically, the API examples in this document are for Spark version 0.9. However, we do not expect the API to change much in future releases.
This document does not cover any installation or distribution related topics. For
installation instructions, please refer to the Apache Spark website.
2 Shell Configuration
One of the strongest features of Spark is its shell. The Spark shell allows users to type and execute commands in a Unix-terminal-like fashion. The preferred language to use is probably Scala, a language that runs on the Java VM and extends the object-oriented model of Java with many features and concepts from functional programming. Below are just a few of the more useful Spark shell configuration parameters.
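2.1 Adjusting the amount of memory
A minimal sketch, assuming the SPARK_MEM environment variable that the Spark 0.9 launch scripts read when the shell is started (treat the variable name as an assumption):

export SPARK_MEM=1g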
In the above example we are setting the maximum amount of memory to 1 GB.
2.2 Adjusting the number of worker threads
If you run Spark in local mode you can also set the number of worker threads in one setting as follows:
export MASTER=local[32]

3 The RDD API
3.1 aggregate
The aggregate-method provides an interface for performing highly customized reductions
and aggregations with a RDD. However, due to the way Scala and Spark execute and
process data, care must be taken to achieve deterministic behavior. The following list
contains a few observations we made while experimenting with aggregate:
- The reduce and combine functions have to be commutative and associative.
- As can be seen from the function definition below, the output of the combiner must be of the same type as its input. This is necessary because Spark will chain-execute it.
- The zero value is the initial value of the U component when either seqOp or combOp is executed for the first element of its domain of influence. Depending on what you want to achieve, you may have to change it. However, to make your code deterministic, make sure that your code will yield the same result regardless of the number or size of partitions.
- Do not assume any execution order, either for the partition computations or for combining the partition results.
- The neutral zeroValue is applied at the beginning of each sequence of reduces within the individual partitions and again when the outputs of separate partitions are combined.
- Why have two separate combine functions? The first function maps the input values into the result space. Note that the aggregation data type (the first input and the output) can be different (U ≠ T). The second function reduces these mapped values within the result space.
- Why would one want to use two input data types? Let us assume we do an archaeological site survey using a metal detector. While walking through the site we take GPS coordinates of important findings based on the output of the metal detector. Later, we intend to draw an image of a map that highlights these locations using the aggregate function. In this case the zeroValue could be an area map with no highlights. The possibly huge set of input data is stored as GPS coordinates across many partitions. seqOp could convert the GPS coordinates to map coordinates and put a marker on the map at the respective position. combOp will receive these highlights as partial maps and combine them into a single final output map.
Listing 3.1: Variants
def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U
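The example discussed in the next paragraph is not shown above; a reconstructed sketch (treat the exact input list as an assumption) in which the empty string is the last element of its partition:

val z = sc.parallelize(List("12", "23", "345", ""), 2)
z.aggregate("")((x, y) => math.min(x.length, y.length).toString, (x, y) => x + y)
// yields "10" (or "01"; the order in which the two partition results are combined is not fixed)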
The main issue with the code above is that the result of the inner min is a string of
length 1. The zero in the output is due to the empty string being the last string in the
list. We see this result because we are not recursively reducing any further within the
partition for the final string.
Listing 3.3: Examples 2
val z = sc.parallelize(List("12", "23", "", "345"), 2)
z.aggregate("")((x, y) => math.min(x.length, y.length).toString, (x, y) => x + y)
res144: String = 11
In contrast to the previous example, this example has the empty string at the beginning of the second partition. This results in a length of zero being input to the second reduce, which then promotes it to a length of 1. (Warning: The above example shows bad design since the output is dependent on the order of the data inside the partitions.)
3.2 cartesian
Computes the cartesian product between two RDDs (i.e. each item of the first RDD is joined with each item of the second RDD) and returns them as a new RDD. (Warning: Be careful when using this function! Memory consumption can quickly become an issue!)
Listing 3.4: Variants
def cartesian[U: ClassTag](other: RDD[U]): RDD[(T, U)]
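A short illustrative sketch (not taken from the original listings):

val x = sc.parallelize(List(1, 2, 3))
val y = sc.parallelize(List(4, 5))
x.cartesian(y).collect
// Array((1,4), (1,5), (2,4), (2,5), (3,4), (3,5)) -- element order may vary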
3.3 checkpoint
Will create a checkpoint when the RDD is computed next. Checkpointed RDDs are stored as a binary file within the checkpoint directory, which can be specified using the Spark context. (Warning: Spark applies lazy evaluation. Checkpointing will not occur until an action is invoked.)
Important note: the checkpoint directory you specify should exist on all slaves. As an alternative you could use an HDFS directory URL as well.
Listing 3.6: Variants
def checkpoint ()
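A small usage sketch (not from the original document; the directory name is just an example and must exist on all workers):

sc.setCheckpointDir("my_directory_name")   // hypothetical local path
val a = sc.parallelize(1 to 500, 5)
a.checkpoint
a.count   // the action triggers the actual checkpointing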
3.5 cogroup[Pair], groupWith[Pair]
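cogroup groups the values of two (or three) key-value RDDs that share the same key. The definitions of a, b and c used by the first two examples below are not shown; they presumably looked like this (reconstructed from the outputs, so treat it as an assumption):

val a = sc.parallelize(List(1, 2, 1, 3), 1)
val b = a.map((_, "b"))
val c = a.map((_, "c"))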
b.cogroup(c).collect
res7: Array[(Int, (Seq[String], Seq[String]))] = Array(
  (2,(ArrayBuffer(b),ArrayBuffer(c))),
  (3,(ArrayBuffer(b),ArrayBuffer(c))),
  (1,(ArrayBuffer(b, b),ArrayBuffer(c, c)))
)

val d = a.map((_, "d"))
b.cogroup(c, d).collect
res9: Array[(Int, (Seq[String], Seq[String], Seq[String]))] = Array(
  (2,(ArrayBuffer(b),ArrayBuffer(c),ArrayBuffer(d))),
  (3,(ArrayBuffer(b),ArrayBuffer(c),ArrayBuffer(d))),
  (1,(ArrayBuffer(b, b),ArrayBuffer(c, c),ArrayBuffer(d, d)))
)

val x = sc.parallelize(List((1, "apple"), (2, "banana"), (3, "orange"), (4, "kiwi")), 2)
val y = sc.parallelize(List((5, "computer"), (1, "laptop"), (1, "desktop"), (4, "iPad")), 2)
x.cogroup(y).collect
res23: Array[(Int, (Seq[String], Seq[String]))] = Array(
  (4,(ArrayBuffer(kiwi),ArrayBuffer(iPad))),
  (2,(ArrayBuffer(banana),ArrayBuffer())),
  (3,(ArrayBuffer(orange),ArrayBuffer())),
  (1,(ArrayBuffer(apple),ArrayBuffer(laptop, desktop))),
  (5,(ArrayBuffer(),ArrayBuffer(computer))))
3.7 collectAsMap[Pair]
Similar to collect, but works on key-value RDDs and converts them into Scala maps to
preserve their key-value structure.
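A short sketch (not from the original listings):

val a = sc.parallelize(List(1, 2, 1, 3), 1)
val b = a.zip(a)
b.collectAsMap
// e.g. Map(2 -> 2, 1 -> 1, 3 -> 3)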
3.8 combineByKey[Pair]
Very efficient implementation that combines the values of a RDD consisting of two-component tuples by applying multiple aggregators one after another.
Listing 3.16: Variants
def combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C): RDD[(K, C)]
def combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, numPartitions: Int): RDD[(K, C)]
def combineByKey[C](createCombiner: V => C, mergeValue: (C, V) => C, mergeCombiners: (C, C) => C, partitioner: Partitioner, mapSideCombine: Boolean = true, serializerClass: String = null): RDD[(K, C)]
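A short sketch (not from the original listings) that collects all values per key into a list:

val pairs = sc.parallelize(List(("a", 1), ("b", 2), ("a", 3)), 2)
val grouped = pairs.combineByKey(
  (v: Int) => List(v),                         // createCombiner: start a list for a new key
  (acc: List[Int], v: Int) => v :: acc,        // mergeValue: add a value to an existing list
  (l1: List[Int], l2: List[Int]) => l1 ::: l2) // mergeCombiners: merge lists from different partitions
grouped.collect
// e.g. Array((a,List(1, 3)), (b,List(2))) -- order within the lists may vary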
3.9 compute
Executes dependencies and computes the actual representation of the RDD. This function should not be called directly by users.
Listing 3.18: Variants
def compute ( split : Partition , context : TaskContext ) : Iterator [ T ]
3.11 count
Returns the number of items stored within a RDD.
Listing 3.21: Variants
def count () : Long
3.12 countApprox
Marked as experimental feature! Experimental features are currently not covered by this
document!
Listing 3.23: Variants
def countApprox(timeout: Long, confidence: Double = 0.95): PartialResult[BoundedDouble]
3.13 countByKey[Pair]
Very similar to count, but counts the values of a RDD consisting of two-component
tuples for each distinct key separately.
Listing 3.24: Variants
def countByKey () : Map [K , Long ]
3.14 countByKeyApprox[Pair]
Marked as experimental feature! Experimental features are currently not covered by this
document!
Listing 3.26: Variants
def countByKeyApprox(timeout: Long, confidence: Double = 0.95): PartialResult[Map[K, BoundedDouble]]
3.15 countByValue
Returns a map that contains all unique values of the RDD and their respective occurrence
counts. (Warning: This operation will finally aggregate the information in a single
reducer!)
Listing 3.27: Variants
def countByValue () : Map [T , Long ]
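A short sketch (not from the original listings):

val b = sc.parallelize(List(1, 2, 1, 1, 3))
b.countByValue
// Map(1 -> 3, 2 -> 1, 3 -> 1) -- entry order may vary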
3.16 countByValueApprox
Marked as experimental feature! Experimental features are currently not covered by this
document!
Listing 3.29: Variants
def countByValueApprox(timeout: Long, confidence: Double = 0.95): PartialResult[Map[T, BoundedDouble]]
3.17 countApproxDistinct
Computes the approximate number of distinct values. For large RDDs which are spread
across many nodes, this function may execute faster than other counting methods. The
parameter relativeSD controls the accuracy of the computation.
Listing 3.30: Variants
def countApproxDistinct ( relativeSD : Double = 0.05) : Long
3.18 countApproxDistinctByKey[Pair]
Similar to countApproxDistinct, but computes the approximate number of distinct values
for each distinct key. Hence, the RDD must consist of two-component tuples. For large
RDDs which are spread across many nodes, this function may execute faster than other
counting methods. The parameter relativeSD controls the accuracy of the computation.
Listing 3.32: Variants
def countApproxDistinctByKey(relativeSD: Double = 0.05): RDD[(K, Long)]
def countApproxDistinctByKey(relativeSD: Double, numPartitions: Int): RDD[(K, Long)]
def countApproxDistinctByKey(relativeSD: Double, partitioner: Partitioner): RDD[(K, Long)]
d.countApproxDistinctByKey(0.01).collect
res16: Array[(String, Long)] = Array((Rat,2555), (Cat,2455), (Dog,2425), (Gnu,2513))

d.countApproxDistinctByKey(0.001).collect
res0: Array[(String, Long)] = Array((Rat,2562), (Cat,2464), (Dog,2451), (Gnu,2521))
3.19 dependencies
Returns the dependencies of this RDD, i.e. the RDDs it was derived from.
Listing 3.34: Variants
final def dependencies : Seq [ Dependency [ _ ]]
3.20 distinct
Returns a new RDD that contains each unique value only once.
Listing 3.36: Variants
def distinct () : RDD [ T ]
def distinct ( numPartitions : Int ) : RDD [ T ]
3.21 first
Looks for the very first data item of the RDD and returns it.
Listing 3.38: Variants
def first () : T
3.22 filter
Evaluates a boolean function for each data item of the RDD and puts the items for
which the function returned true into the resulting RDD.
Listing 3.40: Variants
def filter(f: T => Boolean): RDD[T]
When you provide a filter function, it must be able to handle all data items contained in the RDD. Scala provides so-called partial functions to deal with mixed data types. (Tip: Partial functions are very useful if you have some data which may be bad and which you do not want to handle, while for the good (matching) data you want to apply some kind of map function. The following article teaches you about partial functions in a very nice way and explains why case has to be used for partial functions: http://blog.bruchez.name/2011/10/scala-partial-functions-without-phd.html)
Listing 3.42: Examples for mixed data without partial functions
val b = sc.parallelize(1 to 8)
b.filter(_ < 4).collect
res15: Array[Int] = Array(1, 2, 3)
val a = sc.parallelize(List("cat", "horse", 4.0, 3.5, 2, "dog"))
a.filter(_ < 4).collect
<console>:15: error: value < is not a member of Any
This fails because some components of a are not implicitly comparable against integers. collect uses the isDefinedAt property of a function object to determine whether the test function is compatible with each data item. Only data items that pass this test (i.e. the filter) are then mapped using the function object.
Listing 3.43: Examples for mixed data with partial functions
val a = sc.parallelize(List("cat", "horse", 4.0, 3.5, 2, "dog"))
a.collect({
  case a: Int    => "is integer"
  case b: String => "is string"
}).collect
res17: Array[String] = Array(is string, is string, is integer, is string)

val myfunc: PartialFunction[Any, Any] = {
  case a: Int    => "is integer"
  case b: String => "is string"
}
myfunc.isDefinedAt("")
res21: Boolean = true
myfunc.isDefinedAt(1)
res22: Boolean = true
myfunc.isDefinedAt(1.5)
res23: Boolean = false
Be careful! The above code works because it only checks the type itself! If you use operations on this type, you have to explicitly declare what type you want instead of Any. Otherwise the compiler does (apparently) not know what bytecode it should produce:
Listing 3.44: Examples
val myfunc2: PartialFunction[Any, Any] = { case x if (x < 4) => "x" }
<console>:10: error: value < is not a member of Any

val myfunc2: PartialFunction[Int, Any] = { case x if (x < 4) => "x" }
myfunc2: PartialFunction[Int, Any] = <function1>
3.23 filterWith
This is an extended version of filter. It takes two function arguments. The first argument must conform to Int ⇒ T and is executed once per partition. It transforms the partition index to type T. The second function must conform to (U, T) ⇒ Boolean, where T is the transformed partition index and U are the data items from the RDD. Finally, the function has to return either true or false (i.e. apply the filter).
Listing 3.45: Variants
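The original listing is not reproduced here; the signature below is reconstructed from the other *With operations (treat it as an assumption), followed by a small usage sketch that keeps only the items stored in partition 0:

def filterWith[A: ClassTag](constructA: Int => A)(p: (T, A) => Boolean): RDD[T]   // reconstructed signature

val a = sc.parallelize(1 to 9, 3)
a.filterWith(index => index)((x, partitionIndex) => partitionIndex == 0).collect
// Array(1, 2, 3)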
3.24 flatMap
Similar to map, but allows emitting more than one item in the map function.
Listing 3.47: Variants
def flatMap[U: ClassTag](f: T => TraversableOnce[U]): RDD[U]
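A short sketch (not from the original listings) in which each input item emits two output items:

val a = sc.parallelize(1 to 3)
a.flatMap(x => List(x, x * 100)).collect
// Array(1, 100, 2, 200, 3, 300)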
3.25 flatMapValues[Pair]
Very similar to mapValues, but collapses the inherent structure of the values during
mapping.
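A short sketch (not from the original listings) that splits each value into several values:

val a = sc.parallelize(List(("fruit", "apple,banana"), ("veg", "carrot")))
a.flatMapValues(_.split(",")).collect
// Array((fruit,apple), (fruit,banana), (veg,carrot))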
3.26 flatMapWith
Similar to flatMap, but allows accessing the partition index or a derivative of the partition
index from within the flatMap-function.
Listing 3.51: Variants
def flatMapWith[A: ClassTag, U: ClassTag](constructA: Int => A, preservesPartitioning: Boolean = false)(f: (T, A) => Seq[U]): RDD[U]
3.27 fold
Aggregates the values of each partition. The aggregation variable within each partition
is initialized with zeroValue.
Listing 3.53: Variants
def fold(zeroValue: T)(op: (T, T) => T): T
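A small usage sketch (not from the original listings):

val a = sc.parallelize(List(1, 2, 3), 2)
a.fold(0)(_ + _)
// Int = 6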
3.28 foldByKey[Pair]
Very similar to fold, but performs the folding separately for each key of the RDD. This
function is only available if the RDD consists of two-component tuples.
Listing 3.55: Variants
def foldByKey(zeroValue: V)(func: (V, V) => V): RDD[(K, V)]
def foldByKey(zeroValue: V, numPartitions: Int)(func: (V, V) => V): RDD[(K, V)]
def foldByKey(zeroValue: V, partitioner: Partitioner)(func: (V, V) => V): RDD[(K, V)]
3.29 foreach
Executes a function without a return value for each data item.
Listing 3.57: Variants
def foreach(f: T => Unit)
3.30 foreachPartition
Executes a function without a return value for each partition. Access to the data items contained in the partition is provided via the iterator argument.
3.31 foreachWith
Similar to foreach, but allows accessing the partition index or a derivative of the partition
index from within the function.
Listing 3.61: Variants
def foreachWith[A: ClassTag](constructA: Int => A)(f: (T, A) => Unit)
3.33 getCheckpointFile
Returns the path to the checkpoint file, or None if the RDD has not yet been checkpointed.
Listing 3.64: Variants
def getCheckpointFile : Option [ String ]
3.34 preferredLocations
Returns the hosts which are preferred by this RDD. The actual preference of a specific
host depends on various assumptions.
Listing 3.66: Variants
final def preferredLocations ( split : Partition ) : Seq [ String ]
3.35 getStorageLevel
Retrieves the currently set storage level of the RDD. A new storage level can only be assigned if the RDD does not have one set yet. The example below shows the error you will get when you try to reassign the storage level.
Listing 3.67: Variants
def getStorageLevel
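The following sketch is not taken from the original document; the exact wording of the exception message may differ:

val a = sc.parallelize(1 to 100, 2)
a.persist(org.apache.spark.storage.StorageLevel.DISK_ONLY)
a.getStorageLevel.description
// String = Disk Serialized 1x Replicated
a.cache
// java.lang.UnsupportedOperationException: Cannot change storage level of an RDD after it was already assigned a level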
3.36 glom
Assembles an array that contains all elements of each partition and embeds these arrays in an RDD (one array per partition).
Listing 3.69: Variants
def glom () : RDD [ Array [ T ]]
3.37 groupBy
Groups the data items of the RDD according to the key returned by the supplied function and returns the result as a new key-value RDD.
Listing 3.71: Variants
def groupBy[K: ClassTag](f: T => K): RDD[(K, Seq[T])]
def groupBy[K: ClassTag](f: T => K, numPartitions: Int): RDD[(K, Seq[T])]
def groupBy[K: ClassTag](f: T => K, p: Partitioner): RDD[(K, Seq[T])]
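A short usage sketch (not from the original listings):

val a = sc.parallelize(1 to 9, 3)
a.groupBy(x => if (x % 2 == 0) "even" else "odd").collect
// e.g. Array((even,ArrayBuffer(2, 4, 6, 8)), (odd,ArrayBuffer(1, 3, 5, 7, 9)))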
3.38 groupByKey[Pair]
Very similar to groupBy, but instead of supplying a function, the key-component of each
pair will automatically be presented to the partitioner.
Listing 3.73: Variants
def groupByKey(): RDD[(K, Seq[V])]
def groupByKey(numPartitions: Int): RDD[(K, Seq[V])]
def groupByKey(partitioner: Partitioner): RDD[(K, Seq[V])]
3.39 histogram[Double]
These functions take an RDD of doubles and create a histogram with either even spacing (the number of buckets equals bucketCount) or arbitrary spacing based on custom bucket boundaries supplied by the user via an array of double values. The result types of the two variants differ slightly: the first function returns a tuple consisting of two arrays, where the first array contains the computed bucket boundary values and the second array contains the corresponding counts of values (i.e. the histogram). The second variant only returns the histogram, as an array of longs.
Listing 3.75: Variants
def histogram(bucketCount: Int): Pair[Array[Double], Array[Long]]
def histogram(buckets: Array[Double], evenBuckets: Boolean = false): Array[Long]
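A short sketch with two evenly spaced buckets (not from the original listings):

val a = sc.parallelize(List(1.1, 1.2, 2.6, 5.0, 9.9), 2)
a.histogram(2)
// (Array(1.1, 5.5, 9.9), Array(4, 1))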
3.40 id
Retrieves the ID which has been assigned to the RDD by its Spark context.
Listing 3.78: Variants
val id : Int
3.41 isCheckpointed
Indicates whether the RDD has been checkpointed. The flag only turns true once the checkpoint has actually been created.
Listing 3.80: Variants
def isCheckpointed : Boolean
3.42 iterator
Returns a compatible iterator object for a partition of this RDD. This function should
never be called directly.
Listing 3.82: Variants
final def iterator ( split : Partition , context : TaskContext ) : Iterator [ T ]
3.43 join[Pair]
Performs an inner join using two key-value RDDs. Please note that the keys must be
generally comparable to make this work.
Listing 3.83: Variants
def join[W](other: RDD[(K, W)]): RDD[(K, (V, W))]
def join[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, W))]
def join[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, W))]
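A short sketch (not from the original listings):

val a = sc.parallelize(List((1, "apple"), (2, "banana"), (3, "orange")))
val b = sc.parallelize(List((1, "fruit"), (2, "fruit"), (4, "unknown")))
a.join(b).collect
// e.g. Array((1,(apple,fruit)), (2,(banana,fruit)))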
3.44 keyBy
Constructs two-component tuples (key-value pairs) by applying a function on each data
item. The result of the function becomes the key and the original data item becomes
the value of the newly created tuples.
Listing 3.85: Variants
def keyBy[K](f: T => K): RDD[(K, T)]
3.45 keys[Pair]
Extracts the keys from all contained tuples and returns them in a new RDD.
Listing 3.87: Variants
def keys : RDD [ K ]
val a = sc.parallelize(List("dog", "tiger", "lion", "cat", "panther", "eagle"), 2)
val b = a.map(x => (x.length, x))
b.keys.collect
res2: Array[Int] = Array(3, 5, 4, 3, 7, 5)
3.46 leftOuterJoin[Pair]
Performs a left outer join using two key-value RDDs. Please note that the keys must be generally comparable to make this work correctly.
Listing 3.89: Variants
def leftOuterJoin[W](other: RDD[(K, W)]): RDD[(K, (V, Option[W]))]
def leftOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (V, Option[W]))]
def leftOuterJoin[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (V, Option[W]))]
3.47 lookup[Pair]
Scans the RDD for all entries whose key matches the provided key and returns their values as a Scala sequence.
Listing 3.91: Variants
def lookup ( key : K ) : Seq [ V ]
3.48 map
Applies a transformation function on each item of the RDD and returns the result as a
new RDD.
Listing 3.93: Variants
def map[U: ClassTag](f: T => U): RDD[U]
3.49 mapPartitions
This is a specialized map that is called only once per partition. The entire content of the respective partition is available as a sequential stream of values via the input argument (Iterator[T]). The custom function must return yet another Iterator[U]. The combined result iterators are automatically converted into a new RDD. Please note that the tuples (3,4) and (6,7) are missing from the following result due to the partitioning we chose.
Listing 3.95: Variants
def mapPartitions[U: ClassTag](f: Iterator[T] => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]
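The set-up code for the following example is not shown above; a reconstruction (treat the helper as an assumption) that pairs each element with its neighbour within the same partition:

val a = sc.parallelize(1 to 9, 3)
def myfunc[T](iter: Iterator[T]): Iterator[(T, T)] = {
  var res = List[(T, T)]()
  var pre = iter.next
  while (iter.hasNext) {
    val cur = iter.next
    res = (pre, cur) :: res   // collect neighbouring pairs
    pre = cur
  }
  res.iterator
}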
a.mapPartitions(myfunc).collect
res0: Array[(Int, Int)] = Array((2,3), (1,2), (5,6), (4,5), (8,9), (7,8))
3.50 mapPartitionsWithContext
Similar to mapPartitions, but allows accessing information about the processing state
within the mapper.
Listing 3.99: Variants
def mapPartitionsWithContext[U: ClassTag](f: (TaskContext, Iterator[T]) => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]
3.51 mapPartitionsWithIndex
Similar to mapPartitions, but takes two parameters. The first parameter is the index of
the partition and the second is an iterator through all the items within this partition. The
output is an iterator containing the list of items after applying whatever transformation
the function encodes.
Listing 3.101: Variants
def mapPartitionsWithIndex[U: ClassTag](f: (Int, Iterator[T]) => Iterator[U], preservesPartitioning: Boolean = false): RDD[U]

val x = sc.parallelize(List(1, 2, 3, 4, 5, 6, 7, 8, 9, 10), 3)
def myfunc(index: Int, iter: Iterator[Int]): Iterator[String] = {
  iter.toList.map(x => index + "," + x).iterator
}
x.mapPartitionsWithIndex(myfunc).collect()
res10: Array[String] = Array(0,1, 0,2, 0,3, 1,4, 1,5, 1,6, 2,7, 2,8, 2,9, 2,10)
3.52 mapPartitionsWithSplit
This method has been marked as deprecated in the API. So, you should not use this
method anymore. Deprecated methods will not be covered in this document.
3.53 mapValues[Pair]
Takes the values of a RDD that consists of two-component tuples, and applies the provided function to transform each value. Then, it forms new two-component tuples using
the key and the transformed value and stores them in a new RDD.
Listing 3.103: Variants
def mapValues[U](f: V => U): RDD[(K, U)]
3.54 mapWith
This is an extended version of map. It takes two function arguments. The first argument must conform to Int ⇒ T and is executed once per partition. It maps the partition index to some transformed partition index of type T. The second function must conform to (U, T) ⇒ U, where T is the transformed partition index and U is a data item of the RDD. Finally, the function has to return a transformed data item of type U.
Listing 3.105: Variants
def mapWith[A: ClassTag, U: ClassTag](constructA: Int => A, preservesPartitioning: Boolean = false)(f: (T, A) => U): RDD[U]
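The call that produced the following output is not shown; it was presumably along these lines (an assumption):

val x = sc.parallelize(1 to 9, 3)
x.mapWith(index => index)((value, index) => ("Value:" + value, "Index:" + index)).collect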
res0: Array[(String, String)] = Array((Value:1,Index:0), (Value:2,Index:0), (Value:3,Index:0), (Value:4,Index:1), (Value:5,Index:1), (Value:6,Index:1), (Value:7,Index:2), (Value:8,Index:2), (Value:9,Index:2))
3.57 partitionBy[Pair]
Repartitions a key-value RDD using its keys. The partitioner implementation can be supplied as the first argument.
Listing 3.111: Variants
def partitionBy ( partitioner : Partitioner ) : RDD [( K , V ) ]
3.58 partitioner
Specifies a function pointer to the default partitioner that will be used for groupBy,
subtract, reduceByKey (from PairedRDDFunctions), etc. functions.
Listing 3.112: Variants
@transient val partitioner : Option [ Partitioner ]
3.59 partitions
Returns an array of the partition objects associated with this RDD.
Listing 3.113: Variants
final def partitions : Array [ Partition ]
3.61 pipe
Takes the RDD data of each partition and sends it via stdin to a shell-command. The
resulting output of the command is captured and returned as a RDD of string values.
Listing 3.117: Variants
def pipe(command: String): RDD[String]
def pipe(command: String, env: Map[String, String]): RDD[String]
def pipe(command: Seq[String], env: Map[String, String] = Map(), printPipeContext: (String => Unit) => Unit = null, printRDDElement: (T, String => Unit) => Unit = null): RDD[String]
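A short sketch (not from the original listings) that pipes each partition through head:

val a = sc.parallelize(1 to 9, 3)
a.pipe("head -n 1").collect
// Array(1, 4, 7) -- the first element of each of the three partitions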
3.62 reduce
This function provides the well-known reduce functionality in Spark. Please note that any function f you provide should be commutative and associative in order to generate reproducible results.
Listing 3.119: Variants
def reduce(f: (T, T) => T): T
3.64 rightOuterJoin[Pair]
Performs a right outer join using two key-value RDDs. Please note that the keys must be generally comparable to make this work correctly.
Listing 3.123: Variants
def rightOuterJoin[W](other: RDD[(K, W)]): RDD[(K, (Option[V], W))]
def rightOuterJoin[W](other: RDD[(K, W)], numPartitions: Int): RDD[(K, (Option[V], W))]
def rightOuterJoin[W](other: RDD[(K, W)], partitioner: Partitioner): RDD[(K, (Option[V], W))]
3.65 sample
Randomly selects a fraction of the items of a RDD and returns them in a new RDD.
Listing 3.125: Variants
def sample(withReplacement: Boolean, fraction: Double, seed: Int): RDD[T]
3.67 saveAsObjectFile
Saves the RDD in binary format.
Listing 3.128: Variants
def saveAsObjectFile ( path : String )
3.68 saveAsSequenceFile[SeqFile]
Saves the RDD as a Hadoop sequence file.
Listing 3.130: Variants
def saveAsSequenceFile(path: String, codec: Option[Class[_ <: CompressionCodec]] = None)
3.69 saveAsTextFile
Saves the RDD as text files, one data item per line.
Listing 3.132: Variants
def saveAsTextFile(path: String)
def saveAsTextFile(path: String, codec: Class[_ <: CompressionCodec])
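The calls that produced the following output are not shown above; presumably something along these lines (path and element count are assumptions):

val a = sc.parallelize(1 to 10000, 3)
a.saveAsTextFile("mydata_a")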
[cloudera@localhost ~]$ head -n 5 ~/Documents/spark-0.9.0-incubating-bin-cdh4/bin/mydata_a/part-00000
1
2
3
4
5

// Produces 3 output files since we have created the RDD with 3 partitions
[cloudera@localhost ~]$ ll ~/Documents/spark-0.9.0-incubating-bin-cdh4/bin/mydata_a/
-rwxr-xr-x 1 cloudera cloudera 15558 Apr 3 21:11 part-00000
-rwxr-xr-x 1 cloudera cloudera 16665 Apr 3 21:11 part-00001
-rwxr-xr-x 1 cloudera cloudera 16671 Apr 3 21:11 part-00002
3.70 stats[Double]
Simultaneously computes the mean, variance and the standard deviation of all values in
the RDD.
Listing 3.136: Variants
def stats () : StatCounter
val x = sc.parallelize(List(1.0, 2.0, 3.0, 5.0, 20.0, 19.02, 19.29, 11.09, 21.0), 2)
x.stats
res16: org.apache.spark.util.StatCounter = (count: 9, mean: 11.266667, stdev: 8.126859)
3.71 sortByKey[Ordered]
This function sorts the input RDD's data and stores it in a new RDD. The output RDD is a shuffled RDD because it stores data that is output by a reducer which has been shuffled. The implementation of this function is actually very clever. First, it uses a range partitioner to partition the data in ranges within the shuffled RDD. Then it sorts these ranges individually with mapPartitions using standard sort mechanisms.
Listing 3.138: Variants
def sortByKey(ascending: Boolean = true, numPartitions: Int = self.partitions.size): RDD[P]
3.73 subtract
Performs the well-known standard set subtraction operation: A \ B
Listing 3.142: Variants
def subtract(other: RDD[T]): RDD[T]
def subtract(other: RDD[T], numPartitions: Int): RDD[T]
def subtract(other: RDD[T], p: Partitioner): RDD[T]
3.74 subtractByKey[Pair]
Very similar to subtract, but instead of supplying a function, the key-component of each
pair will be automatically used as criterion for removing items from the first RDD.
Listing 3.144: Variants
def subtractByKey [ W : ClassTag ]( other : RDD [( K , W ) ]) : RDD [( K , V ) ]
3.76 take
Extracts the first n items of the RDD and returns them as an array. (Note: This sounds
very easy, but it is actually quite a tricky problem for the implementors of Spark because
the items in question can be in many different partitions.)
Listing 3.148: Variants
def take ( num : Int ) : Array [ T ]
3.77 takeOrdered
Orders the data items of the RDD using their inherent implicit ordering function and
returns the first n items as an array.
Listing 3.150: Variants
def takeOrdered ( num : Int ) ( implicit ord : Ordering [ T ]) : Array [ T ]
3.78 takeSample
Behaves differently from sample in the following respects:
- It returns an exact number of samples (Hint: second parameter).
- It returns an Array instead of an RDD.
- It internally randomizes the order of the items returned.
Listing 3.152: Variants
def takeSample ( withReplacement : Boolean , num : Int , seed : Int ) : Array [ T ]
3.79 toDebugString
Returns a string that contains debug information about the RDD and its dependencies.
Listing 3.154: Variants
def toDebugString : String
3.80 toJavaRDD
Embeds this RDD object within a JavaRDD object and returns it.
Listing 3.156: Variants
def toJavaRDD () : JavaRDD [ T ]
3.81 top
Utilizes the implicit ordering of T to determine the top k values and returns them as an
array.
Listing 3.158: Variants
def top ( num : Int ) ( implicit ord : Ordering [ T ]) : Array [ T ]
3.82 toString
Assembles a human-readable textual description of the RDD.
Listing 3.160: Variants
override def toString : String
3.83 union, ++
Performs the standard set operation: A ∪ B
Listing 3.162: Variants
def ++( other : RDD [ T ]) : RDD [ T ]
def union ( other : RDD [ T ]) : RDD [ T ]
val a = sc.parallelize(1 to 3, 1)
val b = sc.parallelize(5 to 7, 1)
(a ++ b).collect
Array[Int] = Array(1, 2, 3, 5, 6, 7)
3.84 unpersist
Dematerializes the RDD (i.e. erases all its data items from hard disk and memory). However, the RDD object remains. If it is referenced in a computation, Spark will regenerate it automatically using the stored dependency graph.
Listing 3.164: Variants
def unpersist ( blocking : Boolean = true ) : RDD [ T ]
3.85 values[Pair]
Extracts the values from all contained tuples and returns them in a new RDD.
Listing 3.166: Variants
def values : RDD [ V ]
3.87 zip
Joins two RDDs by combining the i-th item of the one with the i-th item of the other. The resulting RDD consists of two-component tuples, which are interpreted as key-value pairs by the methods provided by the PairRDDFunctions extension.
Listing 3.170: Variants
def zip [ U : ClassTag ]( other : RDD [ U ]) : RDD [( T , U ) ]
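A short sketch (not from the original listings); note that both RDDs must have the same number of partitions and the same number of elements per partition:

val a = sc.parallelize(1 to 5, 1)
val b = sc.parallelize(101 to 105, 1)
a.zip(b).collect
// Array((1,101), (2,102), (3,103), (4,104), (5,105))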
3.88 zipPartitions
Similar to zip, but provides more control over the zipping process.
Listing 3.172: Variants
def zipPartitions[B: ClassTag, V: ClassTag](rdd2: RDD[B])(f: (Iterator[T], Iterator[B]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, V: ClassTag](rdd2: RDD[B], preservesPartitioning: Boolean)(f: (Iterator[T], Iterator[B]) => Iterator[V]): RDD[V]
def zipPartitions[B: ClassTag, C: ClassTag, V: ClassTag](rdd2: RDD[B], rdd3: RDD[C])(f: (Iterator[T], Iterator[B], Iterator[C]) => Iterator[V]): RDD[V]
val a = sc.parallelize(0 to 9, 3)
val b = sc.parallelize(10 to 19, 3)
val c = sc.parallelize(100 to 109, 3)
def myfunc(aiter: Iterator[Int], biter: Iterator[Int], citer: Iterator[Int]): Iterator[String] =
{
  var res = List[String]()
  while (aiter.hasNext && biter.hasNext && citer.hasNext)
  {
    val x = aiter.next + " " + biter.next + " " + citer.next
    res ::= x
  }
  res.iterator
}
a.zipPartitions(b, c)(myfunc).collect
res50: Array[String] = Array(2 12 102, 1 11 101, 0 10 100, 5 15 105, 4 14 104, 3 13 103, 9 19 109, 8 18 108, 7 17 107, 6 16 106)
4 Further Topics
4.1 Reading from HDFS
This requires you to upload your data first into HDFS using whatever method you prefer.
Listing 4.1: Examples
val sp = sc.textFile("hdfs://localhost:8020/user/cloudera/sp_data")
sp.toDebugString
res25: String =
MappedRDD[32] at textFile at <console>:12 (1 partitions)
  HadoopRDD[31] at textFile at <console>:12 (1 partitions)
sp.collect
res24: Array[String] = Array(A MIDSUMMER-NIGHT'S DREAM, "", "Now, fair Hippolyta, our nuptial hour ", "Draws on apace: four happy days bring in ", "Another moon; but O! methinks how slow ", This old moon wanes; she lingers my desires,, "Like to a step-dame, or a dowager ", Long withering out a young man's revenue., "", Four days will quickly steep themselves in night;, Four nights will quickly dream away the time;, "And then the moon, like to a silver bow ", "New-bent in heaven, shall behold the night ", Of our solemnities., "", Go, Philostrate,, Stir up the Athenian youth to merriments;, Awake the pert and nimble spirit of mirth;, Turn melancholy forth to funerals;, The pale companion is not for our pomp., "", Hippolyta, I woo'd thee with my sword,, And won thy lo...