Spark for Python Developers - Sample Chapter
Amit Nandi
Preface
Spark for Python Developers aims to combine the elegance and flexibility of Python
with the power and versatility of Apache Spark. Spark is written in Scala and runs
on the Java virtual machine. It is nevertheless polyglot and offers bindings and APIs
for Java, Scala, Python, and R. Python is a well-designed language with an extensive
set of specialized libraries. This book looks at PySpark within the PyData ecosystem.
Some of the prominent PyData libraries include Pandas, Blaze, Scikit-Learn,
Matplotlib, Seaborn, and Bokeh. These libraries are open source; they are developed,
used, and maintained by the community of data scientists and Python developers.
PySpark integrates well with the PyData ecosystem, as endorsed by the Anaconda
Python distribution. The book puts forward a journey to build data-intensive apps
along with an architectural blueprint that covers the following steps: first, set up the
base infrastructure with Spark. Second, acquire, collect, process, and store the data.
Third, gain insights from the collected data. Fourth, stream live data and process it in
real time. Finally, visualize the information.
The objective of the book is to learn about PySpark and PyData libraries by building
apps that analyze the Spark community's interactions on social networks. The focus
is on Twitter data.
Chapter 2, Building Batch and Streaming Apps with Spark, lays the foundation of the
Data Intensive Apps Architecture. It describes the five layers of the apps architecture
blueprint: infrastructure, persistence, integration, analytics, and engagement. We
establish API connections with three social networks: Twitter, GitHub, and Meetup.
This chapter provides the tools to connect to these three nontrivial APIs so that you
can create your own data mashups at a later stage.
Chapter 3, Juggling Data with Spark, covers how to harvest data from Twitter and
process it using Pandas, Blaze, and SparkSQL with their respective implementations
of the dataframe data structure. We proceed with further investigations and
techniques using Spark SQL, leveraging the Spark dataframe data structure.
Chapter 4, Learning from Data Using Spark, gives an overview of the ever expanding
library of algorithms of Spark MLlib. It covers supervised and unsupervised
learning, recommender systems, optimization, and feature extraction algorithms.
We put the Twitter-harvested dataset through K-means clustering with Python
Scikit-Learn and Spark MLlib in order to segregate the tweets relevant to Apache Spark.
Chapter 5, Streaming Live Data with Spark, lays down the foundation of streaming
architecture apps and describes their challenges, constraints, and benefits. We
illustrate the streaming concepts with TCP sockets, followed by live tweet ingestion
and processing directly from the Twitter firehose. We also describe Flume, a reliable,
flexible, and scalable data ingestion and transport pipeline system. The combination
of Flume, Kafka, and Spark delivers unparalleled robustness, speed, and agility in an
ever-changing landscape. We end the chapter with some remarks and observations
on two streaming architectural paradigms, the Lambda and Kappa architectures.
Chapter 6, Visualizing Insights and Trends, focuses on a few key visualization
techniques. It covers how to build word clouds and expose their intuitive power
to reveal a lot of the key words, moods, and memes carried through thousands of
tweets. We then focus on interactive mapping visualizations using Bokeh. We build
a world map from the ground up and create a scatter plot of critical tweets. Our final
visualization is to overlay an actual Google map of London, highlighting upcoming
meetups and their respective topics.
Installing and enabling Spark, and the PyData libraries such as Pandas,
Scikit-Learn, Blaze, Matplotlib, and Bokeh.
The last decade has seen the rise and dominance of data-driven behemoths such as
Amazon, Google, Twitter, LinkedIn, and Facebook. These corporations, by seeding,
sharing, or disclosing their infrastructure concepts, software practices, and data
processing frameworks, have fostered a vibrant open source software community.
This has transformed enterprise technology, systems, and software architecture.
It includes new infrastructure and DevOps (short for development and
operations) concepts, leveraging virtualization, cloud technology, and
software-defined networks.
To process petabytes of data, Hadoop was developed and open sourced, taking
its inspiration from the Google File System (GFS) and the adjoining distributed
computing framework, MapReduce. Overcoming the complexities of scaling while
keeping costs under control has also led to a proliferation of new data stores.
Examples of recent database technology include Cassandra, a columnar
database; MongoDB, a document database; and Neo4J, a graph database.
Hadoop, thanks to its ability to process huge datasets, has fostered a vast ecosystem
to query data more iteratively and interactively with Pig, Hive, Impala, and Tez.
Hadoop is cumbersome as it operates only in batch mode using MapReduce. Spark
is creating a revolution in the analytics and data processing realm by targeting the
shortcomings of disk input-output and bandwidth-intensive MapReduce jobs.
Spark is written in Scala, and therefore integrates natively with the Java Virtual
Machine (JVM) powered ecosystem. Spark provided a Python API and bindings early
on through PySpark. The Spark architecture and ecosystem is inherently
polyglot, with an obvious strong presence of Java-led systems.
This book will focus on PySpark and the PyData ecosystem. Python is one of the
preferred languages in the academic and scientific community for data-intensive
processing. Python has developed a rich ecosystem of libraries and tools in data
manipulation with Pandas and Blaze, in Machine Learning with Scikit-Learn, and in
data visualization with Matplotlib, Seaborn, and Bokeh. Hence, the aim of this book
is to build an end-to-end architecture for data-intensive applications powered by
Spark and Python. In order to put these concepts into practice, we will analyze social
networks such as Twitter, GitHub, and Meetup. We will focus on the activities and
social interactions of the Spark and open source software community by tapping
into these three networks.
Building data-intensive applications requires highly scalable infrastructure, polyglot
storage, seamless data integration, multiparadigm analytics processing, and efficient
visualization. The following paragraph describes the data-intensive app architecture
blueprint that we will adopt throughout the book. It is the backbone of the book.
We will discover Spark in the context of the broader PyData ecosystem.
Downloading the example code
You can download the example code files for all Packt books you have
purchased from your account at http://www.packtpub.com. If you
purchased this book elsewhere, you can visit http://www.packtpub.com/support
and register to have the files e-mailed directly to you.
The following screenshot depicts the five layers of the Data Intensive
App Framework:
- Infrastructure layer
- Persistence layer
- Integration layer
- Analytics layer
- Engagement layer
From the bottom up, let's go through the layers and their main purpose.
Infrastructure layer
The infrastructure layer is primarily concerned with virtualization, scalability,
and continuous integration. In practical terms, with respect to virtualization, we
will go through building our own development environment as a VirtualBox virtual
machine powered by Spark and the Anaconda distribution of Python. If
we wish to scale from there, we can create a similar environment in the cloud. The
practice of creating a segregated development environment and moving into test
and production deployment can be automated and can be part of a continuous
integration cycle powered by DevOps tools such as Vagrant, Chef, Puppet, and
Docker. Docker is a very popular open source project that eases the installation and
deployment of new environments. The book will be limited to building the virtual
machine using VirtualBox. From a data-intensive app architecture point of view, we
are describing the essential steps of the infrastructure layer by mentioning scalability
and continuous integration beyond just virtualization.
Persistence layer
The persistence layer manages the various repositories in accordance with data needs
and shapes. It ensures the set up and management of the polyglot data stores. It
includes relational database management systems such as MySQL and PostgreSQL;
key-value data stores such as Hadoop, Riak, and Redis; columnar databases such as
HBase and Cassandra; document databases such as MongoDB and Couchbase; and
graph databases such as Neo4j. The persistence layer manages various filesystems
such as Hadoop's HDFS. It interacts with various storage systems from native hard
drives to Amazon S3. It manages various file storage formats such as csv, json, and
parquet, which is a column-oriented format.
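As a brief, hedged sketch of what this layer handles, the following snippet reads a JSON file into a Spark dataframe and writes it back out in the column-oriented Parquet format; it assumes an existing SparkContext named sc, and the file paths are purely illustrative:

from pyspark.sql import SQLContext

# illustrative sketch: read JSON data and persist it as Parquet
sqlContext = SQLContext(sc)
events = sqlContext.read.json('file:///home/an/data/events.json')    # hypothetical JSON input
events.write.parquet('file:///home/an/data/events.parquet')          # column-oriented output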
Integration layer
The integration layer focuses on data acquisition, transformation, quality,
persistence, consumption, and governance. It is essentially driven by the
following five Cs: connect, collect, correct, compose, and consume.
The five steps describe the lifecycle of data. They are focused on how to acquire the
dataset of interest, explore it, iteratively refine and enrich the collected information,
and get it ready for consumption. So, the steps perform the following operations:
Connect: Targets the best way to acquire data from the various data sources,
APIs offered by these sources, the input format, input schemas if they exist,
the rate of data collection, and limitations from providers
Collect: Looks at which data to store where and in what format, to ease data
composition and consumption at later stages
Correct: Focuses on transforming the collected data for further processing
and ensuring that its quality and consistency are maintained
Compose: Concentrates on how to mash up the various datasets collected
and enrich the information in order to build a compelling data-driven product
Consume: Takes care of data provisioning and rendering and how the right
data reaches the right individual at the right time
Control: This sixth, additional step will sooner or later be required as the
data, the organization, and the participants grow; it is about ensuring
data governance
The following diagram depicts the iterative process of data acquisition and
refinement for consumption:
Analytics layer
The analytics layer is where Spark processes data with the various models,
algorithms, and machine learning pipelines in order to derive insights. For our
purpose, in this book, the analytics layer is powered by Spark. We will delve
deeper in subsequent chapters into the merits of Spark. In a nutshell, what makes
it so powerful is that it allows multiple paradigms of analytics processing in a
single unified platform. It allows batch, streaming, and interactive analytics. Batch
processing on large datasets with longer latency periods allows us to extract patterns
and insights that can feed into real-time events in streaming mode. Interactive and
iterative analytics are more suited for data exploration. Spark offers bindings and
APIs in Python and R. With its SparkSQL module and the Spark Dataframe, it offers
a very familiar analytics interface.
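As a small, hedged illustration of this familiar interface, the following sketch registers a handful of made-up tweet records as a temporary table and queries them with Spark SQL; it assumes the SparkContext sc provided by the PySpark shell, and the records are purely illustrative:

from pyspark.sql import SQLContext, Row

sqlContext = SQLContext(sc)
# illustrative records only
rows = sc.parallelize([Row(user='spark_fan', lang='en', retweets=3),
                       Row(user='pydata_dev', lang='en', retweets=7)])
tweets = sqlContext.createDataFrame(rows)
tweets.registerTempTable('tweets')
sqlContext.sql('SELECT user, retweets FROM tweets WHERE retweets > 5').show()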
Engagement layer
The engagement layer interacts with the end user and provides dashboards,
interactive visualizations, and alerts. We will focus here on the tools provided by
the PyData ecosystem such as Matplotlib, Seaborn, and Bokeh.
Understanding Spark
Hadoop scales horizontally as the data grows. Hadoop runs on commodity
hardware, so it is cost-effective. Intensive data applications are enabled by scalable,
distributed processing frameworks that allow organizations to analyze petabytes of
data on large commodity clusters. Hadoop is the first open source implementation
of map-reduce. Hadoop relies on a distributed framework for storage called HDFS
(Hadoop Distributed File System). Hadoop runs map-reduce tasks in batch jobs.
Hadoop requires persisting the data to disk at each map, shuffle, and reduce
process step. The overhead and the latency of such batch jobs adversely impact
the performance.
Spark is a fast, distributed general analytics computing engine for large-scale data
processing. The major breakthrough from Hadoop is that Spark allows data sharing
between processing steps through in-memory processing of data pipelines.
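The following minimal sketch hints at this in-memory data sharing; it assumes an existing SparkContext sc, and the numbers are purely illustrative:

# cache the base RDD so that subsequent steps reuse it from memory
data = sc.parallelize(range(1, 1000001)).cache()
evens = data.filter(lambda n: n % 2 == 0)
print(evens.count())   # the first action materializes and caches data
print(evens.sum())     # later actions reuse the in-memory copy of data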
Spark is unique in that it allows four different styles of data analysis and processing.
Spark can be used in:
Interactive: This mode is used for data exploration, as large chunks of data
are held in memory and Spark responds very quickly
Spark operates in three modes: one standalone mode on a single machine, and
two distributed modes on a cluster of machines, either on YARN, the Hadoop
distributed resource manager, or on Mesos, the open source cluster manager
developed at Berkeley concurrently with Spark.
Spark libraries
Spark comes with batteries included, with some powerful libraries:
Spark Streaming: This is for near real-time analysis of data using micro
batches and sliding windows on incoming streams of data (see the sketch below)
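The following is a minimal sketch of such micro-batch and window processing, assuming the SparkContext sc from the shell; the host and port of the text stream are purely illustrative:

from pyspark.streaming import StreamingContext

ssc = StreamingContext(sc, 10)                      # 10-second micro batches
lines = ssc.socketTextStream('localhost', 9999)     # hypothetical TCP source
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKeyAndWindow(lambda a, b: a + b, None, 60, 20))  # 60s window, sliding every 20s
counts.pprint()
ssc.start()
ssc.awaitTermination()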
PySpark in action
Spark is written in Scala. The whole Spark ecosystem naturally leverages the JVM
environment and capitalizes on HDFS natively. Hadoop HDFS is one of the many
data stores supported by Spark. Spark is data store agnostic and has, from the
beginning, interacted with multiple data sources, types, and formats.
PySpark is not a transcribed version of Spark on a Java-enabled dialect of Python
such as Jython. PySpark provides integrated API bindings around Spark and enables
full usage of the Python ecosystem within all the nodes of the cluster with the pickle
Python serialization and, more importantly, supplies access to the rich ecosystem of
Python's machine learning libraries such as Scikit-Learn or data processing such
as Pandas.
When we initialize a Spark program, the first thing it must do is create a
SparkContext object, which tells Spark how to access the cluster. The Python
program creates its SparkContext through PySpark. Py4J is the gateway that binds
the Python program to the Spark JVM SparkContext. The JVM SparkContext
serializes the application code and the closures and sends them to the cluster for
execution. The cluster manager allocates resources, schedules tasks, and ships the
closures to the Spark workers in the cluster, which activate Python interpreters as
required. On each machine, the Spark worker spawns an executor that controls
computation, storage, and cache.
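A minimal sketch of that first step in a standalone Python program might look like the following; the application name and the local master are illustrative choices, not prescribed by the book:

from pyspark import SparkConf, SparkContext

# create the SparkContext that binds the Python driver to the Spark JVM via Py4J
conf = SparkConf().setAppName('AN_Spark_app').setMaster('local[*]')
sc = SparkContext(conf=conf)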
Here's an example of how the Spark driver manages both the PySpark context and
the Spark context with its local filesystems and its interactions with the Spark worker
through the cluster manager:
RDDs are created from a data source such as an HDFS file or a DB query.
There are three ways to create an RDD:
- Parallelizing an existing Python collection from the driver program
- Referencing a dataset in an external storage system, such as an HDFS file
- Transforming an existing RDD into a new RDD
RDDs are transformed with functions such as map or filter, which yield
new RDDs.
An action such as first, take, collect, or count on an RDD will deliver the
results into the Spark driver. The Spark driver is the client through which
the user interacts with the Spark cluster.
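Putting these three stages together, a minimal sketch could look like the following; it assumes the SparkContext sc from the shell, and the HDFS path is hypothetical:

# create an RDD from a data source
lines = sc.textFile('hdfs:///data/sample.txt')
# transformations yield new RDDs
short_lines = lines.filter(lambda line: len(line) < 80)
upper_lines = short_lines.map(lambda line: line.upper())
# actions return results to the Spark driver
print(upper_lines.count())
print(upper_lines.take(3))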
Understanding Anaconda
Anaconda is a widely used free Python distribution maintained by Continuum
(https://www.continuum.io/). We will use the prevailing software stack provided
by Anaconda to generate our apps. In this book, we will use PySpark and the
PyData ecosystem. The PyData ecosystem is promoted, supported, and maintained
by Continuum and powered by the Anaconda Python distribution. The Anaconda
Python distribution essentially saves time and aggravation in the installation of
the Python environment; we will use it in conjunction with Spark. Anaconda has
its own package management that supplements the traditional pip install and
easy-install. Anaconda comes with batteries included, namely some of the most
important packages such as Pandas, Scikit-Learn, Blaze, Matplotlib, and Bokeh. An
upgrade of any of the installed libraries is a simple command at the console:
$ conda update <package name>
Each storage backend serves a specific purpose depending on the nature of the
data to be handled. The MySQL RDBMS is used for standard tabular processed
information that can be easily queried using SQL. As we will be processing a lot of
JSON-type data from various APIs, the easiest way to store them is in a document
database. For real-time and time-series-related information, Cassandra is best
suited as a columnar database.
The following diagram gives a view of the environment we will build and use
throughout the book:
3. After accepting the license terms, you will be asked to specify the install
location (which defaults to ~/anaconda).
4. After the self-extraction is finished, you should add the anaconda binary
directory to your PATH environment variable:
# add anaconda to PATH
$ export PATH=~/anaconda/bin:$PATH
Installing Java 8
Spark runs on the JVM and requires the Java SDK (short for Software Development
Kit) and not the JRE (short for Java Runtime Environment), as we will build apps
with Spark. The recommended version is Java Version 7 or higher. Java 8 is the most
suitable, as it includes many of the functional programming techniques available
with Scala and Python.
To install Java 8, follow these steps:
1. Install Oracle Java 8 using the following commands:
# install oracle java 8
$ sudo apt-get install software-properties-common
$ sudo add-apt-repository ppa:webupd8team/java
$ sudo apt-get update
$ sudo apt-get install oracle-java8-installer
2. Set the JAVA_HOME environment variable and ensure that the Java program is
on your PATH.
3. Check that JAVA_HOME is properly set:
# check JAVA_HOME
$ echo $JAVA_HOME
Installing Spark
Head over to the Spark download page at http://spark.apache.org/downloads.html.
The Spark download page offers the possibility to download earlier versions of
Spark and different package and download types. We will select the latest release,
pre-built for Hadoop 2.6 and later. The easiest way to install Spark is to use a Spark
package prebuilt for Hadoop 2.6 and later, rather than build it from source. Move the
file to the directory ~/spark under your home directory.
Download the latest release of Spark, Spark 1.5.2, released on November 9, 2015:
1. Select the Spark release 1.5.2 (Nov 09 2015).
2. Choose the package type Pre-built for Hadoop 2.6 and later.
3. Choose the download type Direct Download.
4. Download Spark: spark-1.5.2-bin-hadoop2.6.tgz.
5. Verify this release using the 1.5.2 signatures and checksums.
This can also be accomplished by running:
# download spark
$ wget http://d3kbcqa49mib13.cloudfront.net/spark-1.5.2-bin-hadoop2.6.tgz
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 1.5.2
      /_/
Using Python version 2.7.6 (default, Mar 22 2014 22:59:56)
SparkContext available as sc.
>>>
The interpreter will have already provided us with a Spark context object, sc,
which we can see by running:
>>> print(sc)
<pyspark.context.SparkContext object at 0x7f34b61c4e50>
The PySpark shell is launched with the command ./bin/pyspark from the Spark
installation directory; the book's examples are kept under
/home/an/spark/spark-1.5.0-bin-hadoop2.6/examples/AN_Spark.
In this program, we are first reading the file from the directory /home/an/
Documents/A00_Documents/Spark4Py 20150315 into file_in.
We are then introspecting the file by counting the number of lines and the number of
characters per line.
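The corresponding first cells might look like the following sketch, assuming the SparkContext sc from the PySpark shell and the directory mentioned above:

# read the chapter text into an RDD of lines
file_in = sc.textFile('/home/an/Documents/A00_Documents/Spark4Py 20150315')
# count the number of lines
print('number of lines in file: %s' % file_in.count())
# count the number of characters per line and sum them up
chars = file_in.map(lambda line: len(line))
print('number of characters in file: %s' % chars.reduce(lambda a, b: a + b))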
We are splitting the input file into words and getting them in lower case. For our
word count purpose, we are choosing words longer than three characters in order to
prevent shorter and much more frequent words such as the, and, and for from skewing
the count in their favor. Generally, they are considered stop words and should be
filtered out in any language processing task.
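A sketch of this step, building on the file_in RDD above:

# split each line into lowercased words and keep only words longer than three characters
words = file_in.flatMap(lambda line: line.lower().split())
words = words.filter(lambda word: len(word) > 3)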
At this stage, we are getting ready for the MapReduce steps. We map a value of 1 to
each word and reduce by key, summing the occurrences of each unique word.
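And the MapReduce step itself, again as a sketch:

# map each word to a count of 1, then sum the counts per unique word
words = words.map(lambda word: (word, 1)).reduceByKey(lambda a, b: a + b)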
Here are illustrations of the code in the IPython Notebook. The first 10 cells
are preprocessing the word count on the dataset, which is retrieved from the
local file directory.
Swap the word count tuples in the format (count, word) in order to sort by count,
which is now the primary key of the tuple:
# create tuple (count, word) and sort in descending order
words = words.map(lambda x: (x[1], x[0])).sortByKey(False)
# take top 20 words by frequency
words.take(20)
In order to display our result, we are creating the tuple (count, word) and
displaying the top 20 most frequently used words in descending order:
Here, we visualize the most frequent words by plotting them in a bar chart. We have
to first swap the tuple from the original (count, word) to (word, count):
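A hedged sketch of that plotting step, assuming the sorted words RDD of (count, word) tuples from the previous step:

import matplotlib.pyplot as plt

top20 = words.take(20)
# swap back from (count, word) to separate word labels and counts for plotting
labels = [word for (count, word) in top20]
counts = [count for (count, word) in top20]
plt.figure(figsize=(12, 6))
plt.bar(range(len(labels)), counts)
plt.xticks(range(len(labels)), labels, rotation=60)
plt.title('Top 20 words in the first chapter')
plt.show()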
So here you have it: the most frequent words used in the first chapter are Spark,
followed by Data and Anaconda.
Introduction to Big Data with Apache Spark, by Professor Anthony D. Joseph, can
be found at https://www.edx.org/course/introduction-big-data-apache-spark-uc-berkeleyx-cs100-1x
The course labs were executed on IPython Notebooks powered by PySpark. They can
be found in the following GitHub repository: https://github.com/spark-mooc/
mooc-setup/.
Once you have set up Vagrant on your machine, follow these instructions to get
started: https://docs.vagrantup.com/v2/getting-started/index.html.
Clone the spark-mooc/mooc-setup/ GitHub repository into your work directory
and launch the command $ vagrant up from within the cloned directory.
Be aware that the version of Spark may be outdated, as the Vagrantfile may not be
up to date.
You will see an output similar to this:
C:\Programs\spark\edx1001\mooc-setup-master>vagrant up
Bringing machine 'sparkvm' up with 'virtualbox' provider...
==> sparkvm: Checking if box 'sparkmooc/base' is up to date...
==> sparkvm: Clearing any previously set forwarded ports...
==> sparkvm: Clearing any previously set network interfaces...
==> sparkvm: Preparing network interfaces based on configuration...
sparkvm: Adapter 1: nat
==> sparkvm: Forwarding ports...
sparkvm: 8001 => 8001 (adapter 1)
sparkvm: 4040 => 4040 (adapter 1)
sparkvm: 22 => 2222 (adapter 1)
==> sparkvm: Booting VM...
==> sparkvm: Waiting for machine to boot. This may take a few minutes...
sparkvm: SSH address: 127.0.0.1:2222
sparkvm: SSH username: vagrant
sparkvm: SSH auth method: private key
sparkvm: Warning: Connection timeout. Retrying...
sparkvm: Warning: Remote connection disconnect. Retrying...
==> sparkvm: Machine booted and ready!
==> sparkvm: Checking for guest additions in VM...
==> sparkvm: Setting hostname...
==> sparkvm: Mounting shared folders...
sparkvm: /vagrant => C:/Programs/spark/edx1001/mooc-setup-master
==> sparkvm: Machine already provisioned. Run `vagrant provision` or use
the `--provision`
==> sparkvm: to force provisioning. Provisioners marked to run always
will still run.
C:\Programs\spark\edx1001\mooc-setup-master>
Allowing easy sharing of the development environment image with all its
dependencies using Docker Hub. Docker Hub is similar to GitHub in that
it allows easy cloning and version control. The snapshot image of the
configured environment can be the baseline for further enhancements.
Docker offers the ability to clone and deploy an environment from the Dockerfile.
You can find an example Dockerfile with a PySpark and Anaconda setup at the
following address: https://hub.docker.com/r/thisgokeboysef/pyspark-docker/~/dockerfile/.
Pull the Docker image built from the Dockerfile mentioned earlier with the
following command:
$ docker pull thisgokeboysef/pyspark-docker
Other great sources of information on how to dockerize your environment can be
found at Lab41. The GitHub repository contains the necessary code:
https://github.com/Lab41/ipython-spark-docker
Summary
We set the context of building data-intensive apps by describing the overall
architecture structured around the infrastructure, persistence, integration, analytics,
and engagement layers. We also discussed Spark and Anaconda with their respective
building blocks. We set up an environment in a VirtualBox with Anaconda and
Spark and demonstrated a word count app using the text content of the first chapter
as input.
In the next chapter, we will delve more deeply into the architecture blueprint for
data-intensive apps and tap into the Twitter, GitHub, and Meetup APIs to get a feel
of the data we will be mining with Spark.