R Data Analysis Cookbook - Sample Chapter
This book empowers you by showing you ways to use R to generate professional analysis reports. It provides
examples for various important analysis and machine-learning tasks that you can try out with associated
and readily available data. The book also teaches you to quickly adapt the example code for your own needs
and save yourself the time needed to construct code from scratch.
Data analytics with R has emerged as a very important focus for organizations of all kinds. R enables even
those with only an intuitive grasp of the underlying concepts, without a deep mathematical background,
to unleash powerful and detailed examinations of their data.
Viswa Viswanathan
Shanthi Viswanathan
Viswa would like to express deep gratitude to professors Amitava Bagchi and Anup Sen,
who were inspirational forces during his early research career. He is also grateful to
several extremely intelligent colleagues, notable among them being Rajesh Venkatesh,
Dan Richner, and Sriram Bala, who significantly shaped his thinking. His aunt,
Analdavalli; his sister, Sankari; and his wife, Shanthi, taught him much about hard work,
and even the little he has absorbed has helped him immensely. His sons, Nitin and
Siddarth, have helped with numerous insightful comments on various topics.
Shanthi Viswanathan is an experienced technologist who has delivered technology
management and enterprise architecture consulting to many enterprise customers. She has
worked for Infosys Technologies, Oracle Corporation, and Accenture. As a consultant,
Shanthi has helped several large organizations, such as Canon, Cisco, Celgene, Amway,
Time Warner Cable, and GE among others, in areas such as data architecture and
analytics, master data management, service-oriented architecture, business process
management, and modeling. When she is not in front of her Mac, Shanthi spends time
hiking in the suburbs of NY/NJ, working in the garden, and teaching yoga.
Shanthi would like to thank her husband, Viswa, for all the great discussions on
numerous topics during their hikes together and for exposing her to R and Java. She
would also like to thank her sons, Nitin and Siddarth, for getting her into the data
analytics world.
Chapter 5, Can You Simplify That? Data Reduction Techniques, covers recipes for data
reduction. It presents cluster analysis through K-means and hierarchical clustering. It also
covers principal component analysis.
Chapter 6, Lessons from History - Time Series Analysis, covers recipes to work with date
and date/time objects, create and plot time-series objects, decompose, filter and smooth
time series, and perform ARIMA analysis.
Chapter 7, It's All About Your Connections - Social Network Analysis, is about social
networks. It includes recipes to acquire social network data using public APIs, create and
plot social networks, and compute important network metrics.
Chapter 8, Put Your Best Foot Forward - Document and Present Your Analysis,
considers techniques to disseminate your analysis. It includes recipes to use R Markdown
and knitr to generate reports, to use Shiny to create interactive applications that enable
your audience to directly interact with the data, and to create presentations with RPres.
Chapter 9, Work Smarter, Not Harder - Efficient and Elegant R Code, addresses the
issue of writing efficient and elegant R code in the context of handling large data. It
covers recipes to use the apply family of functions, to use the plyr package, and to use
data tables to slice and dice data.
Chapter 10, Where in the World? Geospatial Analysis, covers the topic of exploiting
R's powerful features to handle spatial data. It covers recipes to use RGoogleMaps to fetch
Google Maps and superimpose our own data on them, to import ESRI shapefiles into R
and plot them, to import maps from the maps package, and to use the sp package to create
and plot spatial data frame objects.
Chapter 11, Playing Nice - Connecting to Other Systems, covers the topic of
connecting R to other systems. It includes recipes for interconnecting R with Java,
Excel, and with relational and NoSQL databases (MySQL and MongoDB, respectively).
Introduction
Data analysts need to load data from many different input formats into R. Although R has its
own native data format, data usually exists in text formats, such as CSV (Comma Separated
Values), JSON (JavaScript Object Notation), and XML (Extensible Markup Language). This
chapter provides recipes to load such data into your R system for processing.
Very rarely can we start analyzing data immediately after loading it. Often, we will need to
preprocess the data to clean and transform it before embarking on analysis. This chapter
provides recipes for some common cleaning and preprocessing steps.
Getting ready
If you have not already downloaded the files for this chapter, do it now and ensure that the
auto-mpg.csv file is in your R working directory.
How to do it...
Reading data from .csv files can be done using the following commands:
1. Read the data from auto-mpg.csv, which includes a header row:
> auto <- read.csv("auto-mpg.csv", header=TRUE, sep = ",")
How it works...
The read.csv() function creates a data frame from the data in the .csv file. If we pass
header=TRUE, then the function uses the very first row to name the variables in the resulting
data frame:
> names(auto)
[1] "No"           "mpg"          "cylinders"
[4] "displacement" "horsepower"   "weight"
[7] "acceleration" "model_year"   "car_name"
The header and sep parameters allow us to specify whether the .csv file has headers and
the character used in the file to separate fields. Since header=TRUE and sep=","
are the defaults for the read.csv() function, we can omit them in the code example.
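For instance, the following shorter call, relying on those defaults, reads the same file:
> auto <- read.csv("auto-mpg.csv")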
There's more...
The read.csv() function is a specialized form of read.table(). The latter uses whitespace
as the default field separator. We discuss a few important optional arguments to these functions.
If your data file has no header row, pass header=FALSE; the read.csv() function then
assigns default variable names V1, V2, and so on:
> auto <- read.csv("auto-mpg-noheader.csv", header=FALSE)
> head(auto,2)
  V1 V2 V3  V4 V5   V6   V7 V8                  V9
1  1 28  4 140 90 2264 15.5 71 chevrolet vega 2300
2  2 19  3  70 97 2330 13.5 72     mazda rx2 coupe
If your file does not have a header row, and you omit the header=FALSE optional argument,
the read.csv() function uses the first row for variable names and ends up constructing
variable names by adding X to the actual data values in the first row. Note the meaningless
variable names in the following fragment:
> auto <- read.csv("auto-mpg-noheader.csv")
> head(auto,2)
  X1 X28 X4 X140 X90 X2264 X15.5 X71 chevrolet.vega.2300
1  2  19  3   70  97  2330  13.5  72     mazda rx2 coupe
2  ...
If the data file uses a specified string (such as "N/A" or "NA" for example) to indicate
the missing values, you can specify that string as the na.strings argument, as in
na.strings= "N/A" or na.strings = "NA".
By default, read.csv() reads all strings as factors. To selectively treat variables as
characters, you can load the file with the defaults and then use as.character() to
convert the requisite factor variables to characters.
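For example, a minimal sketch of this conversion, assuming the car_name column was read in as a factor:
> auto <- read.csv("auto-mpg.csv")
> auto$car_name <- as.character(auto$car_name)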
Getting ready
If the XML package is not already installed in your R environment, install the package now
as follows:
> install.packages("XML")
How to do it...
XML data can be read by following these steps:
1. Load the library and initialize:
> library(XML)
> url <- "http://www.w3schools.com/xml/cd_catalog.xml"
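The remaining steps are sketched below; the object names xmldoc, rootNode, and
cd.catalog match the discussion that follows, and xmlValue (from the XML package)
extracts the text content of an element:
2. Parse the XML content and get access to the root node:
> xmldoc <- xmlParse(url)
> rootNode <- xmlRoot(xmldoc)
3. Extract the data from the root node and convert it into a data frame:
> data <- xmlSApply(rootNode, function(x) xmlSApply(x, xmlValue))
> cd.catalog <- data.frame(t(data), row.names = NULL)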
How it works...
The xmlParse function returns an object of the XMLInternalDocument class, which is a
C-level internal data structure.
The xmlRoot() function gets access to the root node and its elements. We check the first
element of the root node:
> rootNode[1]
$CD
<CD>
  <TITLE>Empire Burlesque</TITLE>
  <ARTIST>Bob Dylan</ARTIST>
  <COUNTRY>USA</COUNTRY>
  <COMPANY>Columbia</COMPANY>
  <PRICE>10.90</PRICE>
  <YEAR>1985</YEAR>
</CD>
To extract data from the root node, we use the xmlSApply() function iteratively over all the
children of the root node. The xmlSApply function returns a matrix.
To convert the preceding matrix into a data frame, we transpose the matrix using the t()
function. We then extract the first two rows from the cd.catalog data frame:
> cd.catalog[1:2,]
             TITLE       ARTIST COUNTRY     COMPANY PRICE YEAR
1 Empire Burlesque    Bob Dylan     USA    Columbia 10.90 1985
2  Hide your heart Bonnie Tyler      UK CBS Records  9.90 1988
There's more...
XML data can be deeply nested and hence can become complex to extract. Knowledge of
XPath will be helpful to access specific XML tags. R provides several functions such as
xpathSApply and getNodeSet to locate specific elements.
The readHTMLTable() function parses the web page and returns a list of all tables that
are found on the page. For tables that have an id attribute, the function uses the id attribute
as the name of that list element.
We are interested in extracting the "10 most populous countries," which is the fifth table;
hence we use tables[[5]].
Specify the which argument to get the data from a specific table directly; R then returns a data frame instead of a list.
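A short sketch of both forms, assuming url holds the address of the page containing the tables (the world.pop name is illustrative):
> tables <- readHTMLTable(url)
> world.pop <- tables[[5]]                     # the fifth table on the page
> world.pop <- readHTMLTable(url, which = 5)   # fetch just that table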
Getting ready
R provides several packages to read JSON data, but we use the jsonlite package. Install
the package in your R environment as follows:
> install.packages("jsonlite")
If you have not already downloaded the files for this chapter, do it now and ensure that
the students.json and student-courses.json files are in your R working directory.
How to do it...
Once the files are ready, load the jsonlite package and read the files as follows:
1. Load the library:
> library(jsonlite)
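The remaining steps are sketched below; the World Bank API URL in step 3 is only illustrative of reading JSON directly from the Web, and which list element holds the data frame in step 4 depends on the structure of the JSON document:
2. Read data from the JSON files:
> dat.1 <- fromJSON("students.json")
> dat.2 <- fromJSON("student-courses.json")
3. Read JSON data directly from a web page:
> url <- "http://api.worldbank.org/countries/IN/indicators/NY.GDP.MKTP.CD?format=json"
> dat <- fromJSON(url)
4. Extract the data frame enclosed in the returned list:
> dat <- dat[[2]]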
How it works...
The jsonlite package provides two key functions: fromJSON and toJSON.
The fromJSON function can load data either directly from a file or from a web page as the
preceding steps 2 and 3 show. If you get errors in downloading content directly from the Web,
install and load the httr package.
Depending on the structure of the JSON document, loading the data can vary in complexity.
If given a URL, the fromJSON function returns a list object. In the preceding list, in step 4,
we see how to extract the enclosed data frame.
Getting ready
Download the files for this chapter and store the student-fwf.txt file in your R
working directory.
How to do it...
Read the fixed-width formatted file as follows:
> student <- read.fwf("student-fwf.txt",
widths=c(4,15,20,15,4),
col.names=c("id","name","email","major","year"))
How it works...
In the student-fwf.txt file, the first column occupies 4 character positions, the second
15, and so on. The c(4,15,20,15,4) expression specifies the widths of the five columns
in the data file.
We can use the optional col.names argument to supply our own variable names.
There's more...
The read.fwf() function has several optional arguments that come in handy. We discuss a
few of these as follows:
If header=TRUE, the first row of the file is interpreted as containing the column headers.
The headers, if present, need to be separated by the character specified in the sep
argument; sep applies only to the header row.
The skip argument denotes the number of lines to skip; in this recipe, the first two lines
are skipped.
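For instance, a file whose header row is tab-separated and preceded by two lines to be skipped could be read as follows (the student-fwf-header.txt filename is illustrative):
> student <- read.fwf("student-fwf-header.txt",
             widths=c(4,15,20,15,4), header=TRUE, sep="\t", skip=2)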
Getting ready
First, create and save R objects interactively as shown in the following code. Make sure you
have write access to the R working directory:
> customer <- c("John", "Peter", "Jane")
> orderdate <- as.Date(c("2014-10-01", "2014-01-02", "2014-07-06"))
> orderamount <- c(280.00, 100.50, 40.25)
> order <- data.frame(customer, orderdate, orderamount)
> names <- c("John", "Joan")
> save(order, names, file = "test.Rdata")
> saveRDS(order, file = "order.rds")
> remove(order)
How to do it...
To be able to read data from R files and libraries, follow these steps:
1. Load data from R data files into memory:
> load("test.Rdata")
> ord <- readRDS("order.rds")
2. Load standard datasets from R libraries:
> data(iris)
> data(list = c("cars", "iris"))
In step 2, the first command loads only the iris dataset, and the second loads the
cars and iris datasets.
How it works...
The save() function saves the serialized version of the objects supplied as arguments
along with the object name. The subsequent load() function restores the saved objects
with the same object names they were saved with, to the global environment by default.
If there are existing objects with the same names in that environment, they will be replaced
without any warnings.
The saveRDS() function saves only one object. It saves the serialized version of the object
and not the object name. Hence, with the readRDS() function the saved object can be
restored into a variable with a different name from when it was saved.
There's more...
The preceding recipe has shown you how to read saved R objects. We see more options in
this section.
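The save() function also takes a list argument; a minimal sketch (the odd and even objects are illustrative, matching the file discussed next):
> odd <- c(1, 3, 5, 7)
> even <- c(2, 4, 6, 8)
> save(list = c("odd", "even"), file = "OddEven.Rdata")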
The list argument specifies a character vector containing the names of the objects to be
saved. Subsequently, loading data from the OddEven.Rdata file creates both odd and even
objects. The saveRDS() function can save only one object at a time.
An R data file can also be attached to the search path with attach(). Suppose the
order.Rdata file contains an object named order; if an object named order already
exists in the environment, attach("order.Rdata") reports that the existing object
masks the attached one:
The following object is masked _by_ .GlobalEnv:
order
Getting ready
Download the missing-data.csv file from the code files for this chapter to your R working
directory. Read the data from the missing-data.csv file while taking care to identify the
string used in the input file for missing values. In our file, missing values are shown with
empty strings:
> dat <- read.csv("missing-data.csv", na.strings="")
How to do it...
To get a data frame that has only the cases with no missing values for any variable, use the
na.omit() function:
> dat.cleaned <- na.omit(dat)
Now, dat.cleaned contains only those cases from dat that have no missing values in any
of the variables.
How it works...
The na.omit() function internally uses the is.na() function, which tells us whether
its argument is NA. When applied to a single value, it returns a single logical value;
when applied to a collection, it returns a logical vector:
> is.na(dat[4,2])
[1] TRUE
> is.na(dat$Income)
[1] FALSE FALSE FALSE FALSE FALSE TRUE FALSE FALSE FALSE
[10] FALSE FALSE FALSE TRUE FALSE FALSE FALSE FALSE FALSE
[19] FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE FALSE
There's more...
You will sometimes need to do more than just eliminate cases with any missing values.
We discuss some options in this section.
The complete.cases() function returns a logical vector with one element per row of the
data frame; an element is TRUE only when the corresponding case has no missing values:
> complete.cases(dat)
 [1]  TRUE  TRUE  TRUE FALSE  TRUE FALSE  TRUE  TRUE  TRUE
[10]  TRUE  TRUE  TRUE FALSE  TRUE  TRUE  TRUE FALSE  TRUE
[19]  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE  TRUE
Rows 4, 6, 13, and 17 have at least one missing value. Instead of using the na.omit()
function, we could have done the following as well:
> dat.cleaned <- dat[complete.cases(dat),]
> nrow(dat.cleaned)
[1] 23
Getting ready
Download the missing-data.csv file and store it in your R environment's working directory.
How to do it...
Read data and replace missing values:
> dat <- read.csv("missing-data.csv", na.strings = "")
> dat$Income.imp.mean <- ifelse(is.na(dat$Income),
mean(dat$Income, na.rm=TRUE), dat$Income)
After this, the new Income.imp.mean variable contains the Income values with every missing value replaced by the mean of the nonmissing incomes.
How it works...
The preceding ifelse() call evaluates its first argument, is.na(dat$Income). Wherever
this test is TRUE, it returns its second argument, the mean of the nonmissing incomes;
elsewhere, it returns its third argument, the original Income value.
There's more...
You cannot impute the mean when a categorical variable has missing values, so you need a
different approach. Even for numeric variables, we might sometimes not want to impute the
mean for missing values. We discuss an often used approach here.
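A sketch of the two functions follows. The random.impute helper name and the .imputed suffix are our choices; random.impute.data.frame matches the call shown next. The helper fills the missing entries of a vector with values sampled from its observed entries:

random.impute <- function(x) {
  missing <- is.na(x)                 # flag the missing positions
  n.missing <- sum(missing)           # number of values to impute
  x.obs <- x[!missing]                # the observed values
  imputed <- x
  # replace each missing value with a randomly sampled observed value
  imputed[missing] <- sample(x.obs, n.missing, replace = TRUE)
  imputed
}

random.impute.data.frame <- function(dat, cols) {
  nms <- names(dat)
  for (col in cols) {
    # add a new column named after the original, with ".imputed" appended
    name <- paste(nms[col], ".imputed", sep = "")
    dat[name] <- random.impute(dat[, col])
  }
  dat
}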
With these two functions in place, you can use the following to impute random values for both
Income and Phone_type:
> dat <- read.csv("missing-data.csv", na.strings="")
> random.impute.data.frame(dat, c(1,2))
Getting ready
Create a sample data frame:
> salary <- c(20000, 30000, 25000, 40000, 30000, 34000, 30000)
> family.size <- c(4,3,2,2,3,4,3)
> car <- c("Luxury", "Compact", "Midsize", "Luxury",
"Compact", "Compact", "Compact")
> prospect <- data.frame(salary, family.size, car)
How to do it...
The unique() function can do the job. It takes a vector or data frame as an argument and
returns an object of the same type as its argument but with duplicates removed.
Get unique values:
> prospect.cleaned <- unique(prospect)
> nrow(prospect)
[1] 7
> nrow(prospect.cleaned)
[1] 5
How it works...
The unique() function takes a vector or data frame as an argument and returns an object
of the same type with the duplicates eliminated. It returns the nonduplicated cases as
is and includes only one copy of each repeated case in the returned result.
There's more...
Sometimes we just want to identify duplicated values without necessarily removing them.
In this case, use the duplicated() function:
> duplicated(prospect)
[1] FALSE FALSE FALSE FALSE  TRUE FALSE  TRUE
From the data, we know that cases 2, 5, and 7 are duplicates of one another. Note that
only cases 5 and 7 are flagged; case 2, being the first occurrence, is not treated as
a duplicate.
To list the duplicate cases, use the following code:
> prospect[duplicated(prospect), ]
  salary family.size     car
5  30000           3 Compact
7  30000           3 Compact
Getting ready
Install the scales package and store the data-conversion.csv file from this chapter's
code files in your R environment's working directory. Then run:
> install.packages("scales")
> library(scales)
> students <- read.csv("data-conversion.csv")
How to do it...
To rescale the Income variable to the range [0,1]:
> students$Income.rescaled <- rescale(students$Income)
How it works...
By default, the rescale() function makes the lowest value(s) zero and the highest value(s)
one. It rescales all other values proportionately. The following two expressions provide
identical results:
> rescale(students$Income)
> (students$Income - min(students$Income)) /
(max(students$Income) - min(students$Income))
To rescale to a range other than [0,1], use the to argument. The following rescales
students$Income to the range [1,100]:
> rescale(students$Income, to = c(1, 100))
There's more...
When using distance-based techniques, you may need to rescale several variables. You may
find it tedious to scale one variable at a time.
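A helper function along the following lines rescales several columns at once; rescale.many matches the call shown next, and the .rescaled suffix is our naming choice:

rescale.many <- function(dat, column.nos) {
  nms <- names(dat)
  for (col in column.nos) {
    # add a rescaled copy of the column, named with a ".rescaled" suffix
    name <- paste(nms[col], ".rescaled", sep = "")
    dat[name] <- rescale(dat[, col])
  }
  dat
}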
With the preceding function defined, we can do the following to rescale the first and fourth
variables in the data frame:
> rescale.many(students, c(1,4))
Getting ready
Download the BostonHousing.csv data file and store it in your R environment's working
directory. Then read the data:
> housing <- read.csv("BostonHousing.csv")
How to do it...
To standardize all the variables in a data frame containing only numeric variables, use:
> housing.z <- scale(housing)
You can only use the scale() function on data frames containing all numeric variables.
Otherwise, you will get an error.
How it works...
When invoked as above, the scale() function computes the standard Z score for each value
(ignoring NAs) of each variable. That is, from each value it subtracts the mean and divides the
result by the standard deviation of the associated variable.
The scale() function takes two optional arguments, center and scale, whose default
values are TRUE. The following table shows the effect of these arguments:

Argument                      Effect
center=TRUE, scale=TRUE       Computes the Z score: subtracts the mean and divides by the standard deviation
center=TRUE, scale=FALSE      Only subtracts the mean from each value
center=FALSE, scale=TRUE      Divides each value by the root mean square of the variable
center=FALSE, scale=FALSE     Returns the original values
There's more...
When using distance-based techniques, you may need to rescale several variables. You may
find it tedious to standardize one variable at a time.
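A companion helper, sketched below (scale.many is our name for it, matching the call that follows), standardizes several columns at once and appends .z to each new column name:

scale.many <- function(dat, column.nos) {
  nms <- names(dat)
  for (col in column.nos) {
    # add a standardized copy of the column, named with a ".z" suffix
    name <- paste(nms[col], ".z", sep = "")
    dat[name] <- scale(dat[, col])
  }
  dat
}

> housing <- scale.many(housing, c(1, 3, 5:7))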
This will add the z values for variables 1, 3, 5, 6, and 7 with .z appended to the original
column names:
> names(housing)
 [1] "CRIM"    "ZN"      "INDUS"   "CHAS"    "NOX"     "RM"
 [7] "AGE"     "DIS"     "RAD"     "TAX"     "PTRATIO" "B"
[13] "LSTAT"   "MEDV"    "CRIM.z"  "INDUS.z" "NOX.z"   "RM.z"
[19] "AGE.z"
Getting ready
From the code files for this chapter, store the data-conversion.csv file in the working
directory of your R environment. Then read the data:
> students <- read.csv("data-conversion.csv")
How to do it...
Income is a numeric variable, and you may want to create a categorical variable from it by
creating bins. Suppose you want to label incomes of $10,000 or below as Low, incomes
between $10,000 and $31,000 as Medium, and the rest as High. We can do the following:
1. Create a vector of break points:
> b <- c(-Inf, 10000, 31000, Inf)
2. Create a vector of labels for the bins:
> names <- c("Low", "Medium", "High")
3. Cut the Income variable into the bins, attaching the labels:
> students$Income.cat <- cut(students$Income, breaks = b, labels = names)
How it works...
The cut() function uses the ranges implied by the breaks argument to infer the bins, and
names them according to the strings provided in the labels argument. In our example, the
function places incomes less than or equal to 10,000 in the first bin, incomes greater than
10,000 and less than or equal to 31,000 in the second bin, and incomes greater than 31,000
in the third bin. In other words, the first number in the interval is not included and the second
one is. The number of bins will be one less than the number of elements in breaks. The
strings in names become the factor levels of the bins.
If we leave out names, cut() uses the numbers in the second argument to construct interval
names as you can see here:
> b <- c(-Inf, 10000, 31000, Inf)
> students$Income.cat1 <- cut(students$Income, breaks = b)
> students
(Output omitted; the Income.cat1 variable holds interval labels such as (-Inf,1e+04],
(1e+04,3.1e+04], and (3.1e+04,Inf].)
There's more...
You might not always be in a position to identify the breaks manually and may instead
want to rely on R to do this automatically, as the following sketch shows.
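For instance, passing a single number as breaks makes cut() split the variable into that many equal-width bins; the Income.cat2 name and Level labels here are illustrative:
> students$Income.cat2 <- cut(students$Income, breaks = 4,
      labels = c("Level1", "Level2", "Level3", "Level4"))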
Getting ready
Read the data-conversion.csv file and store it in the working directory of your R
environment. Install the dummies package. Then read the data:
> install.packages("dummies")
> library(dummies)
> students <- read.csv("data-conversion.csv")
How to do it...
Create dummies for all factors in the data frame:
> students.new <- dummy.data.frame(students, sep = ".")
> names(students.new)
[1] "Age"
"State.NJ" "State.NY" "State.TX" "State.VA"
[6] "Gender.F" "Gender.M" "Height"
"Income"
The students.new data frame now contains all the original variables and the newly added
dummy variables. The dummy.data.frame() function has created dummy variables for all
four levels of the State and two levels of Gender factors. However, we will generally omit one of
the dummy variables for State and one for Gender when we use machine-learning techniques.
We can use the optional argument all = FALSE to specify that the resulting data frame
should contain only the generated dummy variables and none of the original variables.
How it works...
The dummy.data.frame() function creates dummies for all the factors in the data frame
supplied. Internally, it uses another dummy() function which creates dummy variables for
a single factor. The dummy() function creates one new variable for every level of the factor
for which we are creating dummies. It appends the variable name with the factor level name
to generate names for the dummy variables. We can use the sep argument to specify the
character that separates them; an empty string is the default:
> dummy(students$State, sep = ".")
(The call prints a 10 x 4 matrix of 0/1 indicators, one row per student, with the
columns State.NJ, State.NY, State.TX, and State.VA.)
There's more...
In situations where a data frame has several factors and you plan on using only a subset
of these, you will want to create dummies only for the chosen subset, as the following
sketch shows.
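The dummy.data.frame() function's names argument restricts dummy creation to the listed factors; the students.new2 name is illustrative:
> students.new2 <- dummy.data.frame(students, names = c("State"), sep = ".")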