Introduction To Statistics and Data Analysis
Michael Schomaker
Centre for Infectious Disease Epidemiology and Research, University of Cape
Town, Cape Town, South Africa
Shalabh
Department of Mathematics and Statistics, Indian Institute of Technology
Kanpur, Kanpur, India
This work is subject to copyright. All rights are reserved by the Publisher,
whether the whole or part of the material is concerned, specifically the rights of
translation, reprinting, reuse of illustrations, recitation, broadcasting,
reproduction on microfilms or in any other physical way, and transmission or
information storage and retrieval, electronic adaptation, computer software, or
by similar or dissimilar methodology now known or hereafter developed.
The publisher, the authors and the editors are safe to assume that the advice and
information in this book are believed to be true and accurate at the date of
publication. Neither the publisher nor the authors or the editors give a warranty,
express or implied, with respect to the material contained herein or for any errors
or omissions that may have been made.
to introduce the relevant R commands. In many cases, the code can be directly
pasted into R to reproduce the results and graphs presented in the book; in
others, the code is abbreviated to improve readability and clarity, and the
detailed code can be found online.
Many years of teaching experience, from undergraduate to postgraduate
level, went into this book. The authors hope that the reader will enjoy reading it
and find it a useful reference for learning. We welcome critical feedback to
improve future editions of this book. Comments can be sent to
christian.heumann@stat.uni-muenchen.de, shalab@iitk.ac.in, and
michael.schomaker@uct.ac.za; the three authors contributed equally to this book.
We thank Melanie Schomaker for producing some of the figures and giving
graphical advice, Alice Blanck from Springer for her continuous help and
support, and Lyn Imeson for her dedicated commitment which improved the
earlier versions of this book. We are grateful to our families who have supported
us during the preparation of this book.
Christian Heumann
Michael Schomaker
Shalabh
München, Germany, Cape Town, South Africa, Kanpur, India
November 2016
Contents
Part I Descriptive Statistics
1.2 Variables
1.2.3 Scales
1.6 Exercises
2.3.3 Histogram
2.6 Exercises
3.1.4 Mode
3.6 Exercises
5 Combinatorics
5.1 Introduction
5.2 Permutations
5.3 Combinations
5.5 Exercises
6.5 Independence
6.7 Exercises
7 Random Variables
7.3.1 Expectation
7.3.2 Variance
7.3.4 Standardization
7.9 Exercises
8 Probability Distributions
8.3.1 χ²-Distribution
8.3.2 t -Distribution
8.3.3 F-Distribution
8.5 Exercises
9 Inference
9.1 Introduction
9.4.1 Introduction
9.7 Exercises
10 Hypothesis Testing
10.1 Introduction
10.2.2 Hypotheses
10.3.1 Test for the Mean When the Variance is Known (One-Sample
Gauss Test)
10.3.2 Test for the Mean When the Variance is Unknown (One-Sample t-Test)
10.10 Exercises
11 Linear Regression
11.6.3 Transformations
11.7.3 Interactions
11.8 Comparing Different Models
11.12 Exercises
Appendix A: Introduction to R
References
Index
About the Authors
Christian Heumann
is a professor at the Ludwig-Maximilians-Universität München, Germany, where
he teaches students in Bachelor and Master programs offered by the Department
of Statistics, as well as undergraduate students in the Bachelor of Science
programs in business administration and economics. His research interests
include statistical modeling, computational statistics and all aspects of missing
data.
Michael Schomaker
is a Senior Researcher and Biostatistician at the Centre for Infectious Disease
Epidemiology & Research (CIDER), University of Cape Town, South Africa. He
received his doctoral degree from the University of Munich. He has taught
undergraduate students for many years and has written contributions for various
introductory textbooks. His research focuses on missing data, causal inference,
model averaging and HIV/AIDS.
Shalabh
is a Professor at the Indian Institute of Technology Kanpur, India. He received
his Ph.D. from the University of Lucknow (India) and completed his post-
doctoral work at the University of Pittsburgh (USA) and University of Munich
(Germany). He has over twenty years of experience in teaching and research. His
main research areas are linear models, regression analysis, econometrics,
measurement error models, missing data models and sampling theory.
Part I
Descriptive Statistics
© Springer International Publishing Switzerland 2016
Christian Heumann, Michael Schomaker and Shalabh, Introduction to Statistics and Data Analysis ,
DOI 10.1007/978-3-319-46162-5_1
Christian Heumann
Email: Christian.heumann@stat.uni-muenchen.de
Example 1.1.1
If we are interested in the social conditions under which Indian people live,
then we would define all inhabitants of India as Ω and each of its
inhabitants as ω. If we want to collect data from a few inhabitants, then
those would represent a sample from the total population.
Investigating the economic power of Africa’s platinum industry would
require treating each platinum-related company as ω, whereas all platinum-
related companies would be collected in Ω. A few companies
comprise a sample of all companies.
We may be interested in collecting information about those participating in
a statistics course. All participants in the course constitute the population
Ω, and each participant refers to a unit or observation ω.
Remark 1.1.1
Sometimes, the concept of a population is not applicable or difficult to imagine.
As an example, imagine that we measure the temperature in New Delhi every
hour. A sample would then be the time series of temperatures in a specific time
window, for example from January to March 2016. A population in the sense of
observational units does not exist here. But now assume that we measure
temperatures in several different cities; then, all the cities form the population,
and a sample is any subset of the cities.
1.2 Variables
If we have specified the population of interest for a specific research question,
we can think of what is of interest about our observations. A particular feature of
these observations can be collected in a statistical variable X. Any information
we are interested in may be captured in such a variable. For example, if our
observations refer to human beings, X may describe marital status, gender, age,
or anything else which may relate to a person. Of course, we can be interested in
many different features, each of them collected in a different variable
X1, X2, ..., Xp. Each observation ω takes a particular value x for X. If X refers to
gender, each observation, i.e. each person, has a particular value x which refers
to either “male” or “female”.
The formal definition of a variable is
X : Ω → S, ω ↦ x. (1.1)
This definition states that a variable X takes a value x for each observation
ω ∈ Ω, whereby the set of possible values is contained in the set S.
Example 1.2.1
Remark 1.2.1
It is common to assign numbers to qualitative variables for practical purposes in
data analyses (see Sect. 1.4 for more detail). For instance, if we consider the
variable “gender”, then each observation can take either the “value” male or
female. We may decide to assign 1 to female and 0 to male and use these
numbers instead of the original categories. However, this is arbitrary, and we
could have also chosen “1” for male and “0” for female, or “2” for male and
“10” for female. There is no logical and natural order on how to arrange male
and female, and thus, the variable gender remains a qualitative variable, even
after using numbers for coding the values that X can take.
1.2.3 Scales
The thoughts and considerations from above indicate that different variables
contain different amounts of information. A useful classification of these
considerations is given by the concept of the scale of a variable. This concept
will help us in the remainder of this book to identify which methods are the
appropriate ones to use in a particular setting.
Interval scale. Only differences between values, but not ratios, can be
interpreted. An example for this scale would be temperature (measured
in °C): the difference between −2 °C and 4 °C is 6 °C, but the ratio of
these two values does not mean that −2 °C is twice as cold as 4 °C.
Ratio scale. Both differences and ratios can be interpreted. An example is
speed: 60 km/h is 40 km/h more than 20 km/h. Moreover, 60 km/h is
three times faster than 20 km/h because the ratio between them is 3.
Absolute scale. The absolute scale is the same as the ratio scale, with the
exception that the values are measured in “natural” units. An example is
“number of semesters studied” where no artificial unit such as °C or km/h
is needed: the values used are simply the natural numbers 1, 2, 3, and so on.
Example 1.4.2
Consider the data set described in Appendix A.4. A pizza delivery service
captures information related to each delivery, for example the delivery time, the
temperature of the pizza, the name of the driver, the date of the delivery, the
name of the branch, and many more. To capture the data of all deliveries during
one month, we create a data matrix. Each row refers to a particular delivery,
therefore representing the observations of the data. Each column refers to a
variable. In Fig. 1.4, the variables X1 (delivery time in minutes), X2 (temperature
in °C), and X3 (name of branch) are listed.
The first row tells us about the features of the first pizza delivery: the
delivery time was 35.1 min, the temperature of the pizza on arrival was recorded in °C, and
the pizza was delivered from the branch in the East of the city. In total, there
were 1266 deliveries. For nominal variables, such as branch, we may decide
to produce a coding list, as illustrated in Table 1.1: instead of referring to the
branches as “East”, “West”, and “Centre”, we may simply call them 1, 2, and 3.
As we will see in Chap. 11, this has benefits for some analysis methods, though
this is not needed in general.
Table 1.1 Coding list for branch
Variable Values Code
Branch East 1
West 2
Centre 3
Missing 4
If some values are missing, for example because they were never captured or
even lost, then this requires special attention. In Table 1.1, we assign missing
values the number “4” and therefore treat them as a separate category. If we
work with statistical software (see below), we may need other coding such as NA
in the statistical software R or in Stata. More detail can be found in Appendix A.
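A minimal sketch of this idea in R (the category values below are made up for illustration): missing entries would typically be stored as NA, and a coding list such as the one in Table 1.1 can be applied with match.

branch_raw <- c("East", "West", "Centre", NA, "West")            # made-up raw values
branch_code <- ifelse(is.na(branch_raw), 4,                      # treat missing as code 4
                      match(branch_raw, c("East", "West", "Centre")))
branch_code          # 1 2 3 4 2
table(branch_code)   # frequencies of the coded categories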
Example 1.4.3
The temperature in degrees Fahrenheit (°F) relates to the temperature in degrees
Celsius (°C) as follows: temperature in °F = 1.8 · (temperature in °C) + 32.
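As a small illustration (the temperature values below are arbitrary), this conversion can be done in R in one line:

celsius <- c(21, 26.48, 31)        # example temperatures in °C
fahrenheit <- 1.8 * celsius + 32   # °F = 1.8 * °C + 32
fahrenheit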
(c) Comparison of two drugs which deal with high blood pressure.
Exercise 1.2
A national park conducts a study on the behaviour of their leopards. A few of the
park’s leopards are registered and receive a GPS device which allows measuring
the position of the leopard. Use this example to describe the following concepts:
population, sample, observation, value, and variable.
Exercise 1.3
Which of the following variables are qualitative, and which are quantitative?
Specify which of the quantitative variables are discrete and which are
continuous:
Time to travel to work, shoe size, preferred political party, price for a
canteen meal, eye colour, gender, wavelength of light, customer satisfaction
on a scale from 1 to 10, delivery time for a parcel, blood type, number of
goals in a hockey match, height of a child, subject line of an email.
Exercise 1.4
Identify the scale of the following variables:
Exercise 1.5
Make yourself familiar with the pizza data set from Appendix A.4.
(b) View the data both in the R data editor and in the R console.
(c) Create a new data matrix which consists of the first 5 rows and first 5
variables of the data. Print this data set on the R console. Now, save this
data set in your preferred format.
(d) Add a new variable “NewTemperature” to the data set which converts the
temperature from to .
(e) Attach the data and list the values from the variable “NewTemperature”.
(f) Use “?” to make yourself familiar with the following commands: str,
dim, colnames, names, nrow, ncol, head, and tail.
Apply these commands to the data to get more information about it.
Exercise 1.6
Consider the research questions of describing parents’ attitudes towards
immunization, what proportion of them wants immunization against chicken pox
for their last-born child, and whether this proportion differs by gender and age.
(a) Which data collection method is the most suitable one to answer the above
questions: survey or experiment?
(b) How would you capture the attitudes towards immunization in a single
variable?
(c) Which variables are needed to answer all the above questions? Describe the
scale of each of them.
(d) Reflect on what an appropriate data set would look like. Now, given this
data set, try to write down the above research questions as precisely as
possible.
Example 2.1.1
Suppose there are ten people in a supermarket queue. Each of them is either
coded as “F” (if the person is female) or “M” (if the person is male). The
collected data may look like
M, F, M, F, M, M, M, F, M, M.
There are now two categories in the data: male (M) and female (F). We use a1 to
refer to the male category and a2 to refer to the female category. Since there are
seven male and three female customers, we have 7 values in category a1, denoted
as n1 = 7, and 3 values in category a2, denoted as n2 = 3. The number of
observations in a particular category is called the absolute frequency. It follows
that n1 = 7 and n2 = 3 are the absolute frequencies of a1 and a2, respectively.
Note that n1 + n2 = n = 10, which is the same as the total number of collected
observations. We can also calculate the relative frequencies of a1 and a2 as
f1 = n1/n = 7/10 = 0.7 and f2 = n2/n = 3/10 = 0.3, respectively.
This gives us information about the proportions of male and female customers in
the queue.
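A short sketch of how these frequencies could be obtained in R, using the data of this example:

queue <- c("M", "F", "M", "F", "M", "M", "M", "F", "M", "M")
table(queue)                  # absolute frequencies: F = 3, M = 7
table(queue) / length(queue)  # relative frequencies: F = 0.3, M = 0.7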
We now extend these concepts to a general framework for the summary of data
on discrete variables. Suppose there are k categories denoted as a1, a2, ..., ak,
with nj (j = 1, 2, ..., k) observations in category aj. The absolute frequency nj is
defined as the number of units in the jth category aj. The sum of the absolute
frequencies equals the total number of units in the data: Σ_{j=1}^{k} nj = n. The relative
frequencies of the jth class are defined as
fj = f(aj) = nj / n, j = 1, 2, ..., k. (2.1)
The relative frequencies always lie between 0 and 1 and Σ_{j=1}^{k} fj = 1.
Grouped Continuous Data. Data on continuous variables usually has a
large number (k) of different values. Sometimes k may even be the same as n and
in such a case the relative frequencies become fj = 1/n for all j. However, it is
possible to define intervals in which the observed values are contained.
Example 2.1.2
Consider the following results of the written part of a driving licence
examination (a maximum of 100 points could be achieved):
We can summarize the results in class intervals such as 0–20, 21–40, 41–60, 61–
80, and 81–100, and the data can be presented as follows:
Relative frequencies
We have and .
Example 2.1.3
Consider the pizza delivery service data (Example 1.4.2, Appendix A.4). We are
interested in the pizza deliveries by branch and generate the respective frequency
table, showing the distribution of the data, using the table command in R
(after reading in and attaching the data) as
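A minimal sketch, assuming the pizza data of Appendix A.4 has been read in and attached so that the variable branch is available:

table(branch)   # absolute frequencies of deliveries per branch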
The empirical cumulative distribution function (ECDF) at a value x is the sum of
the relative frequencies of all values aj which are smaller than or equal to x:
F(x) = Σ_{j: aj ≤ x} f(aj). (2.2)
This definition implies that F(x) is a monotonically non-decreasing function,
lim_{x → −∞} F(x) = 0 (the lower limit of F is 0), lim_{x → +∞} F(x) = 1 (the
upper limit of F is 1), and F(x) is right continuous.
Example 2.2.1
Consider a customer satisfaction survey from a car service company. The 200
customers who had a car service done within the last 30 days were asked to
respond regarding their overall level of satisfaction with the quality of the car
service on a scale from 1 to 5 based on the following options:
Satisfaction level (aj): 1 2 3 4 5
Number of customers (nj): 4 16 90 70 20
The ECDF for this data can be obtained by summarizing the data in a vector
and using the plot.ecdf() function in R, see Fig. 2.1:
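A sketch of this step, reconstructing the 200 responses from the frequencies given above:

sv <- rep(1:5, times = c(4, 16, 90, 70, 20))  # satisfaction scores of the 200 customers
plot.ecdf(sv)                                 # ECDF as in Fig. 2.1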
The ECDF can be used to obtain the relative frequencies for values contained in
certain intervals, as the following example shows.
Example 2.2.2
Suppose, in Example 2.2.1, we want to know how many customers are not
satisfied with their car service. Then, using the data relating to the responses “1”
and “2”, we observe from the ECDF that F(2) = 0.1, i.e. 10 % of the customers
were not satisfied with the car service. This relates to using rule (2.3):
F(2) = 4/200 + 16/200 = 0.02 + 0.08 = 0.1, or 10 %. Similarly, the proportion of customers who are
more than satisfied can be obtained using (2.5) as
1 − F(3) = 1 − 0.55 = 0.45, or 45 %.
F(x) = F(e_{k−1}) + (x − e_{k−1}) / (e_k − e_{k−1}) · (F(e_k) − F(e_{k−1})), (2.11)
with x ∈ [e_{k−1}, e_k). The idea behind (2.11) is presented in Fig. 2.2. For any interval
[e_{k−1}, e_k), the respective lower and upper limits of the ECDF are F(e_{k−1}) and F(e_k).
If we assume the values to be distributed uniformly over this interval, we can
connect F(e_{k−1}) and F(e_k) with a straight line. To obtain F(x) with e_{k−1} ≤ x
and x < e_k, we simply add the height of the ECDF between e_{k−1} and x to
F(e_{k−1}).
Fig. 2.2 Illustration of the ECDF for continuous data available in groups/intervals
Table 2.2 The values needed to calculate the ECDF for the grouped pizza delivery time data in Example
2.2.3
Delivery time j
Example 2.2.3
Consider Example 2.1.3 of the pizza delivery service. Suppose we are interested
in determining the distribution of the pizza delivery times. Using the function
plot.ecdf() in R, we obtain the ECDF of the continuous data, see Fig. 2.3a.
Note that the structure of the curve is a step function but now almost looks like a
continuous curve. The reason for this is that when the number of observations is
large, then the lengths of class intervals become small. When these small lengths
are joined together, they appear like a continuous curve. As the number of
observations increases, the smoothness of the curve increases too. If the number
of observations is not large, e.g. suppose the data is reported as a summary from
the drivers, i.e. whether the delivery took <15 min, between 15 and 20 min,
between 20 and 25 min, and so on, then we can construct the ECDF by creating
a table summarizing the data features as in Table 2.2.
Figure 2.3b shows the ECDF based on the grouped data evaluated in
Table 2.2. It is interesting to see that the graphs emerging from the use of the
grouped data and ungrouped data are similar in this specific example.
Suppose we are interested in calculating how many deliveries were
completed within the desired time limit of 30 min, with a tolerance of maximum
10 % deviation, i.e. a deviation of 3 min. We can evaluate the ECDF at x = 33
min. Based on (2.11), we calculate F(33) ≈ 0.42. Thus,
we conclude, based on the grouped data, that only about 42 % of the deliveries
were completed in the desired time frame.
Example 2.3.1
Consider Example 2.1.1 in which ten people, queueing in a supermarket, were
classified as being either male (M) or female (F). The absolute frequencies for
males and females are n1 = 7 and n2 = 3, respectively. Since there are two
categories, M and F, two bars are needed to construct the chart: one for the
male category and another for the female category. The heights of the bars are
determined as either the absolute frequencies (n1 = 7, n2 = 3) or the relative
frequencies (f1 = 0.7, f2 = 0.3). These graphs are shown in Fig. 2.4.
Fig. 2.4 Bar charts
Example 2.3.2
Consider the data in Example 2.1.3, where the pizza delivery times for each
branch are recorded over a period of 1 month. The frequency table forms the
basis for the bar chart, either using the absolute or relative frequencies on the y-
axis. Figure 2.5 shows the bar charts for the number and proportion of pizza
deliveries per branch. The graphs can be produced in R by applying the
barplot command to a frequency table:
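For example, assuming the pizza data is attached so that branch is available, a sketch of the two bar charts could be:

barplot(table(branch))                    # absolute frequencies
barplot(table(branch) / length(branch))   # relative frequencies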
Remark 2.3.1
Instead of vertical bars, horizontal bars can be drawn using the optional
argument horiz=TRUE in the barplot command.
Fig. 2.5 Bar charts for the pizza deliveries per branch
Example 2.3.3
To illustrate the construction of a pie chart, let us consider again Example 2.1.1
in which ten people in a supermarket queue were classified as being either male
(M) or female (F): M, F, M, F, M, M, M, F, M, M. The pie chart for this data will
have two segments: one for males and another one for females. The relative
frequencies are f1 = 7/10 and f2 = 3/10, respectively. The size of the segment for
the first category (M) is f1 · 360° = 0.7 · 360° = 252°, and the size of the segment
for the second category (F) is f2 · 360° = 0.3 · 360° = 108°. The pie chart is
shown in Fig. 2.6a.
Example 2.3.4
Consider again Example 2.2.1, where 200 customers were asked about their
level of satisfaction (5 categories) with their car service. The pie chart for this
example consists of five segments representing the categories 1, 2, 3, 4, and 5.
The size of the jth segment is fj · 360°. For example, for category 1,
there are 4 out of 200 customers who are not satisfied at all. The angle of the
segment “not satisfied at all” therefore is f1 · 360° = (4/200) · 360° = 7.2°. Similarly,
we can calculate the angle of the other segments and obtain a pie chart as shown
in Fig. 2.6b using the pie command in R
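A sketch of this pie chart, using the frequencies of the five satisfaction categories:

satisfaction <- c(4, 16, 90, 70, 20)   # customers per category 1 to 5
pie(satisfaction, labels = 1:5)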
Remark 2.3.2
Note that the area of a segment is not proportional to the absolute frequency of
the respective category. Instead, the area of the segment is proportional to the
angle (and depends also on the radius of the whole circle). It has been
argued that this may cause improper interpretations as the human eye may catch
the segment’s area more easily than the angle of a segment. Pie charts should
therefore be used with caution.
2.3.3 Histogram
If a variable consists of a large number of different values, the number of
categories used to construct bar charts will consequently be large too. A bar chart
may thus not give a clear summary when applied to a continuous variable.
Instead, a histogram is the appropriate choice to represent the distribution of
values of continuous variables. It is based on the idea of categorizing the data into
different groups and plotting a bar for each category with height hj = fj/dj, where
dj = ej − e_{j−1} denotes the width of the jth class interval or category. An important
consideration for this concept is that the area of the bars (= height × width) is
proportional to the relative frequency. This means that the widths of the bars
need not necessarily be the same because different widths can be balanced
by different heights of the bars.
Example 2.3.5
Consider Example 2.1.2, where people were divided into five class
intervals 0–20, 21–40, 41–60, 61–80, and 81–100 based on their performance in
a written driving licence examination. The frequency table is given as
Relative freq
Height
The histogram for this grouped data set has five categories and therefore it
has five bars. Since the widths of class intervals are the same, the heights of the
bars are proportional to the relative frequency of the respective category. The
resulting histogram is displayed in Fig. 2.7.
Example 2.3.6
Recall Example 2.2.3 and the variable “pizza delivery time”. Table 2.3 shows the
summary of the grouped data and the values needed to calculate the histogram.
Figure 2.8a shows the histogram with equal widths of delivery time intervals.
We see a symmetric distribution of the pizza delivery times, but many delivery
times exceeding the target time of 30 min. If the histogram is required to have
different widths for different bars, i.e. different delivery time intervals for
different categories, then it can also be constructed as shown in Fig. 2.8b. This
representation is different from Fig. 2.8a. The following commands in R are used
to construct the histograms for absolute and relative frequencies, respectively:
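A minimal sketch, assuming the pizza data is attached so that the variable time is available (interval boundaries for unequal widths would need to be supplied via the breaks argument):

hist(time)                # histogram based on absolute frequencies
hist(time, freq = FALSE)  # histogram on the density scale (relative frequencies)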
Table 2.3 Values needed to calculate the histogram for the grouped pizza delivery time data
Delivery time j
Remark 2.3.3
The R command truehist() from the library MASS presents an alternative to
the hist() command. The default specifications are somewhat different, and
many users prefer it to the command hist.
A kernel density estimate of the data is given by
f̂n(x) = (1/(n·h)) Σ_{i=1}^{n} K((x − xi)/h), (2.12)
where n is the sample size, h is the bandwidth, and K is a kernel function, for
example the rectangular kernel K(x) = 1/2 for |x| ≤ 1 (and 0 otherwise) or the
Epanechnikov kernel K(x) = (3/4)(1 − x²) for |x| ≤ 1 (and 0 otherwise).
To better understand this concept, consider Fig. 2.9a. The tick marks on the x-
axis represent five observations: 3, 6, 7, 8, and 10. On each observation as
well as its surrounding values, we apply a kernel function, which is the
Epanechnikov kernel in the figure. This means that we have five functions (grey,
dashed lines), which refer to the five observations. These functions are largest at
the observation itself and become gradually smaller as the distance from the
observation increases. Summing up the functions, as described in Eq. (2.12),
yields the solid black line, which is the kernel density plot of the five
observations. It is a smooth curve, which represents the data distribution. The
degree of smoothness can be controlled by the bandwidth h, which is chosen as 2
in Fig. 2.9a.
Fig. 2.9 Construction of kernel density plots
The choice of the kernel may affect the overall look of the plot. Above, we
have given the functions for the rectangular and Epanechnikov kernels.
However, another common function for kernel density plots is the normal
distribution function, which is introduced in Sect. 8.2.2, see Fig. 2.9b for a
comparison of different kernels. The kernel which is based on the normal
distribution is called the “Gaussian kernel” and is the default in R, where a
kernel density plot can be produced combining the plot and density
commands:
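For example, assuming the delivery time variable time is available, a sketch could be:

plot(density(time, kernel = "gaussian"))   # Gaussian kernel (the default)
# other choices include kernel = "rectangular" or kernel = "epanechnikov"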
Please note that kernel functions are not defined arbitrarily and need to
satisfy certain conditions, such as those required for probability density
functions as explained in Chap. 7, Theorem 7.2.1.
Example 2.4.1
Let us consider the pizza data which we introduced earlier and in Appendix A.4.
We can summarize the delivery time by using a kernel density plot using the R
command plot(density(time)) and compare it with a histogram, see
Fig. 2.10a. We see that the delivery times are symmetric around 35 min. If we
shorten the bandwidth to a half of the default bandwidth (option adjust=0.5),
the kernel density plot becomes more wiggly, which is illustrated in Fig. 2.10b.
Fig. 2.10 Kernel density plot for delivery time
(a) Summarize the results of the 2014 elections in a bar chart. Do it manually
and by using R.
(b) How would you compare the results of the 2009 and 2014 elections? Offer
a simple solution that can be represented in a single plot. Construct this
plot in R.
Exercise 2.2
Consider a variable X describing the time until the first goal was scored in the
matches of the 2006 football World Cup competition. Only matches with at least
one goal are considered, and goals scored during the xth minute of extra time are
denoted as 90 + x:
6 24 90+1 8 4 25 3 83 89 34 25 24 18 6
23 10 28 4 63 6 60 5 40 2 22 26 23 26
44 49 34 2 33 9 16 55 23 13 23 4 8 26
70 4 6 60 23 90+5 28 49 6 57 33 56 7
(b) Write down the frequency table of X based on the following categories:
[0, 15), [15, 30), [30, 45), [45, 60), [60, 75), [75, 90), [90, 96).
(c) Draw the histogram for X with intervals relating to the groups from the
frequency table.
(d) Now use R to reproduce the histogram. Compare the histogram to a kernel
density plot of your choice.
(e) Calculate the empirical cumulative distribution function for the grouped
data.
(g) Consider the grouped data. Now assume that the values within each
interval are distributed uniformly. Determine the proportion of first goals
which occurred
(h) Determine the time point at which in 80 % of the matches the first goal was
scored at or before this time point.
Exercise 2.3
Suppose we have the following information to construct a histogram for a
continuous variable with 2000 observations:
j Lower limit e_{j−1} Upper limit e_j Width d_j Height f_j/d_j
1 0 1 1 0.125
2 1 4 3 0.125
3 4 7 3 0.125
4 7 8 1 0.125
Exercise 2.4
A university survey was conducted on 500 first-year students to obtain
knowledge about the size of their accommodation (in square metres).
j Accommodation size (in m²) F(e_j)
1 8–14 0.25
2 14–22 0.40
3 22–34 0.75
4 34–50 0.97
5 50–82 1.00
Exercise 2.5
Consider a survey in which 100 people were asked to rate on a scale from 1 to
10 how much they agree with the statement that “there is too much football on
television”. The results are summarized below:
Score 0 1 2 3 4 5 6 7 8 9 10
Responses 0 1 3 8 8 27 30 11 6 4 2
(c) Consider the situation, where the data is summarized in the two categories
“disagree” (score ) and “agree” (score ). What would the ECDF look
like under the approach outlined in (2.11)? Determine F(3) and F(9) for the
summarized data.
Exercise 2.6
It is possible to produce professional graphics in R. However, it is advantageous
to go beyond the default options. To demonstrate this, consider Example 2.1.3
about the pizza delivery data, which is described in Appendix A.4.
(c) Now create a normal bar chart for the variable “driver” in R. Type
?barplot and ?par to see the options one can pass on to barplot()
to adjust the graph. Make the graph look good.
(d) Now create the same bar chart with ggplot2. Use qplot instead of
ggplot to create the plot. Use an option which makes each bar consist
of segments relating to the day of delivery, so that one can see the number
of deliveries by driver and also during which days the drivers delivered
most often. Browse through “themes” and “scales” on the help page, and
add layers that make the background black and white and the bars on a
grey scale.
A data set may contain many variables and observations. However, we are not
always interested in each of the measured values but rather in a summary which
interprets the data. Statistical functions fulfil the purpose of summarizing the
data in a meaningful yet concise way.
Example 3.0.1
Suppose someone from Munich (Germany) plans a holiday in Bangkok
(Thailand) during the month of December and would like to get information
about the weather when preparing for the trip. Suppose last year’s maximum
temperatures during the day (in degrees Celsius) for December 1–31 are as
follows:
How do we draw conclusions from this data? Looking at the individual values
gives us a feeling about the temperatures one can experience in Bangkok, but it
does not provide us with a clear summary. It is evident that the average of these
31 values, calculated as “sum of all values/total number of observations”,
i.e. 821/31 ≈ 26.48 °C, is meaningful in the sense that we know what
temperature to expect “on average”. To choose the right clothing for the
holidays, we may also be interested in knowing the temperature range to
understand the variability in temperature, which is between 21 °C and 31 °C.
Summarizing 31 individual values with only three numbers (26.48, 21, and 31)
will provide sufficient information to plan the holidays.
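A small sketch of these summaries in R, using the 31 temperature values that are listed in ordered form later in this chapter (the ordering does not affect the results):

weather <- c(21, 22, 22, 23, 24, 24, 25, 25, 25, 25, 25, 25, 26, 26, 26, 26,
             27, 27, 27, 28, 28, 28, 29, 29, 29, 29, 29, 30, 30, 30, 31)
mean(weather)   # about 26.48
min(weather)    # 21
max(weather)    # 31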
The arithmetic mean of n observations x1, x2, ..., xn is defined as
x̄ = (1/n)(x1 + x2 + ⋯ + xn) = (1/n) Σ_{i=1}^{n} xi. (3.1)
In informal language, we often speak of “the average” or just “the mean” when
using the formula (3.1).
To calculate the arithmetic mean for grouped data, we need the following
frequency table:
The weighted arithmetic mean for such grouped data is
x̄ = (1/n) Σ_{j=1}^{k} nj·aj = Σ_{j=1}^{k} fj·aj, (3.2)
where aj is the value representing the jth group (for continuous data grouped in
classes, the middle of the jth class interval).
Example 3.1.1
Consider again Example 3.0.1 where we looked at the temperature in Bangkok
during December. The measurements were
Relative frequencies
Interestingly, the results of the mean and the weighted mean differ. This is
because we use the middle of each class as an approximation of the mean within
the class. The implication is that we assume that the values are uniformly
distributed within each interval. This assumption is obviously not met. If we had
knowledge about the mean in each class, like in this example, we would obtain
the correct result as follows:
(i) The sum of the deviations of each observation around the arithmetic mean is
zero:
Σ_{i=1}^{n} (xi − x̄) = 0. (3.3)
(ii) If the data are linearly transformed as yi = a + b·xi, where a and b are known
constants, it holds that ȳ = a + b·x̄. (3.4)
Example 3.1.2
Recall Examples 3.0.1 and 3.1.1 where we considered the temperatures in
December in Bangkok. We measured them in degrees Celsius, but someone from
the USA might prefer to know them in degrees Fahrenheit. With a linear
transformation, we can create a new temperature variable Y (in °F) as
yi = 1.8·xi + 32. Using ȳ = 1.8·x̄ + 32, we get ȳ = 1.8 · 26.48 + 32 ≈ 79.66 °F.
The median x̃0.5 is the value which divides the ordered observations into two equal halves:
x̃0.5 = x((n+1)/2) if n is odd, and x̃0.5 = ½ (x(n/2) + x(n/2+1)) if n is even, (3.5)
where x(1) ≤ x(2) ≤ ⋯ ≤ x(n) denote the ordered values.
Example 3.1.3
Consider again Examples 3.0.1–3.1.2 where we evaluated the temperature in
Bangkok in December. The ordered values x(i), i = 1, ..., 31, are as follows:
C 21 22 22 23 24 24 25 25 25 25 25 25 26 26 26 26
(i) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
C 27 27 27 28 28 28 29 29 29 29 29 30 30 30 31
(i) 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
We have n = 31, and therefore x̃0.5 = x((31+1)/2) = x(16) = 26. Therefore,
at least 50 % of the 31 observations are greater than or equal to 26 and at least
50 % are less than or equal to 26. If one value was missing, let us say the last
observation, then the median would be calculated as
½ (x(15) + x(16)) = ½ (26 + 26) = 26. In R, we would have obtained the results
using the median command:
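A sketch of this calculation, with the temperature data stored in a vector weather (reconstructed here from the ordered values above):

weather <- rep(21:31, times = c(1, 2, 1, 2, 6, 4, 3, 3, 5, 3, 1))  # the 31 ordered values
median(weather)        # 26
median(weather[-31])   # 26, median after removing one observation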
If we deal with grouped data, we can calculate the median under the assumption
that the values within each class are equally distributed. Let K1, K2, ..., Kk be k
classes with observations of size n1, n2, ..., nk, respectively. First, we need to
determine which class is the median class, i.e. the class that includes the median.
We define the median class as the class Km for which
F(e_{m−1}) < 0.5 and F(e_m) ≥ 0.5 (3.6)
hold. Then, we can determine the median as
x̃0.5 = e_{m−1} + (0.5 − F(e_{m−1})) / f_m · d_m, (3.7)
where e_{m−1} denotes the lower limit of the interval Km and d_m is the width of the
interval Km.
Example 3.1.4
Recall Example 3.1.1 where we looked at the grouped temperature data:
0 1 1
Comparing the Mean with the Median. In the above examples, the mean and
the median turn out to be quite similar to each other. This is because we looked
at data which is symmetrically distributed around its centre, i.e. on average, we
can expect 26 °C with deviations that are similar above and below the average
temperature. A similar example is given in Fig. 3.1a: we see that the raw data is
summarized by using ticks at the bottom of the graph and by using a kernel
density estimator. The mean and the median are similar here because the
distribution of the observations is symmetric around the centre. If we have
skewed data (Fig. 3.1b), then the mean and the median may differ. If the data has
more than one centre, such as in Fig. 3.1c, neither the median nor the mean has
meaningful interpretations. If we have outliers (Fig. 3.1d), then it is wise to use
the median because the mean is sensitive to outliers. These examples show that
depending on the situation of interest either the mean, the median, both or
neither of them can be useful.
Fig. 3.1 Arithmetic mean and median for different data
More generally, the p-quantile x̃p (0 < p < 1) is determined from the ordered values as
x̃p = x(⌈np⌉) if np is not an integer, and x̃p = ½ (x(np) + x(np+1)) if np is an integer, (3.8)
where ⌈np⌉ denotes the smallest integer greater than or equal to np.
Example 3.1.5
Recall Examples 3.0.1–3.1.4 where we evaluated the temperature in Bangkok in
December. The ordered values x(i), i = 1, ..., 31, are as follows:
C 21 22 22 23 24 24 25 25 25 25 25 25 26 26 26 26
(i) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
C 27 27 27 28 28 28 29 29 29 29 29 30 30 30 31
(i) 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
To determine the quartiles, i.e. the 25, 50, and 75 % quantiles, we calculate
np as 31 · 0.25 = 7.75, 31 · 0.5 = 15.5, and 31 · 0.75 = 23.25. Using (3.8), it follows
that x̃0.25 = x(8) = 25, x̃0.5 = x(16) = 26, and x̃0.75 = x(24) = 29.
In R, we obtain the same results using the quantile function. The probs
argument is used to specify the probability p. By default, the quartiles are reported.
However, please note that R offers nine different ways to obtain quantiles,
each of which can be chosen by the type argument. See Hyndman and Fan
(1996) for more details.
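For illustration, with the temperature data in a vector weather as before:

weather <- rep(21:31, times = c(1, 2, 1, 2, 6, 4, 3, 3, 5, 3, 1))
quantile(weather, probs = c(0.25, 0.5, 0.75))   # 25, 26, 29 for these data
quantile(weather, probs = 0.25, type = 2)       # 'type' selects one of the nine rules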
Example 3.1.6
Consider again the pizza data which is described in Appendix A.4. We may be
interested in the delivery time for different drivers to see if their performance is
the same. Figure 3.2a shows a QQ-plot for the delivery time of driver Luigi and
the delivery time of driver Domenico. Each point refers to a particular quantile of
both drivers. If the point lies on the bisection line, then they are identical and we
conclude that the quantiles of the both drivers are the same. If the point is below
the line, then the quantile is higher for Luigi, and if the point is above the line,
then the quantile is lower for Luigi. So if all the points lie exactly on the line, we
can conclude that the distributions of both the drivers are the same. We see that
all the reported quantiles lie below the line, which implies that all the quantiles
of Luigi have higher values than those of Domenico. This means that not only on
an average, but also in general, the delivery times are higher for Luigi. If we
look at two other drivers, as displayed in Fig. 3.2b, the points lie very much on
the bisection line. We can therefore conclude that the delivery times of these two
drivers do not differ much.
Fig. 3.2 QQ-plots for the pizza delivery time for different drivers
In R, we can generate QQ-plots by using the qqplot command:
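A sketch, assuming the pizza data is attached and contains the variables time and driver with the driver names used above:

qqplot(time[driver == "Luigi"], time[driver == "Domenico"],
       xlab = "Luigi", ylab = "Domenico")
abline(a = 0, b = 1)   # bisection line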
(a) If all the pairs of quantiles lie (nearly) on a straight line at an angle of 45°
from the x-axis, then the two samples have similar distributions (Fig. 3.3a).
(b) If the y-quantiles are lower than the x-quantiles, then the y-values have a
tendency to be lower than the x-values (Fig. 3.3b).
(c) If the x-quantiles are lower than the y-quantiles, then the x-values have a
tendency to be lower than the y-values (Fig. 3.3c).
(d) If the QQ-plot is like Fig. 3.3d, it indicates that there is a break point up to
which the y-quantiles are lower than the x-quantiles and after that point, the
y-quantiles are higher than the x-quantiles.
3.1.4 Mode
Consider a situation in which an ice cream shop owner wants to know which
flavour of ice cream is the most popular among his customers. Similarly, a
footwear shop owner may like to find out what design and size of shoes are in
highest demand. To answer this type of question, one can use the mode which is
another measure of central tendency.
The mode of n observations is the value which occurs the
most compared with all other values, i.e. the value which has maximum absolute
frequency. It may happen that two or more values occur with the same frequency
in which case the mode is not uniquely defined. A formal definition of the mode
is
x̄M = aj ⟺ nj = max{n1, n2, ..., nk}. (3.9)
The mode is typically applied to any type of variable for which the number
of different values is not too large. If continuous data is summarized in groups,
then the mode can be used as well.
Example 3.1.7
Recall the pizza data set described in Appendix A.4. The pizza delivery service
has three branches, in the East, West, and Centre, respectively. Suppose we want
to know which branch delivers the most pizzas. We find that most of the
deliveries have been made in the West, see Fig. 3.4a; therefore, the mode is
x̄M = West. Similarly, suppose we also want to find the mode for the categorized
pizza delivery time: if we group the delivery time in intervals of 5 min, then the
mode is the 5-min interval with the highest frequency, see Fig. 3.4b.
The geometric mean plays an important role in fields where we are interested in
products of observations, such as when we look at percentage changes in
quantities. We illustrate its interpretation and use by looking at the average
growth of a quantity in the sense that we allow a starting value, such as a certain
amount of money or a particular population, to change over time. Suppose we
have a starting value at some baseline time point 0 (zero), which may be denoted
as x0. At time t, this value may have changed and we therefore denote it as xt,
t = 1, 2, .... The ratio of xt and x_{t−1}, Bt = xt / x_{t−1}, is called the tth growth factor
and gives us an idea about the growth or decline of our value at time t. We can
summarize these concepts in the following table:
0 – –
Example 3.1.8
Suppose someone wants to deposit money, say €1000, in a bank. The bank
advisor proposes a 5-year savings plan with the following plan for interest rates:
1 % in the first year, 1.5 % in the second year, 2.5 % in the third year, and 3 % in
the last 2 years. Now he would like to calculate the average growth factor and
average growth rate for the invested money. The concept of the geometric mean
can be used as follows:
which means that he will have on average about 2.2 % growth per year. The
savings after 5 years can be calculated as
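A sketch of this calculation in R, using the growth factors implied by the interest rates above:

growth <- c(1.01, 1.015, 1.025, 1.03, 1.03)   # yearly growth factors
gm <- prod(growth)^(1 / length(growth))       # geometric mean of the growth factors
gm                                            # about 1.022, i.e. roughly 2.2 % per year
1000 * prod(growth)                           # savings after 5 years (= 1000 * gm^5)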
(3.13)
Example 3.1.9
Suppose an investor bought shares worth €1000 in each of two consecutive months.
The price for a share was €50 in the first month and €200 in the second month.
What is the average purchase price? The number of shares purchased in the first
month is 1000/50 = 20. The number of shares purchased in the second month is
1000/200 = 5. The total number of shares purchased is thus 20 + 5 = 25, and the
total investment is €2000. It is evident that the average purchase price is
2000/25 = €80. This is in fact the harmonic mean calculated as
x̄H = 2000 / (1000/50 + 1000/200) = 80.
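A sketch of this weighted harmonic mean in R:

invested <- c(1000, 1000)               # amount invested in each month
price    <- c(50, 200)                  # price per share in each month
sum(invested) / sum(invested / price)   # average purchase price: 80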
Week 1 2 3 4 5 6 7 8 9 10
Christine 0 0 0 0 0 0 0 0 0 0
Andreas
Sandro 3 5 6 2 4 6 8 4 5 7
Example 3.2.2
Consider another example in which a supplier for the car industry needs to
deliver 10 car doors with an exact width of 1.00 m. He supplies 5 doors with a
width of 1.05 m and the remaining 5 doors with a width of 0.95 m. The
arithmetic mean of all the 10 doors is 1.00 m. Based on the arithmetic mean, one
may conclude that all the doors are good but the fact is that none of the doors are
usable as they will not fit into the car. This knowledge can be summarized by a
measure of dispersion.
Remark 3.2.1
Note that the interquartile range is defined as the interval [x̃0.25; x̃0.75] in some
literature. However, in line with most of the statistical literature, we define the
interquartile range to be a measure of dispersion, i.e. the difference
dQ = x̃0.75 − x̃0.25.
Example 3.2.3
Recall Examples 3.0.1–3.1.5 where we looked at the temperature in Bangkok
during December. The ordered values x(i), i = 1, ..., 31, are as follows:
C 21 22 22 23 24 24 25 25 25 25 25 25 26 26 26 26
(i) 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
C 27 27 27 28 28 28 29 29 29 29 29 30 30 30 31
(i) 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
A simple idea is to look at the average deviation of the observations from some fixed value A:
D(A) = (1/n) Σ_{i=1}^{n} (xi − A). (3.16)
This measure has the drawback that the deviations (xi − A), i = 1, 2, ..., n, can
be either positive or negative and, consequently, their sum can potentially be
very small or even zero. Using D as a measure of variability is therefore not a
good idea since D may be small even for a large variability in the data.
Using absolute values of the deviations solves this problem, and we
introduce the following measure of dispersion:
D(A) = (1/n) Σ_{i=1}^{n} |xi − A|. (3.17)
It can be shown that the absolute deviation attains its minimum when A
corresponds to the median of the data:
D(x̃0.5) = (1/n) Σ_{i=1}^{n} |xi − x̃0.5| ≤ D(A) for any choice of A. (3.18)
We call D(x̃0.5) the absolute median deviation. When A = x̄, we speak of
the absolute mean deviation given by
D(x̄) = (1/n) Σ_{i=1}^{n} |xi − x̄|. (3.19)
Another solution to avoid the positive and negative signs of deviation in
(3.16) is to consider the squares of the deviations (xi − A)², rather than using the
absolute value. This provides another measure of dispersion,
(1/n) Σ_{i=1}^{n} (xi − A)², (3.20)
which is known as the mean squared error (MSE) with respect to A. The
MSE is another important measure in statistics, see Chap. 9, Eq. (9.4), for
details. It can be shown that the MSE attains its minimum value when A = x̄. This is
the (sample) variance
s̃² = (1/n) Σ_{i=1}^{n} (xi − x̄)². (3.21)
After expanding (xi − x̄)², we can write (3.21) as
s̃² = (1/n) Σ_{i=1}^{n} xi² − x̄². (3.22)
The positive square root of the variance is called the (sample) standard
deviation, defined as
s̃ = √((1/n) Σ_{i=1}^{n} (xi − x̄)²). (3.23)
The standard deviation has the same unit of measurement as the data whereas
the unit of the variance is the square of the units of the observations. For
example, if X is weight, measured in kg, then x̄ and s̃ are also measured in kg,
while s̃² is measured in kg² (which may be more difficult to interpret). The
variance is a measure which we use in other chapters to obtain measures of
association between variables and to draw conclusions from a sample about a
population of interest; however, the standard deviation is typically preferred for a
descriptive summary of the dispersion of data.
The standard deviation measures how much the observations vary or how
they are dispersed around the arithmetic mean. A low value of the standard
deviation indicates that the values are highly concentrated around the mean. A
high value of the standard deviation indicates lower concentration of the
observations around the mean, and some of the observed values may even be far
away from the mean. If there are extreme values or outliers in the data, then the
arithmetic mean is more sensitive to outliers than the median. In such a case, the
absolute median deviation (3.18) may be preferred over the standard deviation.
Example 3.2.4
Consider again Example 3.2.1 where we evaluated the arrival times of Christine,
Andreas, and Sandro in their lecture. Using the arithmetic mean, we concluded
that both Andreas and Christine arrive on time, whereas Sandro is always late;
however, we saw that the variation of arrival times differs substantially among
the three students. To describe and quantify this variability formally, we
calculate the variance and absolute median deviation:
Variance for Grouped Data. The variance for grouped data can be calculated
using
s̃²_b = (1/n) Σ_{j=1}^{k} nj (aj − x̄)², (3.24)
where aj is the middle value of the jth interval. However, when the data is
artificially grouped and the knowledge about the original ungrouped data is
available, we can also use the arithmetic mean of the class:
s̃²_b = (1/n) Σ_{j=1}^{k} nj (x̄j − x̄)². (3.25)
The two expressions (3.24) and (3.25) represent the variance between the
different classes, i.e. they describe the variability of the class-specific means x̄j,
weighted by the size of each class nj, around the overall mean x̄. It is evident
that the variance within each class is not taken into account in these formulae.
The variability of measurements in each class, i.e. the variability of the xi within class Kj, is
another important component of the overall variance in the data. It is
therefore not surprising that using only the between-class variance s̃²_b will
underestimate the total variance, and therefore
s̃²_b ≤ s̃². (3.26)
If the data within each class is known, we can use the Theorem of Variance
Decomposition (see p. 136 for the theoretical background) to determine the
variance. This allows us to represent the total variance as the sum of the
variance between the different classes and the variance within the different
classes as
s̃² = (1/n) Σ_{j=1}^{k} nj (x̄j − x̄)² + (1/n) Σ_{j=1}^{k} nj s̃²_j, (3.27)
where x̄j and s̃²_j are the mean and variance in class Kj:
x̄j = (1/nj) Σ_{xi ∈ Kj} xi, s̃²_j = (1/nj) Σ_{xi ∈ Kj} (xi − x̄j)². (3.28)
The proof of (3.27) is given in Appendix C.1, p. 423.
Example 3.2.5
Recall the weather data used in Examples 3.0.1–3.2.3 and the grouped data
specified as follows:
– 23.83 28 31 –
– 1.972 2 0 –
We know that and . The first step is to calculate the mean and
variances in each class using (3.28). We then obtain and as listed above.
The within and between variances are as follows:
If the data are linearly transformed as yi = a + b·xi, then ȳ = a + b·x̄ and the
variance transforms as
s̃²_y = b²·s̃²_x. (3.29)
Example 3.2.6
Let x1, x2, ..., xn denote measurements on time. These data could have been
recorded and analysed in hours, but we may be interested in a summary in
minutes. We can make a linear transformation yi = 60·xi. Then, ȳ = 60·x̄ and
s̃²_y = 60²·s̃²_x. If the mean and variance of the xi's have already been obtained, then
the mean and variance of the yi's can be obtained directly using these
transformations.
A variable is standardized by subtracting its mean and dividing by its standard deviation:
yi = (xi − x̄) / s̃. (3.30)
It follows that ȳ = 0 and s̃²_y = 1. There are many
statistical methods which require standardization, see, for example, Sect. 10.3.1
for details in the context of statistical tests.
Example 3.2.7
Let X be a variable which measures air pollution by using the concentration of
atmospheric particulate matter (in μg/m³). Suppose we have the following 10
measurements:
Please note that the scale command uses the standard deviation with
denominator n − 1 for calculating the standardization, as already outlined above.
Thus, the results provided by scale are not identical to those using (3.30).
A measure of dispersion that is relative to the mean is the coefficient of variation,
v = s̃ / x̄, x̄ > 0. (3.31)
The coefficient of variation is a unit-free measure of dispersion. It is often used
when the measurements of two variables are different but can be put into relation
by using a linear transformation yi = b·xi. It is possible to show that if all values
of a variable X are transformed into a variable Y with values yi = b·xi, b > 0,
then v does not change.
Example 3.2.8
If we want to compare the variability of hotel prices in two selected cities in
Germany and England, we could calculate the mean prices, together with their
standard deviation. Suppose a sample of prices of say 100 hotels in two selected
cities in Germany and England is available and suppose we obtain the mean and
standard deviations of the two cities as €130, , €99, and
. Then, and . This indicates higher
variability in hotel prices in England. However, if the data distribution is skewed
or bimodal, then it may be wise not to choose the arithmetic mean as a measure
of central tendency and likewise the coefficient of variation.
The boxplot command in R draws a box plot. The range option controls
whether extreme values should be plotted, and if yes, how one wants to define
such values.
Example 3.3.1
Recall Examples 3.0.1–3.2.5 where we looked at the temperature in Bangkok
during December. We have already calculated the median (26 C) and the
quartiles (25, 29 C). The minimum and maximum values are 21 C and 31 C.
The box plot for this data is shown in Fig. 3.5a. One can see that the temperature
distribution is slightly skewed with more variability for lower temperatures. The
interquartile range is 4, and therefore, any value or
would be an extreme value. However, there are no extreme
values in the data.
Example 3.3.2
Consider again the pizza data described in Appendix A.4. We use R to plot the
box plot for the delivery time via boxplot(time) (Fig. 3.5b). We see a
symmetric distribution with a median delivery time of about 35 min. Most of the
deliveries took between 30 and 40 min. The extreme values indicate that there
were some exceptionally short and long delivery times.
Example 3.4.1
Consider a village with 5 farms. Each farmer has a farm of a certain size. How
can we evaluate the land distribution? Do all farmers have a similar amount of
land or do one or two farmers have a big advantage because they have
considerably more space?
Table 3.1 Concentration of farmland: two different situations
Farmer (i)
(Area, in hectare)
1 20
2 20
3 20
4 20
5 20
1 0
2 0
3 0
4 0
5 100
Table 3.1 shows two different situations: in the table on the left, we see an
equal distribution of land, i.e. each farmer owns 20 hectares of farmland. This
means X is not concentrated, rather it is equally distributed. A statistical function
describing the concentration could return a value of zero in such a case. Consider
another extreme where one farmer owns all the farmland and the others do not
own anything, as shown on the right side of Table 3.1. This is an extreme
concentration of land: one person owns everything and thus, we say the
concentration is high. A statistical function describing the concentration could
return a value of one in such a case.
For the Lorenz curve, we first order the observations as x(1) ≤ x(2) ≤ ⋯ ≤ x(n) and
then plot the points (ui, vi) with
ui = i / n, i = 0, 1, 2, ..., n, (3.32)
and
vi = (Σ_{j=1}^{i} x(j)) / (Σ_{j=1}^{n} x(j)), i = 1, 2, ..., n, with v0 = 0, (3.33)
where Σ_{j=1}^{i} x(j) is the cumulative total of the ordered observations up to the ith observation.
The idea is that the vi describe the contribution of all values x(j), j ≤ i, in comparison with
the sum of all values. Plotting vi against ui for all i shows how much the sum of
all x(j), for all observations j ≤ i, contributes to the total sum. In other words, the
point (ui, vi) says that ui · 100 % of observations contain vi · 100 % of the sum of
all x(j) less than or equal to x(i). Obviously, if all xi are identical, the Lorenz curve
will be a straight diagonal line, also known as the identity line or line of equality.
If the xi are of different sizes, then the Lorenz curve falls below the line of
equality. This is illustrated in the following example.
Example 3.4.2
Recall Example 3.4.1 where we looked at the distribution of farmland among 5
farmers. On the upper panel of Table 3.1, we observed an equal distribution of
land among the farmers: x1 = x2 = x3 = x4 = x5 = 20. We obtain
ui = (0, 0.2, 0.4, 0.6, 0.8, 1) and vi = (0, 0.2, 0.4, 0.6, 0.8, 1). This yields a
Lorenz curve as displayed on the left side of Fig. 3.6: there is no concentration.
We can interpret each point. For example, (u2, v2) = (0.4, 0.4) means that 40 % of
farmers own 40 % of the land.
The lower panel of Table 3.1 describes the situation with strong
concentration. For this table, we obtain ui = (0, 0.2, 0.4, 0.6, 0.8, 1) and
vi = (0, 0, 0, 0, 0, 1). Therefore, for example, 80 % of farmers own 0 % of the
land which shows strong inequality. Most often we do not have such extreme
situations. In this case, the Lorenz curve is bent towards the lower right corner of
the plot, see the right side of Fig. 3.6.
We can plot the Lorenz curve in R using the Lc command in the library
ineq . The Lorenz curve for the left table of Example 3.4.1 is plotted in R as
follows:
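A minimal sketch, assuming the add-on package ineq is installed:

library(ineq)
farmland_equal <- c(20, 20, 20, 20, 20)   # equal distribution from the upper table
plot(Lc(farmland_equal))                  # Lorenz curve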
Fig. 3.6 Lorenz curves for no concentration (left) and some concentration (right)
We can use the same approach as above to obtain the Lorenz curve when we
have grouped data. We simply describe the contributions for each class rather
than for each observation and approximate the values in each class by using its
mid-point. More formally we can write:
(3.34)
and
(3.35)
where
(3.37)
but the proof is omitted. The same formula can be used for grouped data
except that is used instead of v. Since
G ≤ (n − 1)/n, (3.38)
one may prefer to use the standardized Gini coefficient
G+ = (n / (n − 1)) · G, (3.39)
which takes a maximum value of 1.
Example 3.4.3
We return to our farmland example. Suppose we have 7 farmers with farms of
different sizes:
Farmer 1 2 3 4 5 6 7
Farmland size 20 14 59 9 36 23 3
Using the ordered values, we can calculate ui and vi using (3.32) and (3.33):
i x(i) ui vi
1 3 0.143 0.018
2 9 0.286 0.073
3 14 0.429 0.159
4 20 0.571 0.280
5 23 0.714 0.421
6 36 0.857 0.640
7 59 1.000 1.000
Fig. 3.7 Lorenz curve for Example 3.4.3
The Lorenz curve is displayed in Fig. 3.7. Using this information, it is easy
to calculate the Gini coefficient:
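A sketch of this computation, assuming the package ineq is installed; its Gini function should return the Gini coefficient, and corr = TRUE should give the standardized version (3.39):

library(ineq)
farm <- c(20, 14, 59, 9, 36, 23, 3)
Gini(farm, corr = FALSE)   # Gini coefficient G
Gini(farm, corr = TRUE)    # standardized Gini coefficient G+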
Distance 12.5 29.9 14.8 18.7 7.6 16.2 16.5 27.4 12.1 17.5
Altitude 342 1245 502 555 398 670 796 912 238 466
(a) Calculate the arithmetic mean and median for both distance and altitude.
(b) Determine the first and third quartiles for both the distance and the altitude
variables. Discuss the shape of the distribution given the results of (a) and
(b).
(c) Calculate the interquartile range, absolute median deviation, and standard
deviation for both variables. What is your conclusion about the variability
of the data?
(d) One metre corresponds to approximately 3.28 ft. What is the average
altitude when measured in feet rather than in metres?
(e) Draw and interpret the box plot for both distance and altitude.
(f) Assume distance is measured as only short (5–15 km), moderate (15–
20 km), and long (20–30 km). Summarize the grouped data in a frequency
table. Calculate the weighted arithmetic mean under the assumption that
the raw data is not known. Determine the weighted median under the
assumption that the values within each class are equally distributed.
(g) What is the variance for the grouped data when the raw data is known, i.e.
when one has knowledge about the variance in each class? How does it
compare with the variance one obtains when the raw data is unknown?
(h) Use R to reproduce the results of (a), (b), (c), (e), and (f).
Exercise 3.2
A gambler notes down his wins and losses (in €) from playing 10 games of
roulette in a casino.
Round 1 2 3 4 5 6 7 8 9 10
Won/Lost 200 600 0
(a) Assume €90 and €294.7881. What is the result of round 10?
(c) A different gambler plays 33 rounds of roulette. His results are €12 and
€1000. Is it meaningful to compare the variability of results of the two
players by using the coefficient of variation? If yes, determine the
coefficients of variation; if no, why is a comparison not possible?
Exercise 3.3
A fashion boutique has summarized its daily sales of designer socks in different
groups: men’s socks, women’s socks, and children’s socks. Unfortunately, the
data for men’s socks was lost. Determine the missing values.
Men’s wear ? ? ?
Children’s wear 20 7.5
Total 100 15
Exercise 3.4
The number of members of a millionaires’ club were as follows:
(b) Based on the results of (a), how many members would one expect in 2018?
(c) The president of the club is interested in the number of members in 2025,
the year when his presidency ends. Would it make sense to predict the
number of members for 2025?
In 2015, the members invested €250 million on the stock market. 10
members contributed 16 % of the investment sum, 8 members contributed €60
million, 8 members contributed €70 million, and another 4 members contributed
the remaining amount.
Exercise 3.5
Consider the monthly salaries Y (in Swiss francs) of a well-reputed software
company, as well as the length of service (in months, X), and gender (Z).
Figure 3.8 shows the QQ-plots for both Y and X given Z. Interpret both graphs.
Exercise 3.6
There is no built-in function in R to calculate the mode of a variable. Program
such a function yourself. Hint: type ?table and ?names to recall the
functionality of these functions. Combine them in an intelligent way.
Exercise 3.7
Consider a country in which 90 % of the wealth is owned by 20 % of the
population, the so-called upper class. For simplicity, let us assume that the
wealth is distributed equally within this class.
(b) Now assume a revolution takes place in the country and all members of the
upper class have to give away their wealth which is then distributed equally
across the remaining population. Draw the Lorenz curve for this scenario.
(c) What would the curve from (b) look like if the entire upper class left the
country?
Exercise 3.8
A bus route in the mountainous regions of Romania has a length of 418 km. The
manager of the bus company serving the route wants his buses to finish a trip
within 8 h. The bus travels the first 180 km with an average speed of 48 km/h,
the next 117 km with an average speed of 37 km/h, and the last section with an
average speed of 52 km/h.
(a) What is the average speed with which the bus travels?
Exercise 3.9
Four friends have a start-up company which sells vegan ice cream. Their initial
financial contributions are as follows:
Person 1 2 3 4
Contribution (in €) 800 10300 4700 2220
(c) Does change if each of the friends contributes only half the amount of
money? If yes, how much? If no, why not?
(d) Use R to draw the above Lorenz curve and to calculate the Gini coefficient.
Exercise 3.10
Recall the pizza delivery data which is described in Appendix A.4. Use R to read
in and analyse the data.
(a) Calculate the mean, median, minimum, maximum, first quartile, and third
quartile for all quantitative variables.
(b) Determine and interpret the 99 % quantile for delivery time and
temperature.
(c) Write a function which calculates the absolute mean deviation. Use the
function to calculate the absolute mean deviation of temperature.
(d) Scale the delivery time and calculate the mean and variance for this
variable.
(e) Draw a box plot for delivery time and temperature. The box plots should
not highlight extreme values.
(f) Use the cut command to create a new variable which summarizes delivery
time in steps of 10 min. Calculate the arithmetic mean of this variable.
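A possible sketch for some of these tasks, assuming the data has been read into a data frame called pizza with variables time and temperature (file name and variable names are illustrative and may differ from the actual data set):
pizza <- read.csv("pizza_delivery.csv")            # assumed file name
summary(pizza$time)                                # mean, median, quartiles, min, max
quantile(pizza$time, probs = 0.99)                 # 99 % quantile of delivery time
amd <- function(x) { mean(abs(x - mean(x))) }      # absolute mean deviation
amd(pizza$temperature)
sc <- scale(pizza$time)                            # scaled delivery time
c(mean(sc), var(sc))                               # approximately 0 and 1
boxplot(pizza$time, pizza$temperature, range = 0)  # box plots without highlighting extreme values
tc <- cut(pizza$time, breaks = seq(0, 60, 10))     # delivery time in 10-min steps (adjust breaks to the data)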
Example 4.1.1
An airline conducts a customer satisfaction survey. The survey includes
questions about travel class and satisfaction levels with respect to different
categories such as seat comfort, in-flight service, meals, safety, and other
indicators. Consider the information on X, denoting the travel class (Economy =
“E”, Business = “B”, First = “F”), and Y, denoting the overall satisfaction with
the flight on a scale from 1 to 4 as 1 (poor), 2 (fair), 3 (good), and 4 (very good).
A possible response from 12 customers may look as follows:
Passenger number
i 1 2 3 4 5 6 7 8 9 10 11 12
Travel class E E E B E B F E E B E B
Satisfaction 2 4 1 3 1 2 4 3 2 4 3 3
Remark 4.1.1
Note that it is also possible to use the relative frequencies instead of
the absolute frequencies in Table 4.2, see Example 4.1.2.
Definition 4.1.1
Using the notations of Table 4.2, we define the following:
Note that for a bivariate joint frequency distribution, there will only be two
marginal (or relative) frequency distributions but possibly more than two
conditional (or relative) frequency distributions.
Table 4.3 Contingency table for travel class and satisfaction
Example 4.1.2
Recall the setup of Example 4.1.1. We now collect and evaluate the responses of
100 customers (instead of 12 passengers as in Example 4.1.1) regarding their
choice of the travel class and their overall satisfaction with the flight quality.
The data is provided in Table 4.3 where each of the cell entries illustrates
how many of the 100 passengers gave a particular combination of answers: for example, the first entry
“10” indicates that 10 passengers were flying in economy class and described
the overall service quality as poor.
The marginal frequency distributions are displayed in the last column and
last row, respectively. For example, the marginal distribution of X refers to
the frequency table of “travel class” (X) and tells us that 62 passengers were
flying in economy class, 25 in business class, and 13 in first class.
Similarly, the marginal distribution of “overall rating of flight quality” (Y)
tells us that 10 passengers rated the quality as poor, 36 as fair, 40 as good,
and 14 as very good.
The conditional frequency distributions give us an idea about the behaviour
of one variable when the other one is kept fixed. For example, the
conditional distribution of the “overall rating of flight quality” (Y) among
passengers who were flying in economy class gives the relative frequencies
10/62, 33/62, 15/62, and 4/62, which means that approximately 16 % of the customers
in economy class are rating the quality as poor, 53 % of the
customers in economy class are rating the quality as fair, 24 %
of the customers in economy class are rating the quality as good, and 6 %
of the customers in economy class are rating the quality as
very good. Similarly, 20/25 = 80 % of the
customers in business class are rating the quality as good, and so on.
The conditional frequency distribution of the “travel class” (X) of
passengers given the “overall rating of flight quality” (Y) is obtained by
conditioning on the respective column. For example, for the rating “good” we obtain
15/40, 20/40, and 5/40, which means that 37.5 % of the passengers who rated the flight to be good
travelled in economy class, 50 % of the passengers who rated
the flight to be good travelled in business class, and 12.5 % of
the passengers who rated the flight to be good travelled in first class.
In total, we have 100 customers and hence the relative frequencies are simply
obtained by dividing each absolute frequency by n = 100.
Example 4.1.3
Consider Example 4.1.2. There are 62 passengers flying in the economy class.
From these 62 passengers, 10 rated the quality of the flight as poor, 33 as fair, 15
as good, and 4 as very good. This means that, for the economy class, we can either
place 4 bars next to each other, as in Fig. 4.1a, or we can stack them on top of
each other, as in Fig. 4.1b. The same can be done for the other categories of X,
see Fig. 4.1. Stacked and stratified bar charts are prepared in R by loading the
library lattice and using the function barchart. In detail, one needs to
specify the frequencies to be plotted and whether the bars should be stacked or placed next to each other:
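For example (tc denoting a contingency table such as table(travel_class, satisfaction); the object and variable names here are illustrative):
library(lattice)
barchart(tc, horizontal = FALSE, stack = TRUE)    # stacked bars (Fig. 4.1b)
barchart(tc, horizontal = FALSE, stack = FALSE)   # bars next to each other (Fig. 4.1a)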
Fig. 4.1 Bar charts for travel class and rating of satisfaction
Remark 4.1.2
There are several other options in R to specify stratified bar charts. We refer the
interested reader to Exercise 2.6 to explore how the R package ggplot2 can be
used to make such graphics. Sometimes it can also be useful to visualize the
difference of two variables and not stack or stratify the bars, see Exercise 2.1.
If X and Y were independent, one would expect the absolute frequencies
\tilde{n}_{ij} = \frac{n_{i+}\, n_{+j}}{n} .   (4.3)
Note that the absolute frequencies are always integers but the expected absolute
frequencies may not always be integers.
Example 4.1.4
Recall Example 4.1.2. The expected absolute frequencies for the contingency
table can be calculated using (4.3). For example,
\tilde{n}_{11} = \frac{n_{1+}\, n_{+1}}{n} = \frac{62 \cdot 10}{100} = 6.2 .
Table 4.4 lists both the observed absolute frequency and expected absolute
frequency (in brackets).
To calculate the expected absolute frequencies in R, we can access the
“expected” object returned from a χ²-test applied to the respective contingency
table as follows:
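For example, if tc denotes the contingency table of travel class and satisfaction (an illustrative object name), one possibility is:
chisq.test(tc)$expected   # expected absolute frequencies under independence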
A detailed motivation and explanation of this command is given in Sect. 10.8.
Table 4.4 Observed and expected absolute frequencies for the airline survey
Overall rating of flight quality
Poor Fair Good Very good Total
Travel class Economy 10 (6.2) 33 (22.32) 15 (24.8) 4 (8.68) 62
Business 0 (2.5) 3 (9.0) 20 (10.0) 2 (3.5) 25
First 0 (1.3) 0 (4.68) 5 (5.2) 8 (1.82) 13
Total 10 36 40 14 100
Table 4.5 2 × 2 contingency table
                   Y               Total (row)
X                a      b          a + b
                 c      d          c + d
Total (column)   a + c  b + d      n
(4.4)
or equivalently if
(4.5)
Note that some other forms of the conditions (4.4)–(4.5) can also be derived
in terms of a, b, c, and d.
Example 4.2.1
Suppose a vaccination against flu (influenza) is given to 200 persons. Some of
the persons may get affected by flu despite the vaccination. The data is
summarized in Table 4.6. Using the notations of Table 4.5, we have a = 90,
b = 10, c = 40, and d = 60, and thus an expected frequency of (a + b)(a + c)/n = 100 · 130/200 = 65 for the vaccinated persons not affected by flu, which is
less than the observed a = 90. Hence, being affected by flu is not independent of the
vaccination, i.e. whether one is vaccinated or not has an influence on getting
affected by flu. In the vaccinated group, only 10 of 100 persons are affected by
flu while in the group not vaccinated 60 of 100 persons are affected. Another
interpretation is that if independence holds, then we would expect 65 persons to
be not affected by flu in the vaccinated group but we observe 90 persons. This
shows that vaccination has a protective effect.
Persons
Not affected Affected Total (row)
Vaccinated 90 10 100
Vaccination Not vaccinated 40 60 100
Total (column) 130 70 200
(4.7)
The idea behind the coefficient is that when the relationship between two
variables is stronger, then the deviations between observed and expected
frequencies are expected to be higher (because the expected frequencies are
calculated assuming independence) and this indicates a stronger relationship
between the two variables. If observed and expected frequencies are identical or
similar, then this is an indication that the association between the two variables is
weak and the variables may even be independent. The χ² statistic for a k × l
contingency table sums up all the differences between the observed and expected
frequencies, squares them, and scales them with respect to the expected
frequencies. The squaring of the difference makes the statistic independent of the
positive and negative signs of the difference between observed and expected
frequencies. The range of values for χ² is
0 ≤ χ² ≤ n(min(k, l) − 1).   (4.8)
Note that min(k, l) is the minimum function and simply returns the smaller of the
two numbers k and l. For example, min(3, 4) returns the value 3. Consequently,
the values of χ² obtained from (4.6) can be compared with the range from (4.8).
A value of χ² close to zero indicates a weak association and a value of χ² close
to n(min(k, l) − 1) indicates a strong association between the two variables. Note
that the range of χ² depends on n, k, and l, i.e. the sample size and the dimension
of the contingency table.
The χ² statistic is a symmetric measure in the sense that its value does not
depend on which variable is defined as X and which as Y.
Example 4.2.2
Consider Examples 4.1.2 and 4.1.4. Using the values from Table 4.4, we can
calculate the χ² statistic, which yields χ² ≈ 57.95.
The maximum possible value for the χ² statistic is n(min(k, l) − 1) = 100 · (3 − 1) = 200. Thus, χ² ≈ 57.95
indicates a moderate association between “travel class” and “overall
rating of flight quality” of the passengers. In R, we obtain this result as follows:
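For example, again with the contingency table stored in an object tc (an illustrative name):
chisq.test(tc, correct = FALSE)$statistic   # Pearson's chi-squared statistic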
4.2.2 Cramer’s V Statistic
A common standardization of χ² is Cramer’s V,
V = \sqrt{\frac{\chi^2}{n(\min(k, l) - 1)}} \in [0, 1].   (4.9)
The closer the value of V gets to 1, the stronger the association between the two
variables.
Example 4.2.3
Consider Example 4.2.2. The obtained χ² statistic is 57.95064. To obtain
Cramer’s V, we just need to calculate
V = \sqrt{\frac{\chi^2}{n(\min(k, l) - 1)}} = \sqrt{\frac{57.95064}{100 \cdot 2}} \approx 0.54 .   (4.10)
This indicates a moderate association between “travel class” and “overall rating
of flight quality” because 0.54 lies in the middle of 0 and 1. In R, there are two
options to calculate V: (i) to calculate the χ² statistic and then adjust it as in
(4.9), (ii) to use the functions assocstats and xtabs contained in the
package vcd as follows:
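A sketch of option (ii), assuming the raw data are in a data frame d with variables travel_class and rating (illustrative names); assocstats() also accepts an existing table directly:
library(vcd)
assocstats(xtabs(~ travel_class + rating, data = d))
# or, with a table tc already at hand: assocstats(tc)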
4.2.3 Contingency Coefficient C
Another option to standardize χ² is given by a corrected version of Pearson’s
contingency coefficient:
C_{corr} = \frac{C}{C_{\max}},   (4.11)
with
C = \sqrt{\frac{\chi^2}{\chi^2 + n}} \quad \text{and} \quad C_{\max} = \sqrt{\frac{\min(k, l) - 1}{\min(k, l)}}.   (4.12)
It always lies between 0 and 1. The closer the value of C is to 1, the stronger the
association.
Example 4.2.4
We know from Example 4.2.2 that the χ² statistic for travel class and satisfaction
level is 57.95064. To calculate C_{corr}, we need the following calculations:
C = \sqrt{\frac{57.95064}{57.95064 + 100}} \approx 0.61, \quad C_{\max} = \sqrt{\frac{3 - 1}{3}} \approx 0.82, \quad C_{corr} = \frac{C}{C_{\max}} \approx 0.74 .
4.2.4 Relative Risks and Odds Ratio
For a 2 × 2 contingency table with entries a, b, c, and d as in Table 4.5, the two relative risks are
\frac{a/(a+c)}{b/(b+d)} \quad \text{and} \quad \frac{c/(a+c)}{d/(b+d)}.   (4.13)
The odds ratio is defined as the ratio of these relative risks from (4.13) as
OR = \frac{a/(a+c)}{b/(b+d)} \Big/ \frac{c/(a+c)}{d/(b+d)} = \frac{ad}{bc}.   (4.14)
Alternatively, the odds ratio can be defined as the ratio of the chances for
“disease”, a / b (number of smokers with the disease divided by the number of
non-smokers with the disease), and no disease, c / d (number of smokers with no
disease divided by the number of non-smokers with no disease).
The relative risks compare proportions, while the odds ratio compares odds.
Example 4.2.5
A classical example refers to the possible association of smoking with a
particular disease. Consider the following data on 240 individuals:
                    Smoking
                    Yes     No      Total (row)
Disease    Yes       34      66      100
           No        22     118      140
Total (column)       56     184      240
The relative risks from (4.13) are
\frac{34/56}{66/184} \approx 1.69 \quad \text{and} \quad \frac{22/56}{118/184} \approx 0.61 .   (4.15)
Thus, the proportion of individuals with the disease is 1.69 times higher
among smokers when compared with non-smokers. Similarly, the proportion of
healthy individuals is 0.61 times smaller among smokers when compared with
non-smokers.
The relative risks are calculated to compare the proportion of sick or healthy
patients between smokers and non-smokers. Using these two relative risks, the
odds ratio is obtained as OR ≈ 1.69/0.61 ≈ 2.76.
We can interpret this outcome as follows: (i) the chances of smoking are 2.76
times higher for individuals with the disease compared with healthy individuals
(follows from definition (4.14)). We can also say that (ii) the chances of having
the particular disease is 2.76 times higher for smokers compared with non-
smokers. If we interchange either one of the “Yes” and “No” columns or the
“Yes” and “No” rows, we obtain 1/2.76 ≈ 0.36, giving us further
interpretations: (iii) the chances of smoking are 0.36 times lower for individuals
without disease compared with individuals with the disease, and (iv) the chance
of having the particular disease is 0.36 times lower for non-smokers compared
with smokers. Note that all four interpretations are correct and one needs to
choose the right interpretation in the light of the experimental situation and the
question of interest.
Example 4.3.1
To explore the possible relationship between the overall number of tweets with
the number of followers on Twitter, we take a sample of 10 prime ministers and
heads of state in different countries as of June 2014 and obtain the following
data:
The (Bravais–Pearson) correlation coefficient of X and Y is defined as
r = r(X, Y) = \frac{S_{xy}}{\sqrt{S_{xx} S_{yy}}}   (4.16)
with
S_{xy} = \sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})   (4.17)
and
S_{xx} = \sum_{i=1}^{n} (x_i - \bar{x})^2, \qquad S_{yy} = \sum_{i=1}^{n} (y_i - \bar{y})^2.   (4.18)
Karl Pearson (1857–1936) presented the first rigorous treatment of correlation
and acknowledged Auguste Bravais (1811–1863) for ascertaining the initial
mathematical formulae for correlation. This is why the correlation coefficient is
also known as the Bravais–Pearson correlation coefficient.
The correlation coefficient is independent of the units of measurement of X
and Y. For example, if someone measures the height and weight in metres and
kilograms respectively and another person measures them in centimetres and
grams, respectively, then the correlation coefficient between the two sets of data
will be the same. The correlation coefficient is symmetric, i.e. r(X, Y) = r(Y, X).
The limits of r are −1 ≤ r ≤ 1. If all the points in a scatter plot lie exactly on a
straight line, then the linear relationship between X and Y is perfect and |r| = 1,
see also Exercise 4.7. If the relationship between X and Y is (i) perfectly linear
and increasing, then r = +1 and (ii) perfectly linear and decreasing, then r = −1.
The signs of r thus determine the direction of the association. If r is close to
zero, then it indicates that the variables are independent or the relationship is not
linear. Note that if the relationship between X and Y is nonlinear, then the degree
of linear relationship may be low and r is then close to zero even if the variables
are clearly not independent.
Example 4.3.2
Look again at the scatter plots in Figs. 4.2 and 4.3. We observe strong positive
linear correlation in Fig. 4.2a ( ), strong negative linear correlation in
Fig. 4.2b ( ), moderate positive linear correlation in Fig. 4.2c ( ),
moderate negative linear association in Fig. 4.2d ( ), no visible
correlation in Fig. 4.3a ( ), and strong nonlinear (but not so strong linear)
correlation in Fig. 4.3b ( ).
Example 4.3.3
In a decathlon competition, a group of athletes are competing with each other in
10 different track and field events. Suppose we are interested in how the results
of the 100-m race relate to the results of the long jump competition. The
correlation coefficient for the 100-m race (X, in seconds) and the long jump
event (Y, in metres) for 5 athletes participating in the 2004 Olympic Games (see
also Appendix A.4) are listed in Table 4.7.
To calculate the correlation coefficient, we need the following summary
statistics:
The correlation coefficient therefore is
Spearman’s rank correlation coefficient is defined as
R = 1 - \frac{6 \sum_{i=1}^{n} d_i^2}{n(n^2 - 1)},   (4.19)
where d_i denotes the difference between the ranks of x_i and y_i.
The values of R lie between −1 and +1 and measure the degree of correlation
between the ranks of X and Y. Note that it does not matter whether we choose an
ascending or descending order of the ranks, the value of R remains the same.
When all the observations are assigned exactly the same ranks, then R = +1, and
when all the observations are assigned exactly the opposite ranks, then R = −1.
Example 4.3.4
Look again at the scatter plots in Figs. 4.2 and 4.3. We observe strong positive
correlation in Fig. 4.2a ( ), strong negative correlation in Fig. 4.2b (
), moderate positive correlation in Fig. 4.2c ( ), moderate
negative association in Fig. 4.2d ( ), no visible correlation in Fig. 4.3a (
), and strong nonlinear correlation in Fig. 4.3b ( ).
Example 4.3.5
Let us follow Example 4.3.3 a bit further and calculate Spearman’s rank
correlation coefficient for the first five observations of the decathlon data. Again
we list the results of the 100-m race (X) and the results of the long jump
competition (Y). In addition, we assign ranks to both X and Y. For example, the
shortest time receives rank 1, whereas the longest time receives rank 5.
Similarly, the shortest long jump result receives rank 1, the longest long jump
result receives rank 5.
If two or more observations take the same values for x_i (or y_i), then there is a tie.
In such situations, the respective ranks can simply be averaged, though more
complicated solutions also exist (one of which is implemented in the R function
cor). For example, if in Example 4.3.5 Bryan Clay’s 100-m time was 10.50 s instead of
10.44 s, then both Bryan Clay and Dmitriy Karpov would have had the same time. Instead of
assigning the ranks 1 and 2 to them, we assign the ranks 1.5 to each of them.
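For example, averaged ranks for ties and the resulting rank correlation can be inspected in R (the numbers below are made up for illustration):
x <- c(10.50, 10.50, 10.89, 10.62, 10.91)   # hypothetical 100-m times with a tie
rank(x)                                     # the two tied times both receive rank 1.5
y <- c(7.96, 7.81, 7.74, 7.44, 7.14)        # hypothetical long jump results
cor(x, y, method = "spearman")              # Spearman's R based on these ranks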
Table 4.8 Payment options and timeliness survey with 100 participating customers
Timeliness
Unsatisfied Satisfied Very satisfied Total
(1) (2) (3)
Payment options Not enough (1) 7 11 26 44
Enough (2) 10 15 31 56
Total 17 26 57 100
The differences between the correlation coefficient and the rank correlation
coefficient are manifold: firstly, Pearson’s correlation coefficient can be used for
continuous variables only, but not for nominal or ordinal variables. The rank
correlation coefficient can be used for either two continuous or two ordinal
variables or a combination of an ordinal and a continuous variable, but not for
two nominal variables. Moreover, the rank correlation coefficient responds to
any type of relationship whereas Pearson’s correlation measures the degree of a
linear relationship only—see also Fig. 4.3b. Another difference between the two
correlation coefficients is that Pearson uses the entire information contained in
the continuous data in contrast to the rank correlation coefficient which uses
only ordinal information contained in the ordered data.
4.3.4 Measures Using Discordant and Concordant Pairs
Another concept which uses ranks to measure the association between ordinal
variables is based on concordant and discordant observation pairs. It is best
illustrated by means of an example.
Example 4.3.6
Suppose an online book store conducts a survey on their customer’s satisfaction
with respect to both the timeliness of deliveries (X) and payment options (Y). Let
us consider the following contingency table with a summary of the
responses of 100 customers. We assume that the categories for both variables can
be ordered and ranks can be assigned to different categories, see the numbers in
brackets in Table 4.8. There are 100 observation pairs which summarize
the response of the customers with respect to both X and Y. For example, there
are 7 customers who were unsatisfied with the timeliness of the deliveries and
complained that there are not enough payment options. If we compare two
responses and , it might be possible that one customer is more
happy (or more unhappy) than the other customer with respect to both X and Y or
that one customer is more happy with respect to X but more unhappy with
respect to Y (or vice versa). If the former is the case, then this is a concordant
observation pair; if the latter is true, then it is a discordant pair. For instance, a
customer who replied “enough” and “satisfied” is more happy than a customer
who replied “not enough” and “unsatisfied” because he is more happy with
respect to both X and Y.
In general, for two observation pairs (x_i, y_i) and (x_{i'}, y_{i'}), the pair is
concordant if x_i < x_{i'} and y_i < y_{i'} (or x_i > x_{i'} and y_i > y_{i'}),
discordant if x_i < x_{i'} and y_i > y_{i'} (or x_i > x_{i'} and y_i < y_{i'}),
tied if x_i = x_{i'} (or y_i = y_{i'}).
Obviously, if we have only concordant observations, then there is a strong
positive association because a higher value of X (in terms of the ranking) implies
a higher value of Y. However, if we have only discordant observations, then
there is a clear negative association. The measures which are introduced below
simply put the number of concordant and discordant pairs into relation. This idea
is reflected in Goodman and Kruskal’s γ, which is defined as
γ = \frac{K - D}{K + D},   (4.20)
where K denotes the total number of concordant pairs and D the total number of
discordant pairs. A related measure is Stuart’s τ_c,
τ_c = \frac{2 \min(k, l)(K - D)}{n^2(\min(k, l) - 1)}.   (4.21)
Both measures are standardized to lie between −1 and 1, where larger values
indicate a stronger association and the sign indicates the direction of the
association.
Example 4.3.7
Consider Example 4.3.6. A customer who replied “enough” and “satisfied” is
more happy than a customer who replied “not enough” and “unsatisfied”
because the observation pairs, using ranks, are (2, 2) and (1, 1) and therefore
both components of the first pair are larger. There are 7 · 15 = 105 such pairs. Similarly, those who said “not
enough” and “unsatisfied” are less happy than those who said “enough” and
“very satisfied” (7 · 31 = 217 pairs). Fig. 4.5 summarizes the comparisons in detail.
Fig. 4.5a shows that the cell (not enough, unsatisfied) is concordant to
(enough, satisfied) and (enough, very satisfied), and tied to
(not enough, satisfied), (not enough, very satisfied), and
(enough, unsatisfied). Thus for these comparisons, we have 0
discordant pairs, 7 · (15 + 31) = 322 concordant pairs, and 7 · (11 + 26 + 10) = 329 tied
pairs. Fig. 4.5b–f show how the task can be completed. While tiresome,
systematically working through the table (and making sure to not count pairs
more than once) yields K = 7 · (15 + 31) + 11 · 31 = 663 concordant pairs and D = 11 · 10 + 26 · (10 + 15) = 760 discordant pairs.
As a visual rule of thumb, working from the top left to the bottom right
yields the concordant pairs; and working from the bottom left to the top right
yields the discordant pairs. It follows that
γ = \frac{663 - 760}{663 + 760} \approx -0.07,
which indicates no clear relationship between the two variables. A similar result
is obtained using Stuart’s τ_c, which is 2 · 2 · (663 − 760)/(100² · 1) ≈ −0.04. This rather lengthy task
can be made much quicker by using the ord.gamma and ord.tau commands
from the R library ryouready :
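For example, with the data from Table 4.8 stored as a matrix (the object name is illustrative):
library(ryouready)
tab <- matrix(c(7, 10, 11, 15, 26, 31), nrow = 2)  # rows: payment options; columns: timeliness
ord.gamma(tab)   # Goodman and Kruskal's gamma
ord.tau(tab)     # Stuart's tau_c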
Fig. 4.5 Scheme to visualize concordant (c), discordant (d), and tied (t) pairs in a contingency table
Example 4.4.1
Consider again our pizza delivery example (Appendix A.4). If we are interested
in the pizza delivery times by branch, we may simply plot the box plots and
ECDF’s of delivery time by branch. Figure 4.6 shows that the shortest delivery
times can be observed in the branch in the East. Producing these graphs in R is
straightforward: The boxplot command can be used for two variables by
separating them with the ~ sign. For the ECDF, we have to produce a plot for
each branch and overlay them with the “add=TRUE” option.
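A possible sketch, assuming the data frame pizza contains the variables time and branch, and that the branches are labelled East, West, and Centre (the East label is mentioned above; the other labels are assumptions):
boxplot(time ~ branch, data = pizza)                        # delivery time by branch
plot(ecdf(pizza$time[pizza$branch == "East"]))              # ECDF for one branch
plot(ecdf(pizza$time[pizza$branch == "West"]), add = TRUE)  # overlay further branches
plot(ecdf(pizza$time[pizza$branch == "Centre"]), add = TRUE)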
Fig. 4.6 Distribution of pizza delivery time stratified by branch
Café i
1 3 6
2 8 7
3 7 10
4 9 8
5 5 4
(c) Suppose the coffee can only be rated as either good (>5) or bad (≤5). Do
the chances of a good rating differ between the two journalists?
Exercise 4.2
A total of 150 customers of a petrol station are asked about their satisfaction with
their car and motorbike insurance. The results are summarized below:
(b) Combine the categories “car” and “car (diesel engine)” and produce the
corresponding table. Calculate as efficiently as possible and give a
meaningful interpretation of the odds ratio.
Exercise 4.3
There has been a big debate about the usefulness of speed limits on public roads.
Consider the following table which lists the speed limits for country roads (in
miles/h) and traffic deaths (per 100 million km) for different countries in 1986
when the debate was particularly serious:
(c) What are the effects on the correlation coefficients if the speed limit is
given in km/h rather than miles/h (1 mile/h ≈ 1.61 km/h)?
(d) Consider one more observation: the speed limit for England was 70 miles/h
and the death rate was 3.1.
Exercise 4.4
The famous passenger liner Titanic hit an iceberg in 1912 and sank. A total of
337 passengers travelled in first class, 285 in second class, and 721 in third class.
In addition, there were 885 staff members on board. Not all passengers could be
rescued. Only the following were rescued: 135 from the first class, 160 from the
second class, 541 from the third class and 674 staff.
(a) Determine and interpret the contingency table for the variables “travel
class” and “rescue status”.
(c) What would the contingency table from (a) look like under the
independence assumption? Calculate Cramer’s V statistic. Is there any
association between travel class and rescue status?
(d) Combine the categories “first class” and “second class” as well as “third
class” and “staff”. Create a contingency table based on these new
categories. Determine and interpret Cramer’s V, the odds ratio, and relative
risks of your choice.
(e) Given the results from (a) to (d), what are your conclusions?
Fig. 4.7 Temperature and hotel occupancy for the different cities
Exercise 4.5
To study the association of the monthly average temperature (in °C, X) and hotel
occupation (in %, Y), we consider data from three cities: Polenca (Mallorca,
Spain) as a summer holiday destination, Davos (Switzerland) as a winter skiing
destination, and Basel (Switzerland) as a business destination.
(b) Interpret the scatter plot in Fig. 4.7 which visualizes temperature and hotel
occupancy for Davos (D), Polenca (P), and Basel (B).
(c) Use R to calculate the correlation coefficient separately for each city.
Interpret the results and discuss the use of the correlation coefficient if
more than two variables are available.
Exercise 4.6
Consider a neighbourhood survey on the use of a local park. Respondents were
asked whether the park may be used for summer music concerts and whether
dog owners should put their dogs on a lead. The results are summarized in the
following contingency table:
(b) Now ignore the ordinal structure of the data and calculate Cramer’s V.
(c) Create the contingency table which is obtained when the categories “no
opinion” and “agree” are combined.
(d) What is the relative risk of disagreement with summer concerts depending
on the opinion about using leads?
(e) Calculate the odds ratio and offer two interpretations of it.
(g) What is your final interpretation and what may be the best measure to use
in this example?
Exercise 4.7
Consider n observations for which y_i = a + b x_i, b > 0, holds. Show that r = 1.
Exercise 4.8
Make yourself familiar with the Olympic decathlon data described in
Appendix A.4. Read in and attach the data in R.
(c) Apply the cor command to the whole data and interpret the output.
(d) Omit the two rows which contain missing data and interpret the output
again.
Exercise 4.9
We are interested in the pizza delivery data which is described in Appendix A.4.
(a) Read in the data and create two new binary variables which describe
whether a pizza was hot ( C) and the delivery time was short (
min). Create a contingency table for the two new variables.
(b) Calculate and interpret the odds ratio for the contingency table from (a).
(c) Use Cramer’s V, Stuart’s τ_c, Goodman and Kruskal’s γ, and a stacked bar
chart to explore the association between the categorical time and
temperature variables.
(d) Draw a scatter plot for the continuous time and temperature variables.
Determine both the Bravais–Pearson and Spearman correlation
coefficients.
5. Combinatorics
Christian Heumann1 , Michael Schomaker2 and Shalabh3
(1) Department of Statistics, Ludwig-Maximilians-Universität München,
München, Germany
(2) Centre for Infectious Disease Epidemiology and Research, University of
Cape Town, Cape Town, South Africa
(3) Department of Mathematics and Statistics, Indian Institute of Technology
Kanpur, Kanpur, India
Christian Heumann
Email: Christian.heumann@stat.uni-muenchen.de
5.1 Introduction
Combinatorics is a special branch of mathematics. It has many applications not
only in several interesting fields such as enumerative combinatorics (the
classical application), but also in other fields, for example in graph theory and
optimization.
First, we try to motivate and understand the role of combinatorics in
statistics. Consider a simple example in which someone goes to a cafe. The
person would like a hot beverage and a cake. Assume that one can choose
among three different beverages, for example cappuccino, hot chocolate, and
green tea, and three different cakes, let us say carrot cake, chocolate cake, and
lemon tart. The person may consider different beverage and cake combinations
when placing the order, for example carrot cake and cappuccino, carrot cake and
tea, and hot chocolate and lemon tart. From a statistical perspective, the
customer is evaluating the possible combinations before making a decision.
Depending on their preferences, the order will be placed by choosing one of the
combinations.
In this example, it is easy to calculate the number of possible combinations.
There are three different beverages and three different cakes to choose from,
leading to nine different (3 · 3 = 9) beverage and cake combinations. However,
suppose there is a choice of 15 hot beverages and 8 different cakes. How many
orders can be made? (Answer: 15 · 8 = 120.) What if the person decides to order two
cakes, how will it affect the number of possible combinations of choices? It will
be a tedious task to count all the possibilities. So we need a systematic approach
to count such possible combinations. Combinatorics deals with the counting of
different possibilities in a systematic approach.
People often use the urn model to understand the system in the counting
process. The urn model deals with the drawing of balls from an urn. The balls in
the urn represent the units of a population, or the features of a population. The
balls may vary in colour or size to represent specific properties of a unit or
feature. We illustrate this concept in more detail in Fig. 5.1.
Fig. 5.1 a Representation of the urn model. Drawing from the urn model b with replacement and c without
replacement. Compositions of three drawn balls: d all balls are distinguishable and e some balls are not
distinguishable
Suppose there are 5 balls of three different colours—two black, one grey, and
two white (see Fig. 5.1a). This can be generalized to a situation in which there
are n balls in the urn and we want to draw m balls. Suppose we want to know
how many different possibilities exist to draw m out of n balls (thus
determining the number of distinguishable combinations).
To deal with such a question, we first need to decide whether a ball will be
put back into the urn after it is drawn or not. Figure 5.1b illustrates that a grey
ball is drawn from the urn and then placed back (illustrated by the two-headed
arrow). We say the ball is drawn with replacement. Figure 5.1c illustrates a
different situation in which the grey ball is drawn from the urn and is not placed
back into the urn (illustrated by the one-headed arrow). We say the ball is drawn
without replacement.
Further, we may be interested in knowing the
total number of ways in which the chosen set of balls can be arranged in a
distinguishable order (which we will define as permutations later in this
chapter).
To answer the question how many permutations exist, we first need to decide
whether all the chosen balls are distinguishable from each other or not. For
example, in Fig. 5.1d, the three chosen balls have different colours; therefore,
they are distinguishable. There are many options on how they can be arranged.
In contrast, some of the chosen balls in Fig. 5.1e are the same colour, they are
therefore not distinguishable. Consequently, the number of combinations is
much more limited. The concept of balls and urns just represents the features of
observations from a sample. We illustrate this in more detail in the following
example.
Example 5.1.1
Say a father promises his daughter three scoops of ice cream if she cleans up her
room. For simplicity, let us assume the daughter has a choice of four flavours:
chocolate, banana, cherry, and lemon. How many different choices does the
daughter have? If each scoop has to be a different flavour she obviously has
much less choice than if the scoops can have the same flavour. In the urn model,
this is represented by the concept of “with/without replacement”. The urn
contains 4 balls of 4 different colours which represent the ice cream flavours.
For each of the three scoops, a ball is drawn to determine the flavour. If we draw
with replacement, each flavour can be potentially chosen multiple times;
however, if we draw without replacement each flavour can be chosen only once.
Then, the number of possible combinations is easy to calculate: it is 4, i.e.
(chocolate, banana, and cherry); (chocolate, banana, and lemon); (chocolate,
cherry, and lemon); and (banana, cherry, and lemon). But what if we have more
choices? Or if we can draw flavours multiple times? We then need calculation
rules which help us counting the number of options.
Now, let us assume that the daughter picked the flavours (chocolate [C],
banana [B], and lemon [L]). Like many other children, she prefers to eat her
most favourite flavour (chocolate) last, and her least favourite flavour (cherry)
first. Therefore, the order in which the scoops are placed on top of the cone are
important! In how many different ways can the scoops be placed on top of the
cone? This relates to the question of the number of distinguishable permutations.
The answer is 6: (C,B,L)–(C,L,B)–(B,L,C)–(B,C,L)–(L,B,C)–(L,C,B). But what
if the daughter did pick a flavour multiple times, e.g. (chocolate, chocolate,
lemon)? Since the two chocolate scoops are non-distinguishable, there are fewer
permutations: (chocolate, chocolate, and lemon)–(chocolate, lemon, and
chocolate)–(lemon, chocolate, and chocolate).
The bottom line of this example is that the number of combinations/options
is determined by (i) whether we draw with or without replacement (i.e. allow
flavours to be chosen more than once) and (ii) whether the arrangement in a
particular order (=permutation) is of any specific interest.
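The counts from this example can be checked in R; choose() gives binomial coefficients and factorial() the factorial function:
choose(4, 3)                  # 3 different flavours out of 4: 4 combinations
factorial(3)                  # orderings of 3 distinguishable scoops: 6
factorial(3) / factorial(2)   # orderings when 2 scoops share a flavour: 3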
Consider the urn example again. Suppose three balls of different colours, black,
grey, and white, are drawn. Now there are two options: The first option is to take
into account the order in which the balls are drawn. In such a situation, two
possible sets of balls such as (black, grey, and white) and (white, black, and
grey) constitute two different sets. Such a set is called an ordered set. In the
second option, we do not take into account the order in which the balls are
drawn. In such a situation, the two possible sets of balls such as (black, grey, and
white) and (white, black, and grey) are the same sets and constitute an unordered
set of balls.
Definition 5.1.1
A group of elements is said to be ordered if the order in which these elements
are drawn is of relevance. Otherwise, it is called unordered .
Examples.
Ordered samples:
– The first three places in an Olympic 100 m race are determined by
the order in which the athletes arrive at the finishing line. If 8
athletes are competing with each other, the number of possible
results for the first three places is of interest. In the urn language, we
are taking draws without replacement (since every athlete can only
have one distinct place).
– In a raffle with two prizes, the first drawn raffle ticket gets the first
prize and the second raffle ticket gets the second prize.
– There exist various esoteric tarot card games which claim to foretell
someone’s fortune with respect to several aspects of life. The order
in which the cards are shown on the table is important for the
interpretation.
Unordered samples:
– The selected members for a national football team. The order in
which the selected names are announced is irrelevant.
– Out of 10 economists, 10 medical doctors, and 10 statisticians, an
advisory committee consisting of 4 economists, 3 medical doctors,
and 2 statisticians is elected.
– Fishing 20 fish from a lake.
– A bunch of 10 flowers made from 21 flowers of 4 different colours.
Definition 5.1.2
The factorial function n! is defined as
n! = \begin{cases} 1 & \text{if } n = 0 \\ 1 \cdot 2 \cdot 3 \cdots n & \text{if } n > 0. \end{cases}   (5.1)
Example 5.1.2
It follows from the definition of the factorial function that
5.2 Permutations
Definition 5.2.1
Consider a set of n elements. Each ordered composition of these n elements is
called a permutation.
We distinguish between two cases: If all the elements are distinguishable, then
we speak of permutation without replacement . However, if some or all of the
elements are not distinguishable, then we speak of permutation with replacement
. Please note that the meaning of “replacement” here is just a convention and
does not directly refer to the drawings, e.g. from the urn model considered in
Example 5.1.1.
Example 5.2.1
There were three candidate cities for hosting the 2020 Olympic Games: Tokyo
(T), Istanbul (I), and Madrid (M). Before the election, there were 3! = 6 possible
outcomes, regarding the final rankings of the cities: (T, I, M), (T, M, I), (I, T, M), (I, M, T), (M, T, I), and (M, I, T).
In general, the total number of permutations of n elements, of which groups of n_1, n_2, …, n_s elements are not distinguishable (n_1 + n_2 + ⋯ + n_s = n), is
\frac{n!}{n_1!\, n_2! \cdots n_s!}.   (5.3)
Example 5.2.2
Consider the data in Fig. 5.1e. There are two groups consisting of two black
balls (n_1 = 2) and one white ball (n_2 = 1). So there are the following three
possible combinations to arrange the balls: (black, black, and white), (black,
white, and black), and (white, black, and black). This can be determined by
calculating 3!/(2! · 1!) = 3.
5.3 Combinations
Definition 5.3.1
The binomial coefficient for any integers m and n with n ≥ m ≥ 0 is denoted and
defined as
\binom{n}{m} = \frac{n!}{m!\,(n - m)!}.   (5.4)
It is read as “n choose m” and can be calculated in R using the following
command:
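In base R, the binomial coefficient is computed with the choose() function, for example:
choose(5, 2)   # returns 10, i.e. "5 choose 2"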
(5.5)
We now answer the question of how many different possibilities exist to draw m
out of n elements, i.e. m out of n balls from an urn. It is necessary to distinguish
between the following four cases:
(1) Combinations without replacement and without consideration of the order of the elements.
(2) Combinations without replacement and with consideration of the order of the elements.
(3) Combinations with replacement and without consideration of the order of the elements.
(4) Combinations with replacement and with consideration of the order of the
elements.
Example 5.3.1
Suppose a company elects a new board of directors. The board consists of 5
members and 15 people are eligible to be elected. How many combinations for
the board of directors exist? Since a person cannot be elected twice, we have a
situation where there is no replacement. The order is also of no importance:
either one is elected or not. We can thus apply (5.6) which yields
\binom{15}{5} = \frac{15!}{5! \cdot 10!} = 3003.   (5.7)
Example 5.3.2
Consider a horse race with 12 horses. A possible bet is to forecast the winner of
the race, the second horse of the race, and the third horse of the race. The total
number of different combinations for the horses in the first three places is 12 · 11 · 10 = 1320.
This result can be explained intuitively: for the first place, there is a choice of 12
different horses. For the second place, there is a choice of 11 different horses (12
horses minus the winner). For the third place, there is a choice of 10 different
horses (12 horses minus the first and second horses). The total number of
combinations is the product 12 · 11 · 10 = 1320. This can be calculated in R as follows:
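For example:
12 * 11 * 10                      # 1320
factorial(12) / factorial(12 - 3) # equivalently via factorials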
5.3.3 Combinations with Replacement and without
Consideration of the Order
The total number of different combinations with replacement and without
consideration of the order is
\binom{n + m - 1}{m} = \binom{n + m - 1}{n - 1}.   (5.8)
Note that these are the two representations which follow from the definition of
the binomial coefficient but typically only the first representation is used in
textbooks. We will motivate the second representation after Example 5.3.3.
Example 5.3.3
A farmer has 2 fields and aspires to cultivate one out of 4 different organic
products per field. Then, the total number of choices he has is
\binom{4 + 2 - 1}{2} = \binom{5}{2} = 10.   (5.9)
If the 4 different organic products are denoted as a, b, c, and d, then the following
combinations are possible: (a, a), (a, b), (a, c), (a, d), (b, b), (b, c), (b, d), (c, c), (c, d), and (d, d).
Please note that, for example, (a,b) is identical to (b,a) because the order in
which the products a and b are cultivated on the first or second field is not
important in this example.
Example 5.3.4
Consider a credit card with a four-digit personal identification number (PIN)
code. The total number of possible combinations for the PIN is
10 · 10 · 10 · 10 = 10⁴ = 10,000.
Note that every digit in the first, second, third, and fourth places can be
chosen out of ten digits from 0 to 9.
Exercise 5.2
A language teacher is concerned about the vocabularies of his students. He thus
tests 5 students in each lecture. What are the total number of possible
combinations
Exercise 5.3
“Gobang” is a popular game in which two players set counters on a board with
381 knots. One needs to place 5 consecutive counters in a row to win the game.
There are also rules on how to remove counters from the other player. Consider a
match where 64 counters have already been placed on the board. How many
possible combinations exist to place 64 counters on the board?
Exercise 5.4
A shop offers a special tray of beer: “Munich’s favourites”. Customers are
allowed to fill the tray, which holds 20 bottles, with any combination of
Munich’s 6 most popular beers (from 6 different breweries).
(a) What are the number of possible combinations to fill the tray?
(b) A customer insists of having at least one beer from each brewery in his
tray. How many options does he have to fill the tray?
Exercise 5.5
The FIFA World Cup 2018 in Russia consists of 32 teams. How many
combinations for the top 3 teams exist when
(a) taking into account the order of these top 3 teams and
(b) without taking into account the order of these top 3 teams?
Exercise 5.6
An online book store assigns membership codes to each member. For
administrative reasons, these codes consist of four letters between “A” and “L”.
A special discount period increased the total number of members from 18,200 to
20,500. Are there enough combinations of codes left to be assigned for the new
membership codes?
Exercise 5.7
In the old scoring system of ice skating (valid until 2004), each member of a jury
of 9 people judged the performance of the skaters on a scale between 0 and 6. It
was a decimal scale and thus scores such as 5.1 and 5.2 were possible. Calculate
the number of possible score combinations from the jury.
Exercise 5.8
It is possible in Pascal’s triangle (Fig. 5.2, left) to view each entry as the sum of
the two entries directly above it. For example, the 3 on the fourth line from the
top is the sum of the 1 and 2 above the 3. Another interpretation refers to a
geometric representation of the binomial coefficient, (Fig. 5.2, right) with
Fig. 5.2 Excerpt from Pascal’s triangle (left) and its representation by means of binomial coefficients
(right)
(a) Show that each entry in the bold third diagonal line can be represented via
.
(b) Now show that the sum of two consecutive entries in the bold third
diagonal line always corresponds to quadratic numbers.
© Springer International Publishing Switzerland 2016
Christian Heumann, Michael Schomaker and Shalabh, Introduction to Statistics and Data Analysis ,
DOI 10.1007/978-3-319-46162-5_6
Christian Heumann
Email: Christian.heumann@stat.uni-muenchen.de
Let us first consider some simple examples to understand the need for
probability theory. Often one needs to make a decision whether to carry an
umbrella or not when leaving the house; a company might wonder whether to
introduce a new advertisement to possibly increase sales or to continue with their
current advertisement; or someone may want to choose a restaurant based on
where he can get his favourite dish. In all these situations, randomness is
involved. For example, the decision of whether to carry an umbrella or not is
based on the possibility or chance of rain. The sales of the company may
increase, decrease, or remain unchanged with a new advertisement. The
investment in a new advertising campaign may therefore only be useful if the
probability of its success is higher than that of the current advertisement.
Similarly, one may choose the restaurant where one is most confident of getting
the food of one’s choice. In all such cases, an event may be happening or not and
depending on its likelihood, actions are taken. The purpose of this chapter is to
learn how to calculate such likelihoods of events happening and not happening.
6.1 Basic Concepts and Set Theory
A simple (not rigorous) definition of a random experiment requires that the
experiment can be repeated any number of times under the same set of
conditions, and its outcome is known only after the completion of the
experiment. A simple and classical example of a random experiment is the
tossing of a coin or the rolling of a die. When tossing a coin, it is unknown what
the outcome will be, head or tail, until the coin is tossed. The experiment can be
repeated and different outcomes may be observed in each repetition. Similarly,
when rolling a die, it is unknown how many dots will appear on the upper
surface until the die is rolled. Again, the die can be rolled repeatedly and
different numbers of dots are obtained in each trial. A possible outcome of a
random experiment is called a simple event (or elementary event ) and denoted
by ω_i. The set of all possible outcomes, ω_1, ω_2, …, ω_k, is called the sample
space and is denoted as Ω, i.e. Ω = {ω_1, ω_2, …, ω_k}. Subsets of Ω are called
events and are denoted by capital letters such as A, B, C. The set of all simple
events that are contained in the event A is denoted by Ω_A. The event Ā refers to
the non-occurring of A and is called a composite or complementary event .
Also Ω is an event. Since it contains all possible outcomes, we say that Ω will
always occur and we call it a sure event or certain event . On the other hand, if
we consider the null set ∅ as an event, then this event can never occur and
we call it an impossible event . The sure event therefore is the set of all
elementary events, and the impossible event is the set with no elementary events.
The above concepts of “events” form the basis of a definition of
“probability”. Once we understand the concept of probability, we can develop a
framework to make conclusions about the population of interest, using a sample
of data.
Example 6.1.1
(Rolling a die) If a die is rolled once, then the possible outcomes are the number
of dots on the upper surface: 1, 2, …, 6. Therefore, the sample space is the set of
simple events ω_1 = “1”, ω_2 = “2”, …, ω_6 = “6” and Ω = {ω_1, ω_2, …, ω_6}. Any
subset of Ω can be used to define an event. For example, an event A may be “an
even number of dots on the upper surface of the die”. There are three
possibilities that this event occurs: ω_2, ω_4, or ω_6. If an odd number shows up,
then the composite event Ā occurs instead of A. If an event is defined to observe
only one particular number, say ω_1 = “1”, then it is an elementary event. An
example of a sure event is “a number which is greater than or equal to 1”
because any number between 1 and 6 is greater than or equal to 1. An impossible
event is “the number is 7”.
Example 6.1.2
(Rolling two dice) Suppose we throw two dice simultaneously and an event is
defined as the “number of dots observed on the upper surface of both the dice”;
then, there are 36 simple events defined as (number of dots on first die, number
of dots on second die), i.e. ω = (number on first die, number on second die). Therefore, Ω is
{(1, 1), (1, 2), …, (1, 6), (2, 1), …, (6, 6)}.
One can define different events and their corresponding sample spaces. For
example, if an event A is defined as “upper faces of both the dice contain the
same number of dots”, then the sample space is A = {(1, 1), (2, 2), (3, 3), (4, 4), (5, 5), (6, 6)}. If another event B is defined as “the sum of numbers on the upper
faces is 6”, then the sample space is B = {(1, 5), (2, 4), (3, 3), (4, 2), (5, 1)}. A sure
event is “get either an even number or an odd number”; an impossible event
would be “the sum of the two dice is greater than 13”.
Example 6.1.3
Consider Example 6.1.1 where the sample space of rolling a die was determined
as Ω = {ω_1, ω_2, …, ω_6} with ω_1 = “1”, ω_2 = “2”, …, ω_6 = “6”.
If and B is the set of all odd numbers, then
and thus .
If A = {2, 4, 6} is the set of even numbers and B = {3, 6} is the set of all
numbers which are divisible by 3, then A ∪ B = {2, 3, 4, 6} is the
collection of simple events for which the number is either even or divisible
by 3 or both.
If A = {1, 3, 5} is the set of odd numbers and B = {3, 6} is the set of the
numbers which are divisible by 3, then A ∩ B = {3} is the set of simple
events in which the numbers are odd and divisible by 3.
If A = {1, 3, 5} is the set of odd numbers and B = {3, 6} is the set of the
numbers which are divisible by 3, then A \ B = {1, 5} is the set of simple
events in which the numbers are odd but not divisible by 3.
If A = {2, 4, 6} is the set of even numbers, then Ā = {1, 3, 5} is the set of
odd numbers.
Remark 6.1.1
Some textbooks also use the following notations:
We can use these definitions and notations to derive the following properties of a
particular event A:
Definition 6.1.1
Two events A and B are disjoint if A ∩ B = ∅ holds, i.e. if both events cannot
occur simultaneously.
Example 6.1.4
The events A and Ā are disjoint events.
Definition 6.1.2
The events A_1, A_2, …, A_m are said to be mutually or pairwise disjoint, if
A_i ∩ A_j = ∅ whenever i ≠ j.
Example 6.1.5
Recall Example 6.1.1. If A = {1, 3, 5} and B = {2, 4, 6} are the sets of odd
and even numbers, respectively, then the events A and B are disjoint.
Definition 6.1.3
The events A_1, A_2, …, A_m form a complete decomposition of Ω if and only if
A_i ∩ A_j = ∅ for all i ≠ j and A_1 ∪ A_2 ∪ ⋯ ∪ A_m = Ω.
Example 6.1.6
Consider Example 6.1.1. The elementary events A_1 = {1}, A_2 = {2}, …, A_6 = {6}
form a complete decomposition. Other complete decompositions are, e.g. A_1 = {1, 2, 3}, A_2 = {4, 5, 6} or A_1 = {1, 2, 3, 4, 5}, A_2 = {6}.
Example 6.2.1
Consider roulette, a game frequently played in casinos. The roulette table
consists of 37 numbers from 0 to 36. Out of these 37 numbers, 18 numbers are
red, 18 are black and one (zero) is green. Players can place their bets on either a
single number or a range of numbers, the colours red or black, whether the
number is odd or even, among many other choices. A casino employee spins a
wheel (containing pockets representing the 37 numbers) in one direction and
then spins a ball over the wheel in the opposite direction. The wheel and ball
gradually slow down and the ball finally settles in a pocket. The pocket number
in which the ball sits down when the wheel stops is the winning number.
Consider three possible outcomes ω_1: “red”, ω_2: “black”, and ω_3: “green (zero)”.
Suppose the roulette ball is spun n = 500 times. All the outcomes are counted and
recorded as follows: ω_1 occurs 240 times, ω_2 occurs 250 times and ω_3 occurs 10
times. Then, the absolute frequencies are given by n(ω_1) = 240,
n(ω_2) = 250, and n(ω_3) = 10. We therefore get the relative frequencies as
f(ω_1) = 240/500 = 0.48, f(ω_2) = 250/500 = 0.5, and f(ω_3) = 10/500 = 0.02,
where n(A) denotes the number of times an event A occurs out of n times.
Example 6.2.2
Suppose a fair coin is tossed times and we observe the number of heads
times and number of tails times. The meaning of a fair coin
in this case is that the probabilities of head and tail are equal (i.e. 0.5). Then, the
relative frequencies in the experiment are and
. When the coin is tossed a large number of times and n tends
to infinity, then both and will have a limiting value 0.5 which is
simply the probability of getting a head or tail in tossing a fair coin.
Example 6.2.3
In Example 6.2.1, the relative frequency of the event “red” tends to 18/37 as n
tends to infinity because 18 out of 37 numbers are red.
Definition 6.2.1
The proportion
P(A) = \frac{|A|}{|\Omega|}   (6.2)
is called the Laplace probability , where |A| is the cardinal number of A, i.e. the
number of simple events contained in the set A, and |Ω| is the cardinal number of
Ω, i.e. the number of simple events contained in the set Ω.
The cardinal numbers |A| and |Ω| are often calculated using the combinatoric
rules introduced in Chap. 5.
Example 6.2.4
(Example 6.1.2 continued) The sample space contains 36 simple events. All of
these simple events have equal probability 1 / 36. To calculate the probability of
the event A that the sum of the dots on the two dice is at least 4 and at most 6, we
count the favourable simple events which fulfil this condition. The simple events
are (1, 3), (2, 2), (3, 1) (sum is 4), (1, 4), (2, 3), (4, 1), (3, 2) (sum is 5) and (1, 5),
(2, 4), (3, 3), (4, 2), (5, 1) (sum is 6). In total, there are 3 + 4 + 5 = 12 favourable
simple events, i.e. |A| = 12.
The probability of the event A is therefore P(A) = 12/36 = 1/3.
Axiom 1
Every random event A has a probability in the (closed) interval [0, 1], i.e.
0 ≤ P(A) ≤ 1.
Axiom 2
The sure event Ω has probability 1, i.e.
P(Ω) = 1.
Axiom 3
If A_1 and A_2 are disjoint events, then
P(A_1 ∪ A_2) = P(A_1) + P(A_2)
holds.
Remark
Axiom 3 also holds for three or more disjoint events and is called the theorem of
additivity of disjoint events . For example, if A_1, A_2, and A_3 are disjoint events,
then P(A_1 ∪ A_2 ∪ A_3) = P(A_1) + P(A_2) + P(A_3).
Example 6.3.1
Suppose the two events in tossing a coin are A_1: “appearance of head” and A_2:
“appearance of tail”, which are disjoint. The event A_1 ∪ A_2: “appearance of head
or tail” has the probability
P(A_1 ∪ A_2) = P(A_1) + P(A_2) = 0.5 + 0.5 = 1.
Example 6.3.2
Suppose an event is defined as the number of points observed on the upper
surface of a die when rolling it. There are six events, i.e. the natural numbers 1,
2, 3, 4, 5, 6. These events are disjoint and they have equal probability of
occurring: P(A_1) = P(A_2) = ⋯ = P(A_6) = 1/6. The probability of getting an even
number is then
P(A_2 ∪ A_4 ∪ A_6) = P(A_2) + P(A_4) + P(A_6) = 1/6 + 1/6 + 1/6 = 1/2.
Corollary 1
The probability of the complementary event of A (i.e. Ā) is
P(Ā) = 1 − P(A).   (6.3)
Example 6.3.3
Suppose a box of 30 chocolates contains chocolates of 6 different flavours with 5
chocolates of each flavour. Suppose an event A is defined as
A: “a marzipan chocolate is chosen”. The probability of finding a marzipan chocolate (without
looking into the box) is P(A) = 5/30 = 1/6. Then, the probability of the
complementary event Ā, i.e. the probability of not finding a marzipan chocolate,
is therefore
P(Ā) = 1 − 1/6 = 5/6.
Corollary 2
The probability of occurrence of an impossible event is zero: P(∅) = 0.
Corollary 3
Let A_1 and A_2 be not necessarily disjoint events. The probability of occurrence
of A_1 or A_2 is
P(A_1 ∪ A_2) = P(A_1) + P(A_2) − P(A_1 ∩ A_2).   (6.4)
Example 6.3.4
There are 10 actors acting in a play. Two actors, one of whom is male, are
portraying evil characters. In total, there are 6 female actors. Let an event A
describe whether the actor is male and another event B describe whether the
character is evil. Suppose we want to know the probability of a randomly chosen
actor being male or evil. We can then calculate
P(A ∪ B) = P(A) + P(B) − P(A ∩ B) = 4/10 + 2/10 − 1/10 = 0.5.
Corollary 4
If A ⊆ B, then P(A) ≤ P(B).
Proof
We use the representation B = A ∪ (Ā ∩ B), where A and Ā ∩ B are disjoint
events. Then, using Axiom 3 and Axiom 1, we get P(B) = P(A) + P(Ā ∩ B) ≥ P(A).
Assume that we have prior information that A has already occurred. Now we
want to find out how the probability of B is to be calculated. Since A has already
occurred, we know that the sample space is reduced by the number of simple
events which are contained in A. There are |A| such simple events. Thus, the total
sample space Ω is reduced to the sample space of A. Therefore, the simple
events in A ∩ B are those simple events of B which can still be realized once A has occurred.
The Laplace probability for B under the prior information on A, or under the
condition that A is known, is therefore
P(B|A) = \frac{|A \cap B|}{|A|}.   (6.5)
This can be generalized to the case when the probabilities for simple events
are unequal.
Definition 6.4.1
Let P(A) > 0. Then the conditional probability of event B occurring, given that
event A has already occurred, is
P(B|A) = \frac{P(A \cap B)}{P(A)}.   (6.6)
The roles of A and B can be interchanged to define P(A|B) as follows. Let
P(B) > 0. The conditional probability of A given B is
P(A|B) = \frac{P(A \cap B)}{P(B)}.   (6.7)
Theorem 6.4.1
(Multiplication Theorem of Probability) For two arbitrary events A and B, the
following holds:
P(A ∩ B) = P(A|B) P(B) = P(B|A) P(A).   (6.8)
This theorem follows directly from the two definitions (6.6) and (6.7) (but does
not require that P(A) > 0 and P(B) > 0).
Theorem 6.4.2
(Law of Total Probability) Assume that A_1, A_2, …, A_m are events such that
A_1 ∪ A_2 ∪ ⋯ ∪ A_m = Ω and A_i ∩ A_j = ∅ for all i ≠ j, i.e. A_1, A_2, …, A_m form a
complete decomposition of Ω in pairwise disjoint events; then the
probability of an event B can be calculated as
P(B) = \sum_{i=1}^{m} P(B|A_i)\, P(A_i).   (6.9)
Bayes’ theorem follows from (6.6) to (6.9):
P(A|B) = \frac{P(B|A)\, P(A)}{P(B)}   (6.10)
and, if A_1, A_2, …, A_m form a complete decomposition of Ω as in Theorem 6.4.2,
P(A_j|B) = \frac{P(B|A_j)\, P(A_j)}{\sum_{i=1}^{m} P(B|A_i)\, P(A_i)}, \quad j = 1, 2, …, m.   (6.11)
The probabilities P(A_i) are called prior probabilities, P(B|A_i) are sometimes
called model probabilities and P(A_i|B) are called posterior probabilities .
Example 6.4.1
Suppose someone rents movies from two different DVD stores. Sometimes it
happens that the DVD does not work because of scratches. We consider the
following events: : “the DVD is rented from store i”. Further let B
denote the event that the DVD is working without any problems. Assume we
know that and (note that ) and ,
and we are interested in the probability that a rented DVD works
fine. We can then apply the Law of Total Probability and get
We may also be interested in the probability that the movie was rented from
store 1 and is working which is
Now suppose we have a properly working DVD. What is the probability that it is
rented from store 1? This is obtained as follows:
Now assume we have a DVD which does not work, i.e. occurs. The
probability that a DVD is not working given that it is from store 1 is
. Similarly, for store 2. We can now calculate the
conditional probability that a DVD is from store 1 given that it is not working:
The result about P(B̄) used in the denominator can also be directly obtained by
using P(B̄) = 1 − P(B).
6.5 Independence
Intuitively, two events are independent if the occurrence or non-occurrence of
one event does not affect the occurrence or non-occurrence of the other event. In
other words, two events A and B are independent if the probability of occurrence
of B has no effect on the probability of occurrence of A. In such a situation, one
expects that
P(A|B) = P(A) and P(B|A) = P(B).   (6.12)
This yields:
P(A ∩ B) = P(A|B) P(B) = P(A) P(B).   (6.13)
This definition of independence can be extended to the case of more than two
events as follows:
Definition 6.5.2
The n events A_1, A_2, …, A_n are stochastically mutually independent, if for any
subset of m events A_{i_1}, A_{i_2}, …, A_{i_m} (m ≤ n)
P(A_{i_1} ∩ A_{i_2} ∩ ⋯ ∩ A_{i_m}) = P(A_{i_1}) P(A_{i_2}) ⋯ P(A_{i_m})   (6.15)
holds.
Example 6.5.1
Consider an urn with four balls. The following combinations of zeroes and ones
are printed on the balls: 110, 101, 011, 000. One ball is drawn from the urn.
Define the following events:
A_1: “the first digit on the ball is 1”, A_2: “the second digit on the ball is 1”, and
A_3: “the third digit on the ball is 1”.
Since there are two favourable simple events for each of the events A_1, A_2, and
A_3, we get
P(A_1) = P(A_2) = P(A_3) = 2/4 = 1/2.
The probability that all the three events simultaneously occur is zero because
there is no ball with 111 printed on it. Therefore, A_1, A_2, and A_3 are not
stochastically independent because
P(A_1) P(A_2) P(A_3) = 1/8 ≠ 0 = P(A_1 ∩ A_2 ∩ A_3).
However, the events are pairwise independent; for example,
P(A_1 ∩ A_2) = 1/4 = P(A_1) P(A_2).
6.7 Exercises
Exercise 6.1
(a) Suppose , , , .
Determine , , , , .
Exercise 6.2
A driving licence examination consists of two parts which are based on a
theoretical and a practical examination. Suppose 25 % of people fail the
practical examination, 15 % of people fail the theoretical examination, and 10 %
of people fail both the examinations. If a person is randomly chosen, then what
is the probability that this person
(b) only fails the practical examination, but not the theoretical examination?
Exercise 6.3
A new board game uses a twelve-sided die. Suppose the die is rolled once, what
is the probability of getting
Exercise 6.4
The Smiths are a family of six. They are celebrating Christmas and there are 12
gifts, two for each family member. The name tags for each family member have
been attached to the gifts. Unfortunately the name tags on the gifts are damaged
by water. Suppose each family member draws two gifts at random. What is the
probability that someone
(a) gets his/her two gifts, rather than getting the gifts for another family
member?
(b) gets none of his/her gifts, but rather gets the gifts for other family
members?
Exercise 6.5
A chef from a popular TV cookery show sometimes puts too much salt in his
pumpkin soup and the probability of this happening is 0.2. If he is in love (which
he is with probability 0.3), then the probability of using too much salt is 0.6.
(a) Create a contingency table for the probabilities of the two variables “in
love” and “too much salt”.
(b) Determine whether the two variables are stochastically independent or not.
Exercise 6.6
Dr. Obermeier asks his neighbour to take care of his basil plant while he is away
on leave. He assumes that his neighbour does not take care of the basil with a
probability of . The basil dies with probability when someone takes care of it
and with probability if no one takes care of it.
(a) Calculate the probability of the basil plant surviving after its owner’s leave.
(b) It turns out that the basil eventually dies. What is the probability that Dr.
Obermeier’s neighbour did not take care of the plant?
Exercise 6.7
A bank considers changing its credit card policy. Currently 5 % of credit card
owners are not able to pay their bills in any month, i.e. they never pay their bills.
Among those who are generally able to pay their bills, there is still a 20 %
probability that the bill is paid too late in a particular month.
(a) What is the probability that someone is not paying his bill in a particular
month?
(b) A credit card owner did not pay his bill in a particular month. What is the
probability that he never pays back the money?
(c) Should the bank consider blocking the credit card if a customer does not
pay his bill on time?
Exercise 6.8
There are epidemics which affect animals such as cows, pigs, and others.
Suppose 200 cows are tested to see whether they are infected with a virus or not.
Let event A describe whether a cow has been transported by a truck recently or
not and let B denote the event that a cow has been tested positive with a virus.
The data are summarized in the following table:
        B    not B   Total
A       40     60     100
not A   20     80     100
Total   60    140     200
(a) What is the probability that a cow is infected and has been transported by a
truck recently?
(b) What is the probability of having an infected cow given that it has been
transported by the truck?
Exercise 6.9
A football practice target is a portable wall with two holes (which are the target)
in it for training shots. Suppose there are two players A and B. The probabilities
of hitting the target by A and B are 0.4 and 0.5, respectively.
(a) What is the probability that at least one of the players succeeds with his
shot?
(b) What is the probability that exactly one of the players hits the target?
Source Toutenburg, H., Heumann, C., Induktive Statistik, 4th edition, 2007,
Springer, Heidelberg
7. Random Variables
In the first part of the book we highlighted how to describe data. Now, we
discuss the concepts required to draw statistical conclusions from a sample of
data about a population of interest. For example, suppose we know the starting
salary of a sample of 100 students graduating in law. We can use this knowledge
to draw conclusions about the expected salary for the population of all students
graduating in law. Similarly, if a newly developed drug is given to a sample of
selected tuberculosis patients, then some patients may show improvement and
some patients may not, but we are interested in the consequences for the entire
population of patients. In the remainder of this chapter, we describe the
theoretical concepts required for making such conclusions. They form the basis
for statistical tests and inference which are introduced in Chaps. 9–11.
Definition 7.1.1
Let represent the sample space of a random experiment, and let be the set
of real numbers. A random variable is a function X which assigns to each
element one and only one number , i.e.
(7.1)
Example 7.1.1
The features of a die roll experiment, a roulette game, or the lifetime of a TV can
all be described by a random variable, see Table 7.1. The events involve
randomness, and if we have knowledge about the random process, we can assign
probabilities to each event, e.g. when rolling a die, the probability of
getting a “1” is and the probability of getting a “2” is
.
Table 7.1 (excerpt): for roulette, the possible outcomes are red, black, and green (zero).
As in Chap. 2, we can see that the CDF is useful in obtaining the probabilities
related to the occurrence of random events. Note that the empirical cumulative
distribution function (ECDF, Sect. 2.2) and the cumulative distribution function
are closely related and therefore have a similar definition and similar calculation
rules. However, in Chap. 2, we work with the cumulative distribution of
observed values in a particular sample whereas in this chapter, we deal with
random variables modelling the distribution of a general population.
The Definition 7.2.1 implies the following properties of the cumulative
distribution function:
F(x) is a monotonically non-decreasing function
(if , it follows that ),
(the lower limit of F is 0),
Definition 7.2.2
A random variable X is said to be continuous if there is a function f(x) such that
for all
(7.3)
holds. F(x) is the cumulative distribution function (CDF) of X, and f(x) is the
probability density function (PDF) of X. Furthermore, F′(x) = f(x) holds for all x that are
continuity points of f.
Theorem 7.2.1
For a function f(x) to be a probability density function (PDF) of X, it needs to
satisfy the following conditions:
(1) f(x) ≥ 0 for all x,
(2) the integral of f(x) over the whole real line equals 1.
Theorem 7.2.2
Let X be a random variable with CDF F(x). If x1 < x2, where x1 and x2 are
known constants, P(x1 ≤ X ≤ x2) = F(x2) − F(x1).
Theorem 7.2.3
The probability of a continuous random variable taking a particular value x is
zero:
P(X = x) = 0.    (7.4)
The proof is provided in Appendix C.2.
Example 7.2.1
Consider the continuous random variable “waiting time for the train”. Suppose
that a train arrives every 20 min. Therefore, the waiting time of a particular
person is random and can be any time contained in the interval [0, 20]. We can
start describing the required probability density function as
is the probability density function describing the waiting time for the train. We
can now use Definition 7.2.2 to determine the cumulative distribution function:
We can obtain this probability from the graph of the CDF as well, see Fig. 7.1
where both the PDF and CDF of this example are illustrated.
Fig. 7.1 Probability density function (PDF) and cumulative distribution function (CDF) for waiting time in
Example 7.2.1
Defining a function, for example the CDF, is simple in R: One can use the
function command followed by specifying the variables the function
evaluates in round brackets (e.g. x) and the function itself in braces (e.g. x / 20).
Functions can be plotted using the curve command:
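A minimal sketch of such a definition (using the waiting-time CDF from Example 7.2.1; this is an illustration, not necessarily the book's own code) is:
cdf <- function(x){ x/20 }        # CDF of the waiting time, F(x) = x/20 on [0, 20]
curve(cdf, from = 0, to = 20)     # plot the CDF between 0 and 20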
Alternatively, the plot command can be used to plot vectors against each
other; for example, after defining a function, we can define a sequence (
), evaluate this sequence via the specified function
(cdf(x)), and plot them against each other and connect the points from the
sequence with a line (plot(x,cdf(x),type=’l’)).
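A possible sketch of this alternative approach:
cdf <- function(x){ x/20 }        # CDF of the waiting time
x <- seq(0, 20, 0.01)             # sequence of values between 0 and 20
plot(x, cdf(x), type = "l")       # plot the CDF values against x, joined by a line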
This example illustrates how the cumulative distribution function can be
used to obtain probabilities of interest. Most importantly, if we want to calculate
the probability that the random variable X takes values in the interval , we
simply have to look at the difference of the respective CDF values at and .
Figure 7.2a highlights that the interval probability corresponds to the difference
of the CDF values on the y-axis.
We can also use the probability density function to visualize .
We know from Theorem 7.2.1 that , and therefore, the area under
the PDF equals 1. Thus, we can interpret interval probabilities as the area under
the PDF between and . This is presented in Fig. 7.2b.
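As a small illustration in R (with hypothetical interval limits a = 5 and b = 15, chosen here only for demonstration), the interval probability is simply the difference of two CDF values:
cdf <- function(x){ x/20 }        # CDF of the waiting time (Example 7.2.1)
a <- 5; b <- 15                   # hypothetical interval limits
cdf(b) - cdf(a)                   # P(a < X <= b) = F(b) - F(a) = 0.5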
Fig. 7.2 Graphical representation of the probability a via the CDF and b via the PDF
Example 7.2.2
Consider the example of tossing a coin where each trial results in either a
head (H) or a tail (T), each occurring with the same probability 0.5. When the
coin is tossed multiple times, we may observe sequences such as
H, T, H, H, T, H, H, T, and . The sample space is . Let the random
variable X denote the number of trials required to get the third head, then
for the given sequence above. Clearly, the space of X is the set . We
can see that X is a discrete random variable because its space is countable. We
can also assign certain probabilities to each of these values, e.g.
and .
Definition 7.2.4
Let X be a discrete random variable which takes k different values. The
probability mass function (PMF) of X is given by
(7.5)
It is required that the probabilities satisfy the following conditions:
(1) ,
(2) .
Definition 7.2.5
Given (7.5), we can write the CDF of a discrete random variable as
(7.6)
where I is an indicator function defined as
Remark 7.2.1
The Eqs. (7.7)–(7.14) can also be used for continuous variables, but in this case,
P(X = a) = P(X = b) = 0 (see Theorem 7.2.3), and therefore, Eqs. (7.7)–(7.14) can
be modified accordingly.
Example 7.2.3
Consider the experiment of rolling a die. There are six possible outcomes. If we
define the random variable X as the number of dots observed on the upper
surface of the die, then the six possible outcomes can be described as
. The respective probabilities are
. The PMF and CDF are therefore defined as follows:
Both the CDF and the PDF are displayed in Fig. 7.3.
Fig. 7.3 Probability density function and cumulative distribution function for rolling a die in Example
7.2.3. “•” relates to an included value and “◦” to an excluded value
7.3.1 Expectation
Definition 7.3.1
The expectation of a continuous random variable X, having the probability
density function f(x) with , is defined as
(7.15)
For a discrete random variable X, which takes the values with
respective probabilities , the expectation of X is defined as
(7.16)
Example 7.3.1
Consider again Example 7.2.1 where the waiting time for a train was described
by the following probability density function:
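The density of the waiting time is f(x) = 1/20 on [0, 20] (and 0 otherwise), so the expectation E(X) = ∫ x f(x) dx can also be evaluated numerically in R; a brief sketch of such a check:
integrate(function(x) x * (1/20), lower = 0, upper = 20)$value   # returns 10 (minutes)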
Example 7.3.2
Consider again the die roll experiment from Example 7.2.3. The probabilities for
the occurrence of any , are . The expectation can
thus be calculated as
7.3.2 Variance
The variance describes the variability of a random variable. It gives an idea
about the concentration or dispersion of values around the arithmetic mean of the
distribution.
Definition 7.3.2
The variance of a random variable X is defined as
(7.17)
The variance of a continuous random variable X is
(7.18)
where . Similarly, the variance of a discrete random variable X
is
(7.19)
where μ = E(X). The variance is usually denoted by σ².
Definition 7.3.3
The positive square root of the variance is called the standard deviation.
Example 7.3.3
Recall Examples 7.2.1 and 7.3.1. We can calculate the variance of the waiting
time for a train using the probability density function
Recall that in Chap. 3, we introduced the sample variance and the sample
standard deviation. We already know that the standard deviation has the same
unit of measurement as the variable, whereas the unit of the variance is the
square of the measurement unit. The standard deviation measures how the values
of a random variable are dispersed around the population mean. A low value of
the standard deviation indicates that the values are highly concentrated around
the mean. A high value of the standard deviation indicates lower concentration
of the data values around the mean, and the observed values may be far away
from the mean. These considerations are helpful in making connections between
random variables and samples of data, see Chap. 9 for the construction of
confidence intervals.
Example 7.3.4
Recall Example 7.3.2 where we calculated the expectation of a die roll
experiment as . With and
, the variance for this example corresponds to
Theorem 7.3.1
The variance of a random variable X can be expressed as
(7.20)
The proof is given in Appendix C.2.
Example 7.3.5
In Examples 7.2.1, 7.3.1, and 7.3.3, we evaluated the waiting time for a train
using the PDF
We calculated the expectation and variance in Eqs. (7.15) and (7.17) as 10 min
and 100/3 ≈ 33.33 min², respectively. Theorem 7.3.1 tells us that we can calculate the
variance in a different way as follows:
This yields the same result as Eq. (7.18) but is much quicker.
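A numerical sketch of this shortcut in R, again using the waiting-time density f(x) = 1/20 on [0, 20]:
EX  <- integrate(function(x) x   * (1/20), 0, 20)$value   # E(X)   = 10
EX2 <- integrate(function(x) x^2 * (1/20), 0, 20)$value   # E(X^2) = 400/3
EX2 - EX^2                                                # Var(X) = 100/3, approx 33.33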
Definition 7.3.4
The value for which the cumulative distribution function is
(7.21)
is called the p-quantile .
It follows from Definition 7.3.4 that is the value which divides the cumulative
distribution function into two parts: the probability of observing a value left of
is p, whereas the probability of observing a value right of is . For
example, the 0.25-quantile describes the x-value for which the probability
of observing or any smaller value is 0.25. Figure 7.4 shows the 0.25-
quantile (first quartile), the 0.5-quantile (median), and the 0.75-quantile (third
quartile) in a cumulative distribution function.
Example 7.3.6
Recall Examples 7.2.1, 7.3.1, 7.3.5 and Fig. 7.1b where we described the waiting
time for a train by using the following CDF:
For continuous variables, there is a unique value which describes the p-quantile.
However, for discrete variables, this may not necessarily be true. In this case, the
p-quantile is chosen such that
holds.
Example 7.3.7
The cumulative distribution function for rolling a die is described in Example
7.2.3 and Fig. 7.3b. The first quartile is 2 because and
for .
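Both quantiles can be reproduced in R; a short sketch (using the fact that the waiting time is uniformly distributed on [0, 20]):
qunif(0.25, min = 0, max = 20)              # first quartile of the waiting time: 5
cdf_die <- cumsum(rep(1/6, 6))              # CDF of the die at the values 1, ..., 6
min(which(cdf_die >= 0.25))                 # first quartile of the die roll: 2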
7.3.4 Standardization
Standardization transforms a random variable in such a way that it has an
expectation of zero and a variance of one. More details on the need for
standardization are discussed in Chap. 10.
Definition 7.3.5
A random variable Y is called standardized when
Theorem 7.3.2
Suppose a random variable X has mean μ = E(X) and variance σ² = Var(X) > 0. Then, it can be
standardized as follows:
(7.22)
Example 7.3.8
In Examples 7.2.1, 7.3.1, and 7.3.5, we considered the waiting time X for a train.
The random variable X can take values between 0 and 20 min, and we calculated
E(X) = 10 and Var(X) = 100/3. The standardized variable of X is Y = (X − 10)/√(100/3).
One can show that and , see also Exercise 7.10 for more
details.
Theorem 7.4.1
(Tschebyschev’s inequality) Let X be a random variable with E(X) = μ and
Var(X) = σ². It holds that
P(|X − μ| ≥ c) ≤ Var(X)/c².    (7.23)
This is equivalent to
P(|X − μ| < c) ≥ 1 − Var(X)/c².    (7.24)
The proof is given in Appendix C.2.
Example 7.4.1
In Examples 7.2.1, 7.3.1, and 7.3.5, we have worked with a random variable
which describes the waiting time for a train. We determined E(X) = 10 and
Var(X) = 100/3. We can calculate the probability of waiting between 3 and 17
min:
We can clearly see that Tschebyschev’s inequality gives us a correct answer:
the probability is indeed greater than 0.32. Nevertheless, the approximation to the exact
probability, 0.7, is rather poor. One needs to keep in mind that only the lack of
distributional knowledge makes the inequality useful.
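Both numbers can be checked quickly in R (a sketch):
punif(17, min = 0, max = 20) - punif(3, min = 0, max = 20)   # exact probability: 0.7
1 - (100/3) / 7^2                                            # Tschebyschev bound: approx 0.32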
[Table: joint distribution of two discrete variables X and Y, with rows for the values of X, columns for the values 1, 2, ..., J of Y, row and column sums giving the marginal distributions, and grand total 1.]
Example 7.5.1
Suppose we have a contingency table on smoking behaviour X (with categories
1 = non-smoker, 2 = smoking sometimes, 3 = smoking regularly) and education
level Y (with categories 1, 2, and 3, where 3 = tertiary education):
              Y
            1      2      3    Total
X     1   0.10   0.20   0.30   0.60
      2   0.10   0.10   0.10   0.30
      3   0.08   0.01   0.01   0.10
  Total   0.28   0.31   0.41   1
The cell entries represent the joint distribution of smoking behaviour and
education level. We can interpret each entry as the probability of observing
X = x_i and Y = y_j simultaneously. For example, P(X = 2, Y = 3) (“smoking sometimes
and tertiary education”) = 0.10. The marginal distribution of X is contained in the
last column of the table and lists the probabilities of smoking (unconditional on
education level), e.g. the probability of being a non-smoker in this population is
60 %. We can also interpret the conditional distributions: P(X | Y = 3) represents
the distribution of smoking behaviour among those who have tertiary education.
If we are interested in the probability of smoking sometimes given that tertiary
education is completed, then we calculate P(X = 2 | Y = 3) = 0.10/0.41 ≈ 0.24.
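These marginal and conditional probabilities can also be computed from the joint probability matrix in R; a small sketch:
joint <- matrix(c(0.10, 0.20, 0.30,
                  0.10, 0.10, 0.10,
                  0.08, 0.01, 0.01), nrow = 3, byrow = TRUE)
rowSums(joint)                    # marginal distribution of X: 0.60, 0.30, 0.10
colSums(joint)                    # marginal distribution of Y: 0.28, 0.31, 0.41
joint[2, 3] / colSums(joint)[3]   # P(X = 2 | Y = 3) = 0.10/0.41, approx 0.24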
Definition 7.5.1
A bivariate random variable (X, Y) is continuous if there is a function
such that
(7.25)
holds.
and
The last condition is sometimes referred to as the rectangle inequality. As in
the univariate case, we can use the cumulative distribution function to calculate
interval probabilities; similarly, we look at the rectangular area defined by
, and in the bivariate case (instead of an interval
[a, b]), see Fig. 7.5.
The conditional distributions can be obtained by the ratio of the joint and
marginal distributions:
Example 7.5.2
Consider the function
Suppose X and Y represent the concentrations of two drugs in the human body.
Then, may represent the sum of two drug concentrations in the human
body. Since there are infinite possible realizations of both X and Y, we represent
their joint distribution in a figure rather than a table, see Fig. 7.6a.
Fig. 7.6 Joint and marginal distribution for Example 7.5.2
Figure 7.6b depicts the marginal distribution for X. The slope of the marginal
distribution is essentially the slope of the surface of the joint distribution shown
in Fig. 7.6a. It is easy to see in this simple example that the marginal distribution
of X is nothing but a cut in the surface of the joint distribution. Note that the
conditional distributions and can be easily calculated; for
example, .
Stochastic Independence.
Definition 7.5.2
Two continuous random variables X and Y are said to be stochastically
independent if
(7.26)
For discrete variables, this is equivalent to
(7.27)
being valid for all (i, j).
Example 7.5.3
In Example 7.5.2, we considered the function
Example 7.6.1
Consider again Example 7.2.3 where we illustrated how the outcome of a die roll
experiment can be captured by a random variable. There were 6 events, and X
could take the values . The probability of the occurrence
of any number was , and the expectation was calculated as 3.5.
Consider two different situations:
(i) Suppose the die takes the value 10, 20, 30, 40, 50, and 60 instead of the
values 1, 2, 3, 4, 5, and 6. The random variable describes this
suitably, and its expectation is
(ii) If we are rolling two dice X1 and X2, then the expectation for the sum of
the two outcomes is
due to (7.31).
Calculation Rules for the Variance. Let a and b be any known constants and X
be a random variable (discrete or continuous). Then, we have the following
rules:
(7.32)
(7.33)
(7.34)
The proof of rule (7.34) is given in Appendix C.2.
Example 7.6.2
In Examples 7.2.1, 7.3.1, 7.3.3, and 7.3.5, we evaluated a random variable
describing the waiting time for a train. Now, suppose that a person first has to
catch a bus to get to the train station. If this bus arrives only every 60 min, then
the PDF of the random variable Y denoting the waiting time for the bus is
We can use Eqs. (7.15) and (7.17) to determine both the expectation and
variance of Y. However, the waiting time for the bus is governed by the relation
Y = 3X, where X is the waiting time for the train. Therefore, we can calculate
E(Y) = 3 · E(X) = 30 min by using rule (7.29) and the variance as
Var(Y) = 3² · Var(X) = 9 · 100/3 = 300 using rule (7.33). The total waiting time
is the sum of the two waiting times.
(7.35)
If we apply (7.34) and recall that the variables are independent of each other, we
can also calculate the variance as
(7.36)
Example 7.6.3
If we toss a coin, we obtain either head or tail, and therefore,
. If we toss the coin n times, we have for each toss
It is straightforward to calculate the expectation and variance for each coin toss:
and
With this example, the interpretation of formulae (7.35) and (7.36) becomes
clearer: if the probability of head is 0.5 for a single toss, then it is also 0.5 for the
mean of all tosses. If we toss a coin many times, then the variance decreases
when n increases. This means that a larger sample size yields a higher precision
for the calculated arithmetic mean. This observation shows the basic conclusion
of the next chapter: the larger the sample size, the more certain we can be about our
conclusions.
7.7.1 Covariance
Definition 7.7.1
The covariance between X and Y is defined as
(7.37)
(i) ,
(ii) ,
(iii) ,
Theorem 7.7.1
(Additivity Theorem) The variance of the sum (subtraction) of X and Y is given
by
Example 7.7.1
Recall Example 7.6.2 where we considered the waiting time Y for a bus to the
train station and the waiting time X for a train. Suppose their
joint bivariate probability density function can be written as
This makes sense as the waiting times for the train and the bus should be
independent of each other. Using rule (7.31), we conclude that the total expected
waiting time is
Theorem 7.7.2
If X and Y are independent, they are also uncorrelated. However, if they are
uncorrelated then they are not necessarily independent.
Example 7.7.2
In Example 7.6.2, we estimated the covariance between the waiting time for the
bus and the waiting time for the train: . The correlation coefficient is
therefore also 0 indicating no linear relationship between the waiting times for
bus and train.
Exercise 7.2
Joey manipulates a die to increase his chances of winning a board game against
his friends. In each round, a die is rolled and larger numbers are generally an
advantage. Consider the random variable X denoting the outcome of the rolled
die and the respective probabilities , and
.
(b) Imagine that the board game contains an action which makes the players
use 1 / X rather than X. What is the expectation of ? Is
?
Exercise 7.3
An innovative winemaker experiments with new grapes and adds a new wine to
his stock. The percentage sold by the end of the season depends on the weather
and various other factors. It can be modelled using the random variable X with
the CDF as
(c) What is the probability of selling at least one-third of his wine, but not
more than two thirds?
(d) Define the CDF in R and calculate the probability of (c) again.
Exercise 7.4
A quality index summarizes different features of a product by means of a score.
Different experts may assign different quality scores depending on their
experience with the product. Let X be the quality index for a tablet. Suppose the
respective probability density function is given as follows:
(d) Use Tschebyschev’s inequality to determine the probability that X does not
deviate more than 0.5 from its expectation.
Exercise 7.5
Consider the joint PDF for the type of customer service X (0 = telephonic
hotline, 1 = Email) and the satisfaction score Y (1 = unsatisfied, 2 = satisfied, 3 =
very satisfied):
          Y
X        1      2      3
0        0     1/2    1/4
1       1/6   1/12     0
(a) Determine and interpret the marginal distributions of both X and Y.
(c) Determine and interpret the conditional distribution of satisfaction level for
.
Exercise 7.6
Consider a continuous random variable X with expectation 15 and variance 4.
Determine the smallest interval which contains at least 90 % of the
values of X.
Exercise 7.7
Let X and Y be two random variables for which only 6 possible events—
—are defined:
Exercise 7.8
Recall the urn model we introduced in Chap. 5. Consider an urn with eight balls:
four of them are white, three are black, and one is red. Now, two balls are drawn
from the urn. The random variables X and Y are defined as follows:
(a) When are X and Y independent—when the two balls are drawn with
replacement or without replacement?
(b) Assume the balls are drawn such that X and Y are dependent. Use the
conditional distribution P(Y|X) to determine the joint PDF of X and Y.
Exercise 7.9
If X is the amount of money spent on food and other expenses during a day (in €)
and Y is the daily allowance of a businesswoman, the joint density of these two
variables is given by
(c) Calculate the probability that more than €75 are spent.
Exercise 7.10
Consider n i.i.d. random variables with and and the
standardized variable . Show that and .
Source Toutenburg, H., Heumann, C., Induktive Statistik, 4th edition, 2007,
Springer, Heidelberg
8. Probability Distributions
Definition 8.0.1
The random variables X₁, X₂, ..., Xₙ are called independent and identically
distributed (i.i.d.) if they all have the same marginal cumulative
distribution function F(x) and if they are mutually independent.
Example 8.0.1
Suppose a researcher plans a survey on the weight of newborn babies in a
country. The researcher randomly contacts 10 hospitals with a maternity ward
and asks them to randomly select 20 of the newborn babies (no twins) born in
the last 6 months and records their weights. The sample therefore consists of
n = 200 baby weights. Since the hospitals and the babies are randomly
selected, the babies’ weights are therefore not known beforehand. The 200
weights can be denoted by the random variables . Note that the
weights are random variables because, depending on the size of the
population, different samples consisting of 200 babies can be randomly selected.
Also, the babies’ weights can be seen as stochastically independent (an example
of stochastically dependent weights would be the weights of twins if they are
included in the sample). After collecting the weights of 200 babies, the
researcher has a sample of 200 realized values (i.e. the weights in grams). The
values are now known and denoted by .
Definition 8.1.1
A discrete random variable X with k possible outcomes is said to
follow a discrete uniform distribution if the probability mass function (PMF) of
X is given by
(8.1)
If the outcomes are the natural numbers 1, 2, ..., k, the mean and
variance of X are obtained as
E(X) = (k + 1)/2,    (8.2)
Var(X) = (k² − 1)/12.    (8.3)
Example 8.1.1
If we roll a fair die, the outcomes “1”, “2”, ..., “6” have equal probability of
occurring, and hence, the random variable X “number of dots observed on the
upper surface of the die” has a uniform discrete distribution with PMF
A bar chart of the frequency distribution of the 1000 sampled numbers with
the possible outcomes (2, 5, 8, 10) using the discrete uniform distribution is
given in Fig. 8.1. We see that the 1000 generated random numbers are not
exactly uniformly distributed, e.g. the numbers 5 and 10 occur more often than
the numbers 2 and 8. In fact, they are only approximately uniform. We expect
that the deviation from a perfect uniform distribution gets smaller as we
generate more and more random numbers but will probably never be zero for a
finite number of draws. The random numbers reflect the practical situation that a
sample distribution is only an approximation to the theoretical distribution from
which the sample was drawn. More details on how to work with random
variables in R are given in Appendix A.3.
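The 1000 random numbers referred to above can be generated with the sample command; a sketch (not necessarily the exact code used for Fig. 8.1):
x <- sample(c(2, 5, 8, 10), size = 1000, replace = TRUE)   # 1000 draws with equal probabilities
barplot(table(x))                                          # bar chart of the observed frequencies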
Further, and .
The degenerate distribution indicates that there is only one possible fixed
outcome, and therefore, no randomness is involved. It follows that we need at
least two different possible outcomes to have randomness in the observations of
a random variable or random experiment. The Bernoulli distribution is such a
distribution where there are only two outcomes, e.g. success and failure or male
and female. These outcomes are usually denoted by the values “0” and “1”.
A Bernoulli distribution is useful when there are only two possible outcomes,
and our interest lies in any of the two outcomes, e.g. whether a customer buys a
certain product or not, or whether a hurricane hits an island or not. The outcome
of an event A is usually coded as 1 which occurs with probability p. If the event
of interest does not occur, i.e. the complementary event occurs, the outcome is
coded as 0 which occurs with probability . So p is the probability that the
event of interest A occurs.
Example 8.1.2
A company organizes a raffle at an end-of-year function. There are 300 lottery
tickets in total, and 50 of them are marked as winning tickets. The event A of
interest is “ticket wins” (coded as X = 1), and the probability p of having a
winning ticket is a priori (i.e. before any lottery ticket has been drawn) p = 50/300 = 1/6.
According to (8.4) and (8.5), the mean (expectation) and variance of X are E(X) = p = 1/6 and Var(X) = p(1 − p) = 5/36.
Example 8.1.3
Consider a coin tossing experiment where a coin is tossed ten times and the
event of interest is “head”. The random variable X “number of heads in 10
experiments” has the possible outcomes . A question of interest
may be: What is the probability that a head occurs in 7 out of 10 trials; or in 5
out of 10 trials? We assume that the order in which heads (and tails) appear is
not of interest, only the total number of heads is of interest.
white and balls are black. Since the balls are drawn with replacement,
every outcome of the n experiments is independent of all others. The probability
that , can therefore be calculated as
(8.6)
Please note that we can use the product of the individual probabilities because the draws are independent.
Definition 8.1.4
A discrete random variable X is said to follow a binomial distribution with
parameters n and p if its PMF is given by (8.6). We also write . The
mean and variance of a binomial random variable X are given by
(8.7)
(8.8)
Remark 8.1.1
A Bernoulli random variable is therefore B(1; p) distributed.
Example 8.1.4
Consider an unfair coin where the probability of observing a tail (T) is
. Let us denote tails by “1” and heads by “0”. Suppose the coin is tossed three
times. In total, there are the following possible outcomes:
Note that the first outcome, viz. (1, 1, 1), leads to x = 3; the next 3 outcomes,
viz. (1, 1, 0), (1, 0, 1), (0, 1, 1), lead to x = 2; the next 3
outcomes, viz. (1, 0, 0), (0, 1, 0), (0, 0, 1), lead to x = 1; and the last outcome, (0, 0, 0), leads to x = 0.
Functions for the binomial distribution, as well as many other distributions, are
implemented in R. For each of these distributions, we can easily determine the
density function (PMF, PDF) for given values and parameters, determine the
CDF, calculate quantiles and draw random numbers. Appendix A.3 gives more
details. Nevertheless, we illustrate the concept of dealing with distributions in R
in the following example.
Example 8.1.5
Suppose we toss an unfair coin 50 times with the probability of a tail being 0.6.
We thus deal with a B(50, 0.6) distribution which can be plotted using the
dbinom command. The prefix d stands for “density”.
Note that we can also calculate the CDF with R. We can use the
pbinom(x,n,p) command, where the prefix p stands for probability, to
calculate the CDF at any point. For example, suppose we are interested in
P(X ≥ 30), that is the probability of observing thirty or more tails; then
we write
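A sketch of how these two steps might look (the plotting options used for the book's figure are not reproduced here):
plot(0:50, dbinom(0:50, size = 50, prob = 0.6), type = "h")   # PMF of B(50, 0.6)
1 - pbinom(29, size = 50, prob = 0.6)                         # P(X >= 30)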
The binomial distribution has some nice properties. One of them is described in
the following theorem:
Theorem 8.1.1
Let X ~ B(n; p) and Y ~ B(m; p) and assume that X and Y are (stochastically)
independent. Then
X + Y ~ B(n + m; p).    (8.9)
This is intuitively clear since we can interpret this theorem as describing the
additive combination of two independent binomial experiments with n and m
trials, with equal probability p, respectively. Since every binomial experiment is
a series of independent Bernoulli experiments, this is equivalent to a series of
independent Bernoulli trials with constant success probability p which in
turn is equivalent to a binomial distribution with trials.
Definition 8.1.5
A discrete random variable X is said to follow a Poisson distribution with
parameter λ > 0 if its PMF is given by
P(X = x) = (λ^x / x!) e^(−λ), x = 0, 1, 2, ....    (8.10)
We also write X ~ Po(λ). The mean and variance of a Poisson random variable
are identical: E(X) = Var(X) = λ.
Example 8.1.6
Suppose a country experiences tropical storms on average per year. Then
the probability of suffering from only two tropical storms is obtained by using
the Poisson distribution as
If we are interested in the probability that not more than 2 storms are
experienced, then we can apply rules (7.7)–(7.13) from Chap. 7:
. We can calculate
and from (8.10) or using R. Similar to Example 8.1.5, we use
the prefix d to obtain the PMF and the prefix p to work with the CDF, i.e. we
can use and to determine and ,
respectively.
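Since the storm rate is not reproduced above, the following sketch assumes, purely for illustration, a rate of λ = 2 storms per year:
lambda <- 2          # assumed rate, for illustration only
dpois(2, lambda)     # P(X = 2)
ppois(2, lambda)     # P(X <= 2)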
with . Since several events can occur, the outcome of one (of the n)
experiments is conveniently described by binary indicator variables. Let
, denote the event “ is observed in experiment i”, i.e.
with “1” being present in only one position, i.e. in position j, if occurs in
experiment i. Now, define (for each ) . Then, is
counting how often event was observed in the n independent experiments (i.e.
how often was 1 in the n experiments).
Definition 8.1.6
The random vector is said to follow a multinomial
distribution if its PMF is given as
(8.11)
with the restrictions and . We also write .
The mean of is the (component-wise) vector
Remark 8.1.2
Due to the restriction that the individual counts sum to n, the counts are not stochastically
independent, which is reflected by the negative covariance. This is also
intuitively clear: if one count gets larger, another one has to become smaller to
satisfy the restriction.
Example 8.1.7
Consider a simple example of the urn model. The urn contains 50 balls of three
colours: 25 red balls, 15 white balls, and 10 black balls. The balls are drawn
from the urn with replacement. The balls are placed back into the urn after every
draw, which means the draws are independent. Therefore, the probability of
drawing a red ball in every draw is p1 = 25/50 = 0.5. Analogously, p2 = 15/50 = 0.3 (for white
balls) and p3 = 10/50 = 0.2 (for black balls). Consider n = 4 draws. The probability of the
random event of drawing “2 red balls, 1 white ball, and 1 black ball” is:
(8.12)
We would have obtained the same result in R using the dmultinom function:
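A sketch of this call (the probabilities 0.5, 0.3, 0.2 are those derived above):
dmultinom(x = c(2, 1, 1), prob = c(0.5, 0.3, 0.2))   # returns 0.18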
Remark 8.1.3
In contrast to most of the distributions, the CDF of the multinomial distribution,
i.e. the function calculating , is not contained in the
base R-distribution. Please note that for , the multinomial distribution
reduces to the binomial distribution.
Example 8.1.8
Let us consider an experiment where a coin is tossed until “head” is obtained for
the first time. The probability of getting a head is for each toss. Using
(8.13), we can determine the following probabilities:
M white balls
N − M black balls
N total balls
i.e. we do not place a ball back into the urn once it is drawn. The order in
which the balls are drawn is assumed to be of no interest; only the number of
drawn white balls is of relevance. We define the following random variable
To be more precise, among the n drawn balls, x are white and n − x are black.
There are (M choose x) possibilities to choose x white balls from the total of M white
balls and (N − M choose n − x) possibilities to choose the remaining n − x balls from the total of N − M black balls. In total, we draw n out of N balls.
Recall the probability definition of Laplace as the number of simple favourable
events divided by all possible events. The number of combinations for all
possible events is (N choose n); the number of favourable events is (M choose x)(N − M choose n − x). This yields
P(X = x) = (M choose x)(N − M choose n − x) / (N choose n)    (8.14)
for max(0, n − (N − M)) ≤ x ≤ min(n, M).
Definition 8.1.8
A random variable X is said to follow a hypergeometric distribution with
parameters n, M, N, i.e. , if its PMF is given by (8.14).
Example 8.1.9
The German national lottery draws 6 out of 49 balls from a rotating bowl. Each
ball is associated with a number between 1 and 49. A simple bet is to choose 6
numbers between 1 and 49. If 3 or more chosen numbers correspond to the
numbers drawn in the lottery, then one wins a certain amount of money. What is
the probability of choosing 4 correct numbers? We can utilize the
hypergeometric distribution with n = 6, M = 6, and N = 49 to calculate such
probabilities. The interpretation is that we “draw” (i.e. bet on) 4 out of the 6
winning balls and “draw” (i.e. bet on) another 2 out of the remaining 43 (= 49 − 6)
losing balls. In total, we draw 6 out of 49 balls. Calculating the number of the
favourable combinations and all possible combinations leads to the application
of the hypergeometric distribution as follows:
We would have obtained the same results using the dhyper command. Its
arguments are x, M, N, n, and thus, we specify
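In R's own parametrization, dhyper(x, m, n, k) expects the number of winning balls m, the number of losing balls n, and the number of draws k; a sketch of the call for 4 correct numbers:
dhyper(x = 4, m = 6, n = 43, k = 6)   # approx 0.000969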
The H(6, 43, 6) distribution is also visualized in Fig. 8.3. It is evident that the
cumulative probability of choosing 2 or fewer correct numbers is greater than
0.9 (or 90 %), but it is very unlikely to have 3 or more numbers right. This may
explain why the national lottery pays out money only for 3 or more correct
numbers.
Definition 8.2.1
A continuous random variable X is said to follow a (continuous) uniform
distribution in the interval [a, b], i.e. X ~ U(a, b), if its probability density
function (PDF) is given by
f(x) = 1/(b − a) for a ≤ x ≤ b (and 0 otherwise). The mean and variance of X are
E(X) = (a + b)/2 and Var(X) = (b − a)²/12,
respectively.
Example 8.2.1
Suppose a train arrives at a subway station regularly every 10 min. If a passenger
arrives at the station without knowing the timetable, then the waiting time to
catch the train is uniformly distributed with density
Definition 8.2.2
A random variable X is said to follow a normal distribution with parameters μ
and σ² if its PDF is given by
f(x) = (1/(σ √(2π))) exp(−(x − μ)² / (2σ²)), −∞ < x < ∞, σ² > 0.    (8.15)
We write X ~ N(μ, σ²). The mean and variance of X are E(X) = μ and Var(X) = σ².
The density of a normal distribution has its maximum (see Fig. 8.4) at x = μ. The
density is also symmetric around μ. The inflexion points of the density are at
μ − σ and μ + σ (Fig. 8.4). A lower σ indicates a higher concentration around
the mean μ. A higher σ indicates a flatter density (Fig. 8.5).
Fig. 8.4 PDF of a normal distribution
Fig. 8.5 PDF of N(0, 2), N(0, 1) and N(0, 0.5) distributions
(8.16)
which is often denoted as Φ(x). The value of Φ(x) for various values of x can
be obtained in R following the rules introduced in Appendix A.3. For example,
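The standard normal CDF Φ(x) is available through pnorm; a brief sketch:
pnorm(0)      # Phi(0) = 0.5
pnorm(1.96)   # Phi(1.96), approx 0.975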
Remark 8.2.1
There is no explicit formula to solve the integral in Eq. (8.16). It has to be solved
by numerical (or computational) methods. This is the reason why CDF tables are
presented in almost all statistical textbooks, see Table C.1 in Appendix C.
Example 8.2.2
An orange farmer sells his oranges in wooden boxes. The weights of the boxes
vary and are assumed to be normally distributed with kg and kg .
The farmer wants to avoid customers being unsatisfied because the boxes are too
low in weight. He therefore asks the following question: What is the probability
that a box with a weight of less than 13 kg is sold? Using the pnorm
command in R, we get
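Because the mean and variance of the box weights are not reproduced above, the following sketch uses assumed values μ = 15 kg and σ = 1.5 kg purely for illustration:
pnorm(13, mean = 15, sd = 1.5)   # P(X < 13), approx 0.091 under these assumed parameters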
Using the transformation
Z = (X − μ)/σ ~ N(0, 1),    (8.17)
every normally distributed random variable can be transformed into a
standard normal random variable. We call this transformation the Z-
transformation. We can use this transformation to derive convenient calculation
rules. The probability for X ≤ b is
P(X ≤ b) = Φ((b − μ)/σ).    (8.18)
Consequently, the probability for X > a is
P(X > a) = 1 − P(X ≤ a) = 1 − Φ((a − μ)/σ).    (8.19)
The probability that X realizes a value in the interval [a, b] is
P(a ≤ X ≤ b) = Φ((b − μ)/σ) − Φ((a − μ)/σ).    (8.20)
Because of the symmetry of the probability density function φ(x) around its
mean 0, the following equation holds for the distribution function Φ(x) of a
standard normal random variable for any value a:
Φ(−a) = 1 − Φ(a).    (8.21)
It follows that , see also Fig. 8.6.
Fig. 8.6 Distribution function of the standard normal distribution
Example 8.2.3
Recall Example 8.2.2 where a farmer sold his oranges. He was interested in
for . Using (8.17), we get
and variance
(8.22)
where . In summary, we get
Remark 8.2.2
In fact, in Eq. (8.22), we have used the fact that the sum of normal random
variables also follows a normal distribution, i.e.
In general, it cannot be taken for granted that the sum of two random variables
follows the same distribution as the two variables themselves. As an example,
consider the sum of two independent uniform distributions with and
. It holds that and
, but is obviously not uniformly distributed.
Definition 8.2.3
A random variable X is said to follow an exponential distribution with parameter
λ > 0 if its PDF is given by
f(x) = λ exp(−λx) for x ≥ 0 (and 0 otherwise).    (8.23)
We write X ~ Exp(λ). The mean and variance of an exponentially distributed
random variable X are
E(X) = 1/λ and Var(X) = 1/λ².    (8.24)
For example, suppose someone stands in a supermarket queue for t minutes. Say
the person forgot to buy milk, so she leaves the queue, gets the milk, and stands
in the queue again. If we use the exponential distribution to model the waiting
time, we say that it does not matter what time it is: the random variable “waiting
time from standing in the queue until paying the bill” is not influenced by how
much time has elapsed already; it does not matter if we queued before or not.
Please note that the memorylessness property is shared by the geometric and the
exponential distributions.
There is also a relationship between the Poisson and the exponential
distribution:
Theorem 8.2.1
The number of events Y occurring within a continuum of time is Poisson
distributed with parameter λ if and only if the time between two events is
exponentially distributed with parameter λ.
The continuum of time depends on the problem at hand. It may be a second, a
minute, 3 months, a year, or any other time period.
Example 8.2.4
Let Y be the random variable which counts the “number of accesses per second
for a search engine”. Assume that Y is Poisson distributed with parameter
( ). The random variable X, “waiting time until the next
access”, is then exponentially distributed with parameter . We therefore
get
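The access rate is not reproduced above; assuming, for illustration only, λ = 10 accesses per second, the waiting time X (in seconds) follows an Exp(10) distribution and probabilities can be sketched as:
lambda <- 10                # assumed rate, for illustration only
pexp(0.1, rate = lambda)    # P(X <= 0.1 seconds) = 1 - exp(-1), approx 0.632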
8.3.1 χ²-Distribution
Definition 8.3.1
Let Z₁, Z₂, ..., Zₙ be n independent and identically N(0, 1)-distributed random
variables. The sum of their squares, Z₁² + Z₂² + ... + Zₙ², is then χ²-distributed with n
degrees of freedom and is denoted as χ²ₙ. The PDF of the χ²-distribution is given
in Eq. (C.7) in Appendix C.3.
Theorem 8.3.1
Consider two independent random variables which are χ²ₘ- and χ²ₙ-distributed,
respectively. The sum of these two random variables is χ²-distributed with m + n
degrees of freedom.    (8.25)
8.3.2 t-Distribution
Definition 8.3.2
Let X and Y be two independent random variables where X ~ N(0, 1) and Y ~ χ²ₙ.
The ratio X/√(Y/n) follows a t-distribution with n degrees of freedom, denoted as tₙ.
Theorem 8.3.2
(Student’s theorem) Let X₁, ..., Xₙ be independent and identically N(μ, σ²)-distributed random variables with arithmetic mean X̄ and sample standard deviation S. The ratio
√n (X̄ − μ)/S    (8.26)
follows a t-distribution with n − 1 degrees of freedom.
Moreover, for two independent random variables X ~ χ²ₘ and Y ~ χ²ₙ, the ratio
(X/m)/(Y/n)    (8.27)
follows the Fisher F-distribution with (m, n) degrees of freedom. The PDF of
the F-distribution is given in Eq. (C.9) in Appendix C.3.
(c) The packages contain three toys. What is the probability that among the 5
packages that are given to the family’s youngest daughter, she finds two
toys?
Exercise 8.2
A study on breeding birds collects information such as the length of their eggs
(in mm). Assume that the length is normally distributed with mm and
. What is the probability of
Exercise 8.3
A dodecahedron is a die with 12 sides. Suppose the numbers on the die are 1–12.
Consider the random variable X which describes which number is shown after
rolling the die once. What is the distribution of X? Determine E(X) and Var(X).
Exercise 8.4
Felix states that he is able to distinguish a freshly ground coffee blend from an
ordinary supermarket coffee. One of his friends asks him to taste 10 cups of
coffee and tell him which coffee he has tasted. Suppose that Felix has actually no
clue about coffee and simply guesses the brand. What is the probability of at
least 8 correct guesses?
Exercise 8.5
An advertising board is illuminated by several hundred bulbs. Some of the bulbs
are fused or smashed regularly. If there are more than 5 fused bulbs on a day, the
owner of the board replaces them, otherwise not. Consider the following data
collected over a month which captures the number of days (nᵢ) on which i bulbs
were broken:
Fused bulbs   0   1   2   3   4   5
Days (nᵢ)     6   8   8   5   2   1
(b) What is the average number of broken bulbs per day? What is the variance?
(c) Determine the probabilities using the distribution you chose in (a)
and using the average number of broken bulbs you calculated in (b).
Compare the probabilities with the proportions obtained from the data.
(d) Calculate the probability that at least 6 bulbs are fused, which means they
need to be replaced.
(e) Consider the random variable Y: “time until next bulb breaks”. What is the
distribution of Y?
Exercise 8.6
Marco’s company organizes a raffle at an end-of-year function. There are 4000
raffle tickets to be sold, of which 500 win a prize. The price of each ticket is
€1.50. The value of the prizes, which are mostly electrical appliances produced
by the company, varies between €80 and €250, with an average value of €142.
(c) Given the value of the prizes and the costs of the tickets, is it worth taking
part in the raffle?
Exercise 8.7
A country has a ratio between male and female births of 1.05 which means that
51.22 % of babies born are male.
(a) What is the probability for a mother that the first girl is born during the first
three births?
Exercise 8.8
A fisherman catches, on average, three fish in an hour. Let Y be a random
variable denoting the number of fish caught in one hour and let X be the time
interval between catching two fish. We assume that X follows an exponential
distribution.
Exercise 8.9
A restaurant sells three different types of dessert: chocolate brownies, yogurt
with seasonal fruits, and lemon tart. Years of experience have shown that the
probabilities with which the desserts are chosen are 0.2, 0.3, and 0.5,
respectively.
(a) What is the probability that out of 5 guests, 2 guests choose brownies, 1
guest chooses yogurt, and the remaining 2 guests choose lemon tart?
(b) Suppose two out of the five guests are known to always choose lemon tart.
What is the probability of the others choosing lemon tart as well?
Exercise 8.10
A reinsurance company works on a premium policy for natural disasters. Based
on experience, it is known that “number of natural disasters from October
to March” (winter) is Poisson distributed with . Similarly, the random
variable “number of natural disasters from April to September” (summer) is
Poisson distributed with . Determine the probability that there is at least 1
disaster during both summer and winter based on the assumption that the two
random variables are independent.
Exercise 8.11
Read Appendix C.3 to learn about the Theorem of Large Numbers and the
Central Limit Theorem.
(a) Draw 1000 realizations from a standard normal distribution using R and
calculate the arithmetic mean. Repeat this process 1000 times. Evaluate the
distribution of the arithmetic mean by drawing a kernel density plot and by
calculating the mean and variance of it.
(c) Repeat the procedure in (b) using 10,000 rather than 1000 realizations.
How do the results change and why?
Source Toutenburg, H., Heumann, C., Induktive Statistik, 4th edition, 2007,
Springer, Heidelberg
Part III
Inductive Statistics
9. Inference
9.1 Introduction
The first four chapters of this book illustrated how one can summarize a data set
both numerically and graphically. Interpretations made from such
a descriptive analysis are valid only for the data set under consideration and
cannot necessarily be generalized to other data. However, it is desirable to draw
conclusions about the entire population of interest and not only about the sample
data. In this chapter, we describe the framework of statistical inference, which
allows us to draw conclusions from the sample data about the population of interest, at a
given, prespecified uncertainty level, using knowledge about the random process
generating the data.
Consider an example where the objective is to forecast an election outcome.
This requires us to determine the proportion of votes that each of the k
participating parties is going to receive, i.e. to calculate or estimate .
If it is possible to ask every voter about their party preference, then one can
simply calculate the proportions for each party. However, it is
logistically impossible to ask all eligible voters (which form the population in
this case) about their preferred party. It seems more realistic to ask only a small
fraction of voters and infer from their responses to the responses of the whole
population. It is evident that there might be differences in responses between the
sample and the population—but the more voters are asked, the closer we are to
the population’s preference, i.e. the higher the precision of our estimates for
(the meaning of “precision” will become clearer later in this
chapter). Also, it is intuitively clear that the sample must be a representative
sample of the voters’ population to avoid any discrepancy or bias in the
forecasting. When we speak of a representative sample, we mean that all the
characteristics present in the population are contained in the sample too. There
are many ways to get representative random samples. In fact, there is a branch of
statistics, called sampling theory, which studies this subject [see, e.g. Groves et
al. (2009) or Kauermann and Küchenhoff (2011) for more details]. A simple
random sample is one where each voter has an equal probability of being
selected in the sample and each voter is independently chosen from the same
population. In the following, we will assume that all samples are simple random
samples. To further formalize the election forecast problem, assume that we are
interested in the true proportions which each party receives on the election day.
It is practically impossible to make a perfect prediction of these proportions
because there are too many voters to interview, and moreover, a voter may
make their final decision only when casting the vote and not
before. The voter may change his/her opinion at any moment and may then differ
from what he/she claimed earlier. In statistics, we call these true proportions
parameters of the population. The task is then to estimate these parameters on
the basis of a sample. In the election example, the intuitive estimates for the
proportions in the population are the proportions in the sample and we call them
sample estimates. How to find good and precise estimates is one of the
challenges addressed by the concept of statistical inference. Now, it is
possible to describe the election forecast problem in a statistical and operational
framework: estimate the parameters of a population by calculating the sample
estimates. An important property of every good statistical inference procedure is
that it provides not only estimates for the population parameters but also
information about the precision of these estimates.
Consider another example in which we would like to study the distribution of
weight of children in different age categories and get an understanding of the
“normal” weight. Again, it is not possible to measure the weight of all the
children of a specific age in the entire population of children in a particular
country. Instead, we draw a random sample and use methods of statistical
inference to estimate the weight of children in each age group. More specifically,
we have several populations in this problem. We could consider all boys of a
specific age and all girls of a specific age as two different populations. For
example, all 3-year-old boys will form one possible population. Then, a random
sample is drawn from this population. It is reasonable to assume that the
distribution of the weight of k-year-old boys follows a normal distribution with
some unknown parameters and . Similarly, another population of k-year-
old girls is assumed to follow a normal distribution with some unknown
parameters and . The indices kb and kg are used to emphasize that the
parameters may vary by age and gender. The task is now to calculate the
estimates of the unknown parameters (in the population) of the normal
distributions from the samples. Using quantiles, a range of “normal” weights can
then be specified, e.g. the interval from the 1 % quantile to the 99 % quantile of
the estimated normal distribution or, alternatively, all weights which are not
more than twice the standard deviation away from the mean. Children with
weights outside this interval may be categorized as underweight or overweight.
Note that we make a specific assumption for the distribution class; i.e. we
assume a normal distribution for the weights and estimate its parameters. We call
this a parametric estimation problem because it is based on distributional
assumptions. Otherwise, if no distributional assumptions are made, we speak of
a nonparametric estimation problem.
(9.1)
The index denotes that the expectation is calculated with respect to the
distribution whose parameter is . The bias of an estimator T(X) is defined as
(9.2)
It follows that an estimator is said to be unbiased if its bias is zero.
Definition 9.2.2
The variance of T(X) is defined as
(9.3)
Both bias and variance are measures which characterize the properties of an
estimator. In statistical theory, we search for “good” estimators in the sense that
the bias and the variance are as small as possible and therefore the accuracy is as
high as possible. Readers interested in a practical example may consult
Examples 9.2.1 and 9.2.2, or the explanations for Fig. 9.1.
It turns out that we cannot minimize both measures simultaneously as there
is always a so-called bias–variance tradeoff. A measure which combines bias and
variance into one measure is the mean squared error.
Definition 9.2.3
The mean squared error (MSE) of T (X) is defined as
(9.4)
The expression (9.4) can be partitioned into two parts: the variance and the
squared bias, i.e.
(9.5)
This can be proven as follows:
Note that the calculation is based on the result that the cross product term is zero.
The mean squared error can be used to compare different biased estimators.
Definition 9.2.4
An estimator is said to be MSE-better than another estimator for
estimating if
where and is the parameter space , i.e. the set of all possible values of .
Often, is or all positive real values . For example, for a normal
distribution, , can be any real value and has to be a number greater
than zero.
Unfortunately, we cannot find an MSE-optimal estimator in the sense that an
estimator is MSE-better than all other possible estimators for all possible values
of . This becomes clear if we define the constant estimator
(independent of the actual sample): if , i.e. if the constant value equals the
true population parameter we want to estimate, then the MSE of this constant
estimator is zero (but it will be greater than zero for all other values of , and the
bias increases as c moves further away from the true parameter). Usually, we can
only find estimators which are locally best (in a certain subset of ). This is why
classical statistical inference restricts the search for best estimators to the class of
unbiased estimators. For unbiased estimators, the MSE is equal to the variance
of an estimator. In this context, the following definition is used for comparing
two (unbiased) estimators.
Definition 9.2.5
An unbiased estimator is said to be more efficient than another unbiased
estimator for estimating if
and
for at least one . It turns out that restricting our search for best estimators to
unbiased estimators is sometimes a successful strategy; i.e. for many problems, a
best or most efficient estimator can be found. If such an estimator exists, it is said
to be UMVU (uniformly minimum variance unbiased). Uniformly means that it
has the lowest variance among all other unbiased estimators for estimating the
population parameter(s) .
Consider the illustration in Fig. 9.1 to better understand the introduced concepts.
Suppose we throw three darts at a target and the goal is to hit the centre of the
target, i.e. the innermost circle of the dart board. The centre represents the
population parameter . The three darts play the role of three estimates
(based on different realizations of the sample) of the population parameter .
Four possible situations are illustrated in Fig. 9.1. For example, in Fig. 9.1b, we
illustrate the case of an estimator which is biased but has low variance: all three
darts are “far” away from the centre of the target, but they are “close” together. If
we look at Fig. 9.1a, c, we see that all three darts are symmetrically grouped
around the centre of the target, meaning that there is no bias; however, in
Fig. 9.1a there is much higher precision than in Fig. 9.1c. It is obvious that
Fig. 9.1a presents an ideal situation: an estimator which is unbiased and has
minimum variance.
Theorem 9.2.1
Let be an i.i.d. (random) sample of a random variable X with
population mean and population variance , for all
. Then the arithmetic mean is an unbiased estimator of
and the sample variance is an unbiased estimator of .
Note that the theorem holds, in general, for i.i.d. samples, irrespective of the
choice of the distribution of the ’s. Note again that we are looking at the
situation before we have any observations on X. Therefore, we again use capital
letters to denote that the ’s are random variables which are not known
beforehand (i.e. before we actually record the observations on our selected
sampling units).
Remark 9.2.1
The empirical variance (computed with denominator n rather than n − 1) is a biased estimate of σ², and its bias
is −σ²/n.
Example 9.2.1
Let be identically and independently distributed variables whose
population mean is μ and population variance is σ². Then the arithmetic mean is an
unbiased estimator of μ. This can be shown as follows:
Now, we consider another example to illustrate that estimators may not always be unbiased.
Example 9.2.2
Let X₁, X₂, …, Xₙ be identically and independently distributed variables whose population mean is μ and population variance is σ². Then the empirical variance S̃² = (1/n) Σᵢ (Xᵢ − X̄)² is a biased estimator of σ². This can be shown as follows:
(9.6)
Example 9.2.3
Suppose a random sample of size n = 20 of the weight of 10-year-old children in a particular city is drawn. Let us assume that the children’s weight in the population follows a normal distribution N(μ, σ²). The sample provides the following values of weights (in kg):
following values of weights (in kg):
40.2, 32.8, 38.2, 43.5, 47.6, 36.6, 38.4, 45.5, 44.4, 40.3
34.6, 55.6, 50.9, 38.9, 37.8, 46.8, 43.6, 39.5, 49.9, 34.2
To obtain an estimate of the population mean μ, we calculate the arithmetic mean of the observations as
x̄ = (1/20)(40.2 + 32.8 + ⋯ + 34.2) ≈ 41.97 kg.
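The calculation can be reproduced in R, for instance as follows:
weight <- c(40.2, 32.8, 38.2, 43.5, 47.6, 36.6, 38.4, 45.5, 44.4, 40.3,
            34.6, 55.6, 50.9, 38.9, 37.8, 46.8, 43.6, 39.5, 49.9, 34.2)
mean(weight)   # approximately 41.97, the point estimate of the population mean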
Example 9.2.4
A library draws a random sample of size n = 100 members from the members’ database to see how many members have to pay a penalty for returning books late, i.e. each sampled member either has to pay a penalty or not. It turns out that 39 members in the sample have to pay a penalty. Therefore, an unbiased estimator of the population proportion p of all members of the library who return books late is
p̂ = x̄ = 39/100 = 0.39.
Remark 9.2.2
Unbiasedness and efficiency can also be defined asymptotically: we say, for
example, that an estimator is asymptotically unbiased, if the bias approaches
zero when the sample size tends to infinity. The concept of asymptotic efficiency
involves some mathematical knowledge which is beyond the intended scope of
this book. Loosely speaking, an asymptotic efficient estimator is an estimator
which achieves the lowest possible (asymptotic) variance under given
distributional assumptions. The estimators introduced in Sect. 9.3.1, which are
based on the maximum likelihood principle, have these properties (under certain
mathematically defined regularity conditions).
Definition 9.2.6
Let T₁(X), T₂(X), …, Tₙ(X), … be a sequence of estimators for the parameter θ, where Tₙ(X) = Tₙ(X₁, X₂, …, Xₙ) is a function of X₁, X₂, …, Xₙ. The sequence {Tₙ(X)} is a consistent sequence of estimators for θ if for every ε > 0,
lim (n → ∞) P(|Tₙ(X) − θ| < ε) = 1,
or equivalently
lim (n → ∞) P(|Tₙ(X) − θ| ≥ ε) = 0.
This definition says that, as the sample size n increases, the probability that Tₙ(X) is getting closer to θ approaches 1. This means that the estimator is getting closer to the parameter θ as n grows larger. Note that there is no information on how fast Tₙ(X) is converging to θ in the sense of convergence defined above.
Example 9.2.5
Let X₁, X₂, …, Xₙ be identically and independently distributed variables with expectation μ and variance σ². Then for X̄ₙ = (1/n) Σᵢ Xᵢ, we have E(X̄ₙ) = μ and Var(X̄ₙ) = σ²/n. For any ε > 0, Chebyshev’s inequality gives
P(|X̄ₙ − μ| ≥ ε) ≤ Var(X̄ₙ)/ε² = σ²/(nε²) → 0 as n → ∞,
and hence P(|X̄ₙ − μ| < ε) → 1, so that X̄ₙ is a (weakly) consistent estimator of μ.
Remark 9.2.3
We call this type of consistency weak consistency. Another definition is MSE
consistency, which says that an estimator is MSE consistent if as
. If the estimator is unbiased, it is sufficient that as . If
is MSE consistent, it is also weakly consistent. Therefore, it follows that
an unbiased estimator with its variance approaching zero as the sample size
approaches infinity is both MSE consistent and weakly consistent.
Definition 9.2.7
Let X₁, X₂, …, Xₙ be a random sample from a probability density function (or probability mass function) f(x; θ). A statistic T is said to be sufficient for θ if the conditional distribution of X₁, X₂, …, Xₙ given T = t is independent of θ.
Theorem 9.2.2
(Neyman–Fisher Factorization Theorem (NFFT)) Let X₁, X₂, …, Xₙ be a random sample from a probability density function (or probability mass function) f(x; θ). A statistic T = T(x₁, x₂, …, xₙ) is sufficient for θ if and only if the joint density of X₁, X₂, …, Xₙ can be factorized as
f(x₁, x₂, …, xₙ; θ) = g(t; θ) · h(x₁, x₂, …, xₙ),
where h(x₁, x₂, …, xₙ) does not depend on θ.
Example 9.2.6
Let be a random sample from where is unknown. We
attempt to find a sufficient statistic for . Consider the following function as the
joint distribution of (whose interpretation will become clearer in the
next section):
Here
Using the Neyman–Fisher Factorization Theorem, we conclude that
is a sufficient statistic for . Also,
is sufficient for as it is a one-to-one statistic of .
On the other hand, is not sufficient for as it is not a one-to-one function
of . The important point here is that is a function of the sufficient
statistic and hence a good estimator for . It is thus summarizing the sample
information about the parameter of interest in a complete yet parsimonious way.
Another, multivariate, example of sufficiency is given in Appendix C.4.
(9.7)
(9.8)
The joint density function of is called the likelihood function. For
better understanding, consider a sample of size 5 with
. The likelihood (function) is
(9.9)
The maximum likelihood estimation principle now says that the estimator of p
is the value of p which maximizes the likelihood (9.8) or (9.9). In other words,
the maximum likelihood estimate is the value which maximizes the probability
of observing the realized sample from the likelihood function. In general, i.e. for
any sample, we have to maximize the likelihood function (9.9) with respect to p.
We use the well-known principle of maxima–minima to maximize the likelihood
function in this case. In principle, any other optimization procedure can also be
used, for example numerical algorithms such as the Newton–Raphson algorithm.
If the likelihood is differentiable, the first-order condition for the maximum is
that the first derivative with respect to p is zero. For maximization, we can
transform the likelihood by a strictly monotone increasing function. This
guarantees that the potential maximum is taken at the same point as in the
original likelihood. A good and highly common choice is the natural logarithm
since it transforms products into sums, and sums are easy to differentiate by
differentiating each term in the sum. The log-likelihood in our example is
therefore
(9.10)
(9.11)
where denotes the natural logarithm function and we use the rules
(9.12)
Setting (9.12) to zero and solving for p leads to the maximum likelihood estimate p̂ = (1/n) Σᵢ xᵢ, i.e. the relative frequency of successes in the sample.
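As an illustration of the maximization step, the log-likelihood of a binary sample can also be maximized numerically in R. The data vector below is hypothetical and only serves to demonstrate the principle:
x <- c(1, 0, 1, 1, 1)                      # hypothetical binary observations
loglik <- function(p) sum(x) * log(p) + (length(x) - sum(x)) * log(1 - p)
optimize(loglik, interval = c(0.001, 0.999), maximum = TRUE)$maximum
# close to mean(x) = 0.8, the analytic maximum likelihood estimate of p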
Remark 9.3.1
More examples of maximum likelihood estimators are given in Exercises
9.1–9.3.
(9.13)
The probability P(I_l(X) ≤ θ ≤ I_u(X)) = 1 − α is called the confidence level or confidence coefficient, I_l(X) is called the lower confidence bound or lower confidence limit, and I_u(X) is called the upper confidence bound or upper confidence limit. It is important to note that the bounds are random and the parameter θ is a fixed value. This is the reason why we say that the true parameter is covered by the interval with probability 1 − α and not that the probability that the interval contains the parameter is 1 − α. Please note that some software packages use the term “error bar” when referring to confidence intervals.
Frequency interpretation of the confidence interval: Suppose N independent samples of size n are drawn from the same population and N confidence intervals of the form (9.13) are calculated. If N is large enough, then on average N · (1 − α) of the intervals (9.13) cover the true parameter.
Example 9.4.1
Let a random variable follow a normal distribution with and .
Suppose we draw a sample of observations repeatedly. The sample will
differ in each draw, and hence, the mean and the confidence interval will also
differ. The data sets are realizations from random variables. Have a look at
Fig. 9.3 which illustrates the mean and the 95 % confidence intervals for 6
random samples. They vary with respect to the mean and the confidence interval
width. Most of the means are close to , but not all. Similarly, most
confidence intervals, but not all, include . This is the idea of the frequency
interpretation of the confidence interval: different samples will yield different
point and interval estimates. Most of the time the interval will cover μ, but not always. The coverage probability is specified by 1 − α, and the frequency interpretation means that we expect (approximately) (1 − α) · 100 % of the intervals to cover the true parameter μ. In that sense, the location of the interval
will give us some idea about where the true but unknown population parameter
lies, while the length of the interval reflects our uncertainty about : the wider
the interval is, the higher is our uncertainty about the location of .
or
(9.15)
This is known as the 1 − α confidence interval for μ, or the confidence interval for μ with confidence coefficient 1 − α.
We can use the R function qnorm or Table C.1 to obtain z_{1−α/2}, see also Sects. 8.4, A.3, and C.7. For example, for α = 0.05 and α = 0.01 we get z_{0.975} = 1.96 and z_{0.995} = 2.576 using qnorm(0.975) and qnorm(0.995). This gives us the quantiles we need to determine a 95 % and 99 % confidence interval, respectively.
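In R, the two quantiles can be obtained as follows:
qnorm(0.975)   # 1.959964, used for a 95 % confidence interval
qnorm(0.995)   # 2.575829, used for a 99 % confidence interval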
Example 9.4.2
We consider again Example 9.2.3 where we evaluated the weight of 10-year-old
children. Assume that the variance is known to be σ² = 36; then the upper and lower limits of a 95 % confidence interval for the expected weight μ can be calculated as follows:
[41.97 − 1.96 · 6/√20; 41.97 + 1.96 · 6/√20], or approximately [39.34; 44.59].
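A sketch of how these limits can be reproduced in R, using the 20 weights from Example 9.2.3 and the known standard deviation of 6:
weight <- c(40.2, 32.8, 38.2, 43.5, 47.6, 36.6, 38.4, 45.5, 44.4, 40.3,
            34.6, 55.6, 50.9, 38.9, 37.8, 46.8, 43.6, 39.5, 49.9, 34.2)
mean(weight) + c(-1, 1) * qnorm(0.975) * 6 / sqrt(length(weight))
# approximately [39.34; 44.59]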
(9.16)
which is the 1 − α confidence interval for μ when the variance is unknown, or the confidence interval for μ with confidence coefficient 1 − α.
The interval (9.16) is, in general, wider than the interval (9.15) for identical 1 − α and identical sample size n, since the unknown parameter σ² is estimated by the sample variance S², which induces additional uncertainty. The quantiles for the t-distribution can
be obtained using the R command qt or Table C.2.
Example 9.4.3
Consider Example 9.4.2 where we evaluated the weight of 10-year-old children. We have already calculated the point estimate of μ as x̄ = 41.97. With t_{0.975;19} = 2.093, obtained via qt(0.975,19) or Table C.2, the upper and lower limits of a 95 % confidence interval for μ are obtained as
x̄ ± t_{0.975;19} · s/√20 ≈ 41.97 ± 2.84.
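A sketch in R, now estimating the standard deviation from the sample; t.test reports the same interval directly:
weight <- c(40.2, 32.8, 38.2, 43.5, 47.6, 36.6, 38.4, 45.5, 44.4, 40.3,
            34.6, 55.6, 50.9, 38.9, 37.8, 46.8, 43.6, 39.5, 49.9, 34.2)
mean(weight) + c(-1, 1) * qt(0.975, df = 19) * sd(weight) / sqrt(20)
t.test(weight)$conf.int   # returns the same 95 % confidence interval
The interval is somewhat wider than the one based on the known variance, reflecting the additional uncertainty from estimating σ².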
(9.17)
This gives us
(9.18)
and we get a confidence interval for p as
(9.19)
Example 9.4.4
We look again at Example 9.2.4 where we evaluated the proportion of members who had to pay a penalty. Out of all borrowers, 39 % brought back their books late and thus had to pay a fee. A 95 % confidence interval for the probability p of bringing back a book late can be constructed using the normal approximation, since np̂(1 − p̂) = 100 · 0.39 · 0.61 = 23.79 is sufficiently large. With p̂ = 0.39 and z_{0.975} = 1.96, we get the 95 % confidence interval as
One can see that the exact and approximate confidence limits differ slightly
due to the normal approximation which approximates the exact binomial
probabilities.
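Both the normal approximation and the exact interval can be obtained in R; a minimal sketch:
p.hat <- 39 / 100
p.hat + c(-1, 1) * qnorm(0.975) * sqrt(p.hat * (1 - p.hat) / 100)
# approximately [0.294; 0.486] (normal approximation)
binom.test(x = 39, n = 100)$conf.int   # exact (Clopper-Pearson) interval for comparison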
                       Y              Total (row)
X                   a       b           a + b
                    c       d           c + d
Total (column)    a + c   b + d            n
In the spirit of the preceding sections, we can interpret the entries in this
contingency table as population parameters. For example, a describes the
absolute frequency of observations in the population for which and
. If we have a sample, then we can estimate a by the number of observations in the sample for which and . We can thus view to be an
estimator for a, to be an estimator for b, to be an estimator for c, and
to be an estimator for d. It follows that
(9.20)
serves as the point estimate for the population odds ratio OR = (a/b)/(c/d) = ad/(bc). To construct a confidence interval for the odds ratio, we need to work on the log-scale. The log odds ratio,
(9.21)
takes the natural logarithm of the odds ratio. It is evident that it can be
estimated using the observed absolute frequencies of the joint frequency
distribution of X and Y:
(9.22)
It can be shown that follows approximately a normal distribution with
expectation and standard deviation
(9.23)
Following the reasoning explained in the earlier section on confidence
intervals for binomial probabilities, we can calculate the confidence
interval for under a normal approximation as follows:
(9.24)
Since we are interested in the confidence interval of the odds ratio, and not
the log odds ratio, we need to transform back the lower and upper bound of the
confidence interval as
(9.25)
Example 9.4.5
Recall Example 4.2.5 from Chap. 4 where we were interested in the association
of smoking with a particular disease. The data is summarized in the following
contingency table:
The odds ratio was estimated to be 2.76, and we therefore concluded that the chances of having the particular disease are 2.76 times higher for smokers compared with non-smokers. To calculate a 95 % confidence interval, we need z_{0.975} = 1.96, and
There are many ways to obtain the same results in R. One option is to use the
oddsratio function of the library epitools . Note that we need to specify
“wald” under the methods option to get confidence intervals which use the
normal approximation as we did in this case.
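A sketch of the R call described above. The 2 × 2 counts below are placeholders only; they have to be replaced by the actual frequencies from Example 4.2.5:
library(epitools)
counts <- matrix(c(30, 70,
                   20, 80), nrow = 2, byrow = TRUE)   # placeholder counts, not the real data
oddsratio(counts, method = "wald")   # odds ratio with a Wald (normal approximation) CI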
(9.26)
We would now like to fix the width of the confidence interval and come up
with a sample size which is required to achieve this width. Let us fix the length
of the confidence interval as
(9.27)
Assume we have knowledge of . The knowledge about can be
obtained, for example, through a pilot study or past experience with the
experiment. We are interested in obtaining the value of n for which a confidence
interval has a fixed confidence width of or less. Rearranging (9.27) gives us
(9.28)
This means a minimum or optimum sample size is
(9.29)
The sample size ensures that the confidence interval for has at
most length . But note that we have assumed that is known. If we do not
know (which is more likely in practice), we have to make an assumption
about it, e.g. by using an estimate from a former study, a pilot study, or other
external information. Practically, (9.28) is used in the case of known and
unknown .
Example 9.5.1
A call centre is interested in determining the expected length of a telephone call
as precisely as possible. The requirements are that the 95 % confidence interval
for μ should have a width of 1 min. Suppose that the call centre has conducted a pilot study in which the standard deviation was estimated to be σ = 5 min. The sample size n that is needed to estimate the expected length of the phone calls with the desired precision is
n ≥ (2 · z_{0.975} · σ / Δ)² = (2 · 1.96 · 5 / 1)² = 384.16, where Δ = 1 is the prescribed interval width.
This means that at least 385 calls are required to get the desired confidence interval width.
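The calculation can be sketched in R as follows:
sigma <- 5   # standard deviation (minutes) from the pilot study
delta <- 1   # desired total width of the 95 % confidence interval (minutes)
n <- (2 * qnorm(0.975) * sigma / delta)^2
n            # approximately 384.1
ceiling(n)   # round up to the next integer: 385 calls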
we get
(9.30)
Example 9.5.2
A factory may be interested in the probability of an error in an operating process. The length of the confidence interval should be Δ = 0.04 (i.e. a precision of ± 2 %). Suppose it is speculated that the error probability is 10 %; we may then use p = 0.1 as our prior judgment for the true value of p. This yields
n ≥ (2 · z_{0.975} / Δ)² · p(1 − p) = (2 · 1.96 / 0.04)² · 0.1 · 0.9 = 864.36.    (9.31)
This means we need a sample size of at least 865 to obtain the desired width of the confidence interval for p.
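The corresponding R sketch for the proportion is:
p     <- 0.1    # speculated error probability
delta <- 0.04   # desired total width of the 95 % confidence interval
ceiling((2 * qnorm(0.975) / delta)^2 * p * (1 - p))   # 865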
The above examples for both and p have shown us that without external
knowledge about the research question of interest, it is difficult to come up with
an appropriate sample size. Results may vary considerably depending on what
type of information is assumed to be known. With limited knowledge, it can be
useful to report results for different widths of confidence intervals and
hypothesized values of p or .
Sample size calculations can be highly complex in many practical situations
and may not remain as simple as in the examples considered here. For example,
Chap. 10 uses additional concepts in the context of hypothesis testing, such as
the power, which can be taken into consideration when estimating sample sizes.
However, in this case, calculations and interpretations become more difficult and
complex. A detailed overview of sample size calculations can be found in Chow
et al. (2007) and Bock (1997).
(b) What does the log-likelihood function look like for the following
realizations: Plot the function using R.
Hint: The curve command can be used to plot functions.
(c) Use the Neyman–Fisher Factorization Theorem to argue that the maximum
likelihood estimate obtained in (a) is a sufficient statistic for .
Exercise 9.2
Consider an i.i.d. sample of size n from a distributed random variable X.
(a) Determine the maximum likelihood estimator for under the assumption
that .
(b) Now determine the maximum likelihood estimator for for an arbitrary
.
Exercise 9.3
Let be n i.i.d. random variables which follow a uniform
distribution, . Write down the likelihood function and argue, without
differentiating the function, what the maximum likelihood estimate of is.
Exercise 9.4
Let be n i.i.d. random variables which follow an exponential
distribution. An intelligent statistician proposes to use the following two
estimators to estimate :
(ii) .
Exercise 9.5
A national park in Namibia determines the weight (in kg) of a sample of
common eland antelopes:
450 730 700 600 620 660 850 520 490 670 700 820
910 770 760 620 550 520 590 490 620 660 940 790
Calculate
Exercise 9.6
We are interested in the heights of the players of the two basketball teams “Brose
Baskets Bamberg” and “Bayer Giants Leverkusen” as well as the football team
“SV Werder Bremen”. The following summary statistics are given:
Calculate a 95 % confidence interval for for all three teams and interpret the
results.
Exercise 9.7
A married couple tosses a coin after each dinner to determine who has to wash
the dishes. If the coin shows “head”, then the husband has to wash the dishes,
and if the coin shows “tails”, then the wife has to wash the dishes. After 98
dinners, the wife notes that the coin has shown head 59 times.
(a) Estimate the probability that the wife has to wash the dishes.
(c) How many dinners are needed to estimate the true probability for the coin
showing “head” with a precision of ±0.5 % under the assumption that the
coin is fair?
Exercise 9.8
Suppose 93 out of 104 pupils have passed the final examination at a certain
school.
(b) At county level 3.2 % of pupils failed the examination. Are the school’s
pupils worse than those in the whole county?
Exercise 9.9
To estimate the audience rate for several TV stations, 3000 households are asked
to allow a device, which records which TV station is watched, to be installed on
their TVs. 2500 agreed to participate. Assume it is of interest to estimate the
probability of someone switching on the TV and watching the show “Germany’s
next top model”.
(a) What is the precision with which the probability can be estimated?
Exercise 9.10
(a) He uses the decathlon data from this book (Appendix A.2) to come up with
. What sample size does he need to calculate a 95 %
confidence interval for the mean running time which is precise to s?
(c) The runner’s own best time is 10.86 s. He wants to be among the best 10 %
of all athletes. Calculate an appropriate confidence interval to compare his
time with the 10 % best times.
Exercise 9.11
Consider the pizza delivery data described in Appendix A.4. We distinguish between
pizzas delivered on time (i.e. in less than 30 min) and not delivered on time (i.e.
in more than 30 min). The contingency table for delivery time and operator looks
as follows:
                 Operator
                 Laura    Melissa    Total
< 30 min          163       151       314
(a) Calculate and interpret the odds ratio and its 95 % confidence interval.
10.1 Introduction
We introduced point and interval estimation of parameters in the previous
chapter. Sometimes, the research question is less ambitious in the sense that we
are not interested in precise estimates of a parameter, but we only want to
examine whether a statement about a parameter of interest or the research
hypothesis is true or not (although we will see later in this chapter that there is a
connection between confidence intervals and statistical tests, called duality).
Another related issue is that once an analyst estimates the parameters on the
basis of a random sample, (s)he would like to infer something about the value of
the parameter in the population. Statistical hypothesis tests facilitate the
comparison of estimated values with hypothetical values.
Example 10.1.1
As a simple example, consider the case where we want to find out whether the
proportion of votes for a party P in an election will exceed 30 % or not.
Typically, before the election, we will try to get representative data about the
election proportions for different parties (e.g. by telephone interviews) and then
make a statement like “yes”, we expect that P will get more than 30 % of the
votes or “no”, we do not have enough evidence that P will get more than 30 % of
the votes. In such a case, we will only know after the election whether our
statement was right or wrong. Note that the term representative data only means
that the sample is similar to the population with respect to the distributions of
some key variables, e.g. age, gender, and education. Since we use one sample to
compare it with a fixed value (30 %), we call it a one-sample problem .
Example 10.1.2
Consider another example in which a clinical study is conducted to compare the
effectiveness of a new drug (B) to an established standard drug (A) for a specific
disease, for example too high blood pressure. Assume that, as a first step, we
want to find out whether the new drug causes a higher reduction in blood
pressure than the already established older drug. A frequently used study design
for this question is a randomized (i.e. patients are randomly allocated to one of
the two treatments) controlled clinical trial (double blinded, i.e. neither the
patient nor the doctor know which of the drugs a patient is receiving during the
trial), conducted in a fixed time interval, say 3 months. A possible hypothesis is
that the average change in the blood pressure in group B is higher than in group
A, i.e. where and is the average blood pressure
at baseline before measuring the blood pressure again after 3 months ( ). Note
that we expect both the differences and to be positive, since otherwise we
would have some doubt that either drug is effective at all. As a second step (after
statistically proving our hypothesis), we are interested in whether the
improvement of B compared to A is relevant in a medical or biological sense and
is valid for the entire population or not. This will lead us again to the estimation
problems of the previous chapter, i.e. quantifying an effect using point and
interval estimation. Since we are comparing two drugs, we need to have two
samples from each of the drugs; hence, we have a two-sample problem . Since
the patients receiving A are different from those receiving B in this example, we
refer to it as a “two-independent-samples problem”.
Example 10.1.3
In another example, we consider an experiment in which a group of students
receives extra mathematical tuition. Their ability to solve mathematical problems
is evaluated before and after the extra tuition. We are interested in knowing
whether the ability to solve mathematical problems increases after the tuition, or
not. Since the same group of students is used in a pre–post experiment, this is
called a “two-dependent-samples problem” or a “paired data problem”.
10.2.2 Hypotheses
A researcher may have a research question for which the truth about the
population of interest is unknown. Suppose data can be obtained using a survey,
observation, or an experiment: if, given a prespecified uncertainty level, a
statistical test based on the data supports the hypothesis about the population, we
say that this hypothesis is statistically proven. Note that the research question
has to be operationalized before it can be tested by a statistical test. Consider the
drug Example 10.1.2: we want to examine whether the new drug B has a greater
blood pressure lowering effect than the standard drug A. We have several options
to operationalize this research question into a statistical set-up. One is to test
whether the average reduction (from baseline to 3 months) of the blood pressure
is higher (and positive) for drug B than drug A. We then state our hypotheses in
terms of expected values (i.e. ). Why do we have to use the expected values
and not simply compare the arithmetic means ? The reason is that the
superiority of B shown in the sample will only be valid for this sample and not
necessarily for another sample. We need to show the superiority of B in the
entire population, and hence, our hypothesis needs to reflect this. Another option
would be, for example, to use median changes in blood pressure values instead
of mean changes in blood pressure values. An important point is that the
research hypothesis which we want to prove has to be formulated as the
statistical alternative hypothesis , often denoted by . The reason for this will
become clearer later in this chapter. The opposite of the research hypothesis has
to be formulated as the statistical null hypothesis , denoted by . In the drug
example, the alternative and null hypotheses are, respectively,
and
We note that the two hypotheses are disjoint and the union of them covers all
possible differences of and . There is a boundary value ( ) which
separates the two hypotheses. Since we want to show the superiority of B, the
hypothesis was formulated as a one-sided hypothesis. Note that there are
different ways to formulate two-sample hypotheses; for example, is
equivalent to . In fact, it is very common to formulate two-sample
hypotheses as differences, which we will see later in this chapter.
Example 10.2.1
One-sample problems often test whether a target value is achieved or not. For
example, consider the null hypothesis as
average filling weight of packages of flour = 1 kg
average body height (men) = 178 cm.
The alternative hypothesis is formulated as deviation from the target
value. If deviations in both directions are interesting, then is formulated as a
two-sided hypothesis,
average body height (men) ≠ 178 cm.
If deviations in a specific direction are the subject of interest, then is
formulated as a one-sided hypothesis, for example,
average filling weight of flour packages is lower than 1 kg.
average filling weight of flour packages is greater than 1 kg.
Two-sample problems often examine differences of two samples. Suppose
the null hypothesis is related to the average weight of flour packages filled by
two machines, say 1 and 2. Then, the null hypothesis is
average weight of flour packages filled by machine 1 = average weight
of flour packages filled by machine 2.
Then, can be formulated as a one-sided or two-sided hypothesis. If we
want to prove that machine 1 and machine 2 have different filling weights, then
would be formulated as a two-sided hypothesis
average filling weight of machine 1 ≠ average filling weight of machine 2.
If we want to prove that machine 1 has lower average filling weight than
machine 2, would be formulated as a one-sided hypothesis
average filling weight of machine 1 < average filling weight of
machine 2.
If we want to prove that machine 2 has lower filling weight than machine 1,
would be formulated as a one-sided hypothesis
average filling weight of machine 1 > average filling weight of
machine 2.
Remark 10.2.1
Note that we have not considered the following situation: H₀: μ ≠ μ₀, H₁: μ = μ₀. In general, with the tests described in this chapter, we cannot prove the equality of a parameter to a predefined value, and neither can we prove the equality of two parameters, as in H₀: μ₁ ≠ μ₂, H₁: μ₁ = μ₂. We can, for example, not prove
(statistically) that machines 1 and 2 in the previous example provide equal filling
weight. This would lead to the more complex class of equivalence tests , which
is a topic beyond the scope of this book.
(1) Define the distributional assumptions for the random variables of interest,
and specify them in terms of population parameters (e.g. or and ).
This is necessary for parametric tests. There are other types of tests, so-
called nonparametric tests, where the assumptions can be relaxed in the
sense that we do not have to specify a particular distribution, see
Sect. 10.6ff. Moreover, for some tests the distributional assumptions can be
relaxed if the sample size is large.
(2) Formulate the null hypothesis and the alternative hypothesis as described in
Sects. 10.2.2 and 10.2.3.
(3) Fix a significance value (often called type I error) , for example ,
see also Sect. 10.2.4.
(5) Construct a critical region K for the statistic T, i.e. a region where—if T
falls in this region— is rejected, such that
The notation means that this inequality must hold for all parameter
values that belong to the null hypothesis . Since we assume that we
know the distribution of under , the critical region is defined by
those values of which are unlikely (i.e. with probability of less than
) to be observed under the null hypothesis. Note that although T(X) is a
random variable, K is a well-defined region, see Fig. 10.1 for an example.
(7) Decision rule: if t(x) falls into the critical region K, the null hypothesis
is rejected. The alternative hypothesis is then statistically proven. If t(x)
falls outside the critical region, is not rejected.
The next two paragraphs show how to arrive at the test decisions from step
7 in a different way. Readers interested in an example of a statistical test
may jump to Sect. 10.3.1 and possibly also Example 10.3.1.
Example 10.2.2
Assume that we are dealing with a two-sided test and assume further that the test
statistic T(x) is N(0, 1)-distributed under . The significance level is . If
we observe, for example, , then the p-value is . This can be
calculated in R as
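A minimal sketch of this calculation; the observed value of the test statistic below is an assumed example value:
t.obs <- 1.97                  # assumed observed test statistic
2 * (1 - pnorm(abs(t.obs)))    # two-sided p-value under N(0, 1), approximately 0.0488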
The p-value is sometimes also called the significance, although we prefer the
term p-value. We use the term significance only in the context of a test result: a
test is (statistically) significant if (and only if) can be rejected.
Unfortunately, the p-value is often over-interpreted: both a test and the p-
value can only provide a yes/no decision: either is rejected or not.
Interpreting the p-value as the probability that the null hypothesis is true is
wrong! It is also incorrect to say that the p-value is the probability of making an
error during the test decision. In our (frequentist) context, hypotheses are true or
false and no probability is assigned to them. It can also be misleading to speak of
“highly significant” results if the p-value is very small. A last remark: the p-
value itself is a random variable: under the null hypothesis, it follows a uniform
distribution, i.e. .
see also Theorem 7.3.2. Note that X̄ follows a normal distribution even if the Xᵢ’s are not normally distributed, provided that n is large enough; this follows from the Central Limit Theorem (Appendix C.3). One can conclude that the distributional assumption from step 1 is thus particularly important for small samples, but not necessarily important for large samples. As a rule of thumb, a sample of more than 30 observations is considered to be a large sample. This rule is based on the knowledge that a t-distribution with more than 30 degrees of freedom gets very close to a N(0, 1)-distribution.
5. Critical region: Since the test statistic is N(0, 1)-distributed, we get
the following critical regions, depending on the hypothesis:
(a)
(b)
(c)
Fig. 10.2 Critical region of a one-sided one-sample Gauss test.
7. Test decision: If the realized test statistic from step 6 falls into the critical
region, is rejected (and therefore, is statistically proven). Table 10.1
summarizes the test decisions depending on t(x) and the quantiles defining the
appropriate critical regions.
Table 10.1 Rules to make test decisions for the one-sample Gauss test (and the two-sample Gauss test, the
one-sample approximate binomial test, and the two-sample approximate binomial test—which are all
discussed later in this chapter)
Case                                          Reject H₀ if
(a)  H₀: μ = μ₀   vs.  H₁: μ ≠ μ₀             |t(x)| > z_{1−α/2}
(b)  H₀: μ ≥ μ₀   vs.  H₁: μ < μ₀             t(x) < −z_{1−α}
(c)  H₀: μ ≤ μ₀   vs.  H₁: μ > μ₀             t(x) > z_{1−α}
Example 10.3.1
A bakery supplies loaves of bread to supermarkets. The stated selling weight (and therefore the required minimum expected weight) is 2 kg. However, not every package weighs exactly 2 kg because there is variability in the weights. It is therefore important to find out if the average weight of the loaves is significantly smaller than 2 kg. The weight X (measured in kg) of the loaves is assumed to be normally distributed. We assume that the variance σ² is known from experience. A supermarket draws a sample of n loaves and weighs them; the average weight x̄ (in kg) is calculated from this sample. Since the supermarket wants to be sure that the weights are, on average, not lower than 2 kg, a one-sided hypothesis is appropriate and is formulated as H₀: μ ≥ 2 kg versus H₁: μ < 2 kg. The significance level is specified as α, and therefore the critical value is −z_{1−α}. The test statistic is calculated as
Remark 10.3.1
The Gauss test assumes the variance to be known, which is often not the case in practice. The t-test (Sect. 10.3.2) does not require this assumption; instead, the variance is estimated from the data. The t-test is therefore commonly employed when testing hypotheses about the mean. Its usage is outlined below. In R, the command Gauss.test
from the library compositions offers an implementation of the Gauss test.
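A sketch of the Gauss test computed by hand in R; the sample size, sample mean, and standard deviation below are assumed illustrative values, not necessarily those of Example 10.3.1:
n     <- 20      # assumed number of loaves in the sample
x.bar <- 1.97    # assumed average weight (kg)
mu0   <- 2       # expected weight under the null hypothesis (kg)
sigma <- 0.3     # assumed known standard deviation (kg)
t.obs <- sqrt(n) * (x.bar - mu0) / sigma   # realized test statistic
t.obs < qnorm(0.05)                        # TRUE would mean: reject H0 (case (b))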
(a)
(b)
(c)
falls into the critical region. The critical regions are based on the appropriate
quantiles of the t-distribution with degrees of freedom, as outlined in
Table 10.2.
Table 10.2 Rules to make test decisions for the one-sample t-test (and the two-sample t-test, and the paired
t-test, both explained below)
Case                                          Reject H₀ if
(a)  H₀: μ = μ₀   vs.  H₁: μ ≠ μ₀             |t(x)| > t_{1−α/2; n−1}
(b)  H₀: μ ≥ μ₀   vs.  H₁: μ < μ₀             t(x) < −t_{1−α; n−1}
(c)  H₀: μ ≤ μ₀   vs.  H₁: μ > μ₀             t(x) > t_{1−α; n−1}
Example 10.3.2
We again consider Example 10.3.1. Now we assume that the variance of the loaves is unknown. Suppose a random sample of size n = 20 has an arithmetic mean x̄ and a sample variance s². We want to test whether this result contradicts the two-sided hypothesis H₀: μ = 2, that is case (a). The significance level is fixed at α = 0.05. For the realized test statistic t(x), we calculate
H₀ is not rejected since |t(x)| < t_{0.975;19} = 2.093, where the quantiles ±2.093 define the critical region (see Table C.2 or use R: qt(0.975,19)). The
same results can be obtained in R using the t.test() function, see Example
10.3.3 for more details. Or, we can directly calculate the (two-sided) p-value as
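A sketch of the p-value calculation; the realized test statistic below is an assumed value, since the numerical result of Example 10.3.2 is not repeated here:
t.obs <- 0.78                        # assumed realized test statistic
2 * (1 - pt(abs(t.obs), df = 19))    # two-sided p-value from the t-distribution with 19 df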
and
(10.1)
(10.2)
The test procedure is identical to the procedure of the one-sample Gauss test
introduced in Sect. 10.3.1; that is, the test decision is based on Table 10.1.
Case 2: The variances are unknown, but equal (two-sample t -test).
We denote the unknown variance of both distributions as (i.e. both the
populations are assumed to have variance ). We estimate by using the
pooled sample variance where each sample is assigned weights relative to the
sample size:
(10.3)
The test statistic
(10.4)
with S as in (10.3) follows a t-distribution with degrees of freedom
if is true. The realized test statistic is
(10.5)
The test procedure is identical to the procedure of the one-sample t-test; that
is, the test decision is based on Table 10.2.
Case 3: The variances are unknown and unequal (Welch test).
We test H₀: μ₁ = μ₂ versus H₁: μ₁ ≠ μ₂, given σ₁² ≠ σ₂², where both σ₁² and σ₂² are unknown. This problem is also known as the Behrens–Fisher problem. The Welch test, described here, is the most frequently used test for comparing two means in practice. The test
statistic can be written as
(10.6)
(10.7)
(10.8)
is identical to the procedure of the one-sample t-test; that is, the test decision
is based on Table 10.2 except that the degrees of freedom are not but v. If v
is not an integer, it can be rounded off to an integer value.
Example 10.3.3
A small bakery sells cookies in packages of 500 g. The cookies are handmade
and the packaging is either done by the baker himself or his wife. Some
customers conjecture that the wife is more generous than the baker. One
customer does an experiment: he buys packages of cookies packed by the baker
and his wife on 16 different days and weighs the packages. He gets the following
two samples (one for the baker, one for his wife).
Weight (wife) (X) 512 530 498 540 521 528 505 523
Weight (baker) (Y) 499 500 510 495 515 503 490 511
i.e. we only want to test whether the weights are different, not that the wife is
making heavier cookie packages. Since the variances are unknown, we assume that case 3 is the right choice. We calculate and obtain x̄ = 519.625, ȳ = 502.875, s_x² = 192.27, and s_y² = 73.55. The test statistic is
t(x, y) = (519.625 − 502.875) / √(192.27/8 + 73.55/8) ≈ 2.91.
The test statistic remains the same but the critical region and the degrees of
freedom change. Thus, H₀ is rejected if t(x, y) > t_{0.95; v}. Using this critical value
and t(x, y) = 2.91, it follows that the null hypothesis can be rejected. The mean
weight of the wife’s packages is greater than the mean weight of the baker’s
packages.
In R, we would have obtained the same result using the t.test command:
Note that we have to specify the alternative hypothesis under the option
alternative. The output shows us the test statistic (2.9058), the degrees of
freedom (11.672), the alternative hypothesis—but not the decision rule. We
know that is rejected if , so the decision is easy in this case: we
simply have to calculate using qt(0.95,12) in R. A simpler way to
arrive at the same decision is to use the p-value. We know that is rejected if
which is the case in this example. It is also worthwhile mentioning that R
displays the hypotheses slightly differently from ours: our alternative hypothesis
is which is identical to the statement , as shown by R, see also
Sect. 10.2.2.
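One way to obtain the output discussed above is the following call (the Welch test is used since var.equal = FALSE is the default):
wife  <- c(512, 530, 498, 540, 521, 528, 505, 523)
baker <- c(499, 500, 510, 495, 515, 503, 490, 511)
t.test(wife, baker, alternative = "greater")   # Welch test, one-sided alternative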
If we specify two.sided as an alternative (which is the default), a
confidence interval for the mean difference is also part of the output:
It can be seen that the confidence interval of the difference does not cover
the “0”. Therefore, the null hypothesis is rejected. This is the duality property
referred to earlier in this chapter: the test decision is the same, no matter whether
one evaluates (i) the confidence interval, (ii) the test statistic, or (iii) the p-value.
Any kind of t-test can be calculated with the t.test command: for
example, the two-sample t-test requires to specify the option
var.equal=TRUE while the Welch test is calculated when the (default) option
var.equal=FALSE is set. We can also conduct a one-sample t-test. Suppose
we are interested in whether the mean weight of the wife’s packages of cookies
is greater than 500 g; then, we could test the hypotheses:
(10.9)
is t-distributed with degrees of freedom. The sample mean is
and the sample variance is
Example 10.3.4
In an experiment, students have to solve different tasks before and after
drinking a cup of coffee. Let Y and X denote the random variables “number of
points before/after drinking a cup of coffee”. Assume that a higher number of
points means that the student is performing better. Since the test is repeated on
the same students, we have a paired sample. The data is given in the following
table:
i    yᵢ (before)    xᵢ (after)    dᵢ = xᵢ − yᵢ    (dᵢ − d̄)²
1 4 5 1 0
2 3 4 1 0
3 5 6 1 0
4 6 7 1 0
5 7 8 1 0
6 6 7 1 0
7 4 5 1 0
8 7 8 1 0
9 6 5 −1 4
10 2 5 3 4
Total 10 8
We calculate
We can make the test decision using the R output in three different ways:
(i) We compare the test statistic ( ) with the critical value (1.83,
obtained via qt(0.95,9)).
(iii) We evaluate whether the confidence interval for the mean difference
covers “0” or not.
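A sketch of the corresponding R call, assuming the one-sided alternative that the scores increase after drinking coffee:
before <- c(4, 3, 5, 6, 7, 6, 4, 7, 6, 2)
after  <- c(5, 4, 6, 7, 8, 7, 5, 8, 5, 5)
t.test(after, before, paired = TRUE, alternative = "greater")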
In the following, we describe two possible solutions, one exact approach and
an approximate solution. The approximate solution is based on the
approximation of the binomial distribution by the normal distribution, which is
appropriate if n is sufficiently large and the condition holds (i.e. p is
neither too small nor too large). First, we present the approximate solution and
then the exact one.
Test statistic and test decisions.
(10.11)
It holds approximately that , given that the conditions that (i)
n is sufficiently large and (ii) are satisfied. The test can then
be conducted along the lines of the Gauss test in Sect. 10.3.1; that is, the
test decision is based on Table 10.1.
Example 10.4.1
We return to Example 10.1.1. Let us assume that a representative sample of size n = 2000 has been drawn from the population of eligible voters, from which 700 (35 %) have voted for the party of interest P. The research hypothesis (which has to be stated as H₁) is that more than 30 % (i.e. p > 0.3) of the eligible voters cast their votes for party P. The sample is in favour of H₁ because p̂ = 0.35 > 0.3, but to draw conclusions for the proportion of voters of party P in the population, we have to conduct a binomial test. Since n is large and p̂ is neither too small nor too large, the assumptions for the use of the test statistic (10.11) are satisfied. We can write down the realized test statistic as
t(x) = (0.35 − 0.3)/√(0.3 · 0.7/2000) ≈ 4.88.
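This calculation can be sketched in R, assuming the usual form of the approximate test statistic with the hypothesized probability in the denominator:
p.hat <- 700 / 2000
p0    <- 0.3
t.obs <- (p.hat - p0) / sqrt(p0 * (1 - p0) / 2000)
t.obs                   # approximately 4.88
t.obs > qnorm(0.95)     # TRUE: H0 can be rejected at the 5 % level (case (c))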
(b) The exact binomial test can be constructed using the knowledge that under
, (i.e. the number of successes) follows a binomial
distribution. In fact, we can use Y directly as the test statistic:
The observed test statistic is . For the two-sided case (a), the two
critical numbers and ( ) which define the critical region, have to
be found such that
The null hypothesis is rejected if the test statistic, i.e. Y, is greater than or
equal to or less than or equal to . For the one-sided case, a critical
number c has to be found such that
for hypotheses of type (c). If Y is less than the critical value c (for case (b))
or greater than the critical value (for case (c)), the null hypothesis is
rejected.
Example 10.4.2
We consider again Example 10.1.1 and the sample of n = 2000 eligible voters, from which 700 (35 %) voted for the party of interest P. The observed test statistic is y = 700, and the alternative hypothesis is H₁: p > 0.3, as in case (c). There are at least two ways in which we can obtain
the results:
(i) Long way: We can calculate the test statistic and compare it to the critical
region. To get the critical region, we search c such that
which equates to
(ii) Short way: The above result can be easily obtained in R using the binom.
test() command. We need to specify the number of “successes” (here:
700), the number of “failures” , and the alternative
hypothesis:
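A sketch of this call, specifying the successes, the failures, the hypothesized probability, and the one-sided alternative:
binom.test(c(700, 1300), p = 0.3, alternative = "greater")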
The sums
Similar to the one-sample case, both exact and approximate tests exist. Here,
we only present the approximate test. The exact test of Fisher is presented in
Appendix C.5, p. 428. Let and denote the sample sizes. Then, and
are approximately normally distributed:
Their difference D
(10.13)
follows a N(0, 1)-distribution if and are sufficiently large and p is not
near the boundaries 0 and 1 (one could use, for example, again the condition
with ). The realized test statistic can be calculated using
the observed difference . The test can be conducted for the one-sided
and the two-sided case as the Gauss test introduced in Sect. 10.3.1; that is, the
decision rules from Table 10.1 can be applied.
Example 10.4.3
Two competing lotteries claim that every fourth lottery ticket wins. Suppose we
want to test whether the probabilities of winning are different for the two
lotteries, i.e. and . We have the following data
We can estimate the probabilities of a winning ticket for each lottery, as well
as the respective difference, as
The null hypothesis can be interpreted in the following way: the probability
that a randomly drawn observation from the first population has a value x that is
greater (or lower) than the value y of a randomly drawn subject from the second
population is . The alternative hypothesis is then
This means we are comparing the entire distribution of two variables. If there
is a location shift in the sense that one distribution is shifted left (or right)
compared with the other distribution, the null hypothesis will be rejected because
this shift can be seen as part of the alternative hypothesis . In
fact, under some assumptions, the hypothesis can even be interpreted as
comparing two medians, and this is what is often done in practice.
Observed test statistic.
To construct the test statistic, it is necessary to merge and
into one sorted sample, usually in ascending order, while keeping
the information which value belongs to which sample. For now, we assume that
all values of the two samples are distinct; that is, no ties are present. Then, each
observation has a rank between 1 and . Let be the sum of ranks of
the x-sample and let be the sum of ranks of the y-sample. The test statistic is
defined as U, where U is the minimum of the two values , ,
with
(10.14)
(10.15)
Test decision.
is rejected if . Here, is the critical value derived from the
distribution of U under the null hypothesis. The exact (complex) distribution
can, for example, be derived computationally (in R). We are presenting an
approximate solution together with its implementation in R.
Since , it is sufficient to compute only and
( or are chosen such that is calculated for the
sample with the lower sample size). For , one can use the approximation
(10.16)
Example 10.6.1
In a study, the reaction times (in seconds) to a stimulus were measured for two
groups. One group drank a strong coffee before the stimulus and the other group
drank only the same amount of water. There were 9 study participants in the
coffee group and 10 participants in the water group. The following reaction
times were recorded:
Reaction time 1 2 3 4 5 6 7 8 9 10
Coffee group (C) 3.7 4.9 5.2 6.3 7.4 4.4 5.3 1.7 2.9
Water group (W) 4.5 5.1 6.2 7.3 8.7 4.2 3.3 8.9 2.6 4.8
We test with the U-test whether there is a location difference between the
two groups. First, the ranks of the combined sample are calculated as:
1 2 3 4 5 6 7 8 9 10 Total
Value (C) 3.7 4.9 5.2 6.3 7.4 4.4 5.3 1.7 2.9
Rank (C) 5 10 12 15 17 7 13 1 3 83
Value (W) 4.5 5.1 6.2 7.3 8.7 4.2 3.3 8.9 2.6 4.8
Rank (W) 8 11 14 16 18 6 4 19 2 9 107
With R_C = 83 (coffee, n₁ = 9) and R_W = 107 (water, n₂ = 10), we obtain U_C = 9 · 10 + 9 · 10/2 − 83 = 52 and U_W = 9 · 10 + 10 · 11/2 − 107 = 38, and thus U = min(52, 38) = 38.
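In R, the same comparison can be carried out with wilcox.test, which implements the Mann–Whitney U-test; a minimal sketch:
coffee <- c(3.7, 4.9, 5.2, 6.3, 7.4, 4.4, 5.3, 1.7, 2.9)
water  <- c(4.5, 5.1, 6.2, 7.3, 8.7, 4.2, 3.3, 8.9, 2.6, 4.8)
wilcox.test(coffee, water)   # two-sided U-test for a location difference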
Example 10.7.1
Consider an experiment where a die is rolled n times. Under the null hypothesis H₀, we assume that the die is fair, i.e. pᵢ = 1/6 for each face i = 1, 2, …, 6. We could have also said that H₀ is the hypothesis that the rolls follow a discrete uniform distribution. Thus, the expected absolute frequencies under H₀ are npᵢ = n/6, while the observed frequencies in the sample are nᵢ. The nᵢ generally deviate from npᵢ. The χ²-statistic is based on the squared differences, (nᵢ − npᵢ)², and becomes large as the differences between the observed and the expected frequencies become larger. The χ²-test statistic is a modification of this sum obtained by scaling each squared difference by the expected frequency, npᵢ, and is explained below.
With a nominal variable, we can proceed as in Example 10.7.1. If the scale of the
variable is ordinal or continuous, the number of different values can be large.
Note that in the most extreme case, we can have as many different values as observations (n), leading to nᵢ = 1 for all i. Then, it is necessary to group the data into k intervals before applying the χ²-test. The reason is that the
general theory of the -test assumes that the number k (which was 6 in Example
10.7.1 above) is fixed and does not grow with the number of observations n; that
is, the theory says that the -test only works properly if k is fixed and n is large.
For this reason, we group the sample into k classes as shown
in Sect. 2.1.
Class                      1     2     …     k     Total
Number of observations    n₁    n₂    …    n_k      n
Test statistic.
The test statistic is defined as
(10.17)
Here,
Nᵢ (i = 1, 2, …, k) are the absolute frequencies of observations of the sample in class i; Nᵢ is a random variable with realization nᵢ in the observed sample;
pᵢ (i = 1, 2, …, k) are calculated from the distribution under H₀ and are the (hypothetical) probabilities that an observation of X falls in class i;
npᵢ are the expected absolute frequencies in class i under H₀.
Test decision.
For a significance level α, H₀ is rejected if t(x) is greater than the (1 − α)-quantile of the χ²-distribution with k − 1 − r degrees of freedom, where r denotes the number of parameters of the test distribution that are estimated from the data, i.e. if
Example 10.7.2
Let F₀(x) be the distribution function of the test distribution. If one specifies a fully specified normal distribution, or a discrete uniform distribution with fixed probabilities pᵢ, then r = 0, since no parameters have to be estimated from the data. Otherwise, if we simply want to test whether the data is generated from some normal distribution N(μ, σ²), then μ and σ² may be estimated from the sample by x̄ and s². Then, r = 2 and the number of degrees of freedom is reduced.
Example 10.7.3
Gregor Mendel (1822–1884) conducted crossing experiments with pea plants of
different shape and colour. Let us look at the outcome of a pea crossing
experiment with the following results:
Mendel had the hypothesis that the four different types occur in proportions of 9:3:3:1, that is
p₁ = 9/16,  p₂ = 3/16,  p₃ = 3/16,  p₄ = 1/16.
The hypotheses are H₀: the proportions are 9/16, 3/16, 3/16, 1/16 versus H₁: at least one proportion differs. The observed frequencies nᵢ and the expected frequencies npᵢ under H₀ are:
i    nᵢ     npᵢ
1    315    312.75
2    108    104.25
3    101    104.25
4     32     34.75
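The test can be reproduced in R with chisq.test; a minimal sketch:
observed <- c(315, 108, 101, 32)
chisq.test(observed, p = c(9, 3, 3, 1) / 16)
# test statistic approximately 0.47 on 3 degrees of freedom; H0 is not rejected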
                  Y
                  1      2      …      J      Total (row)
X       1        n₁₁    n₁₂     …     n₁J        n₁₊
        2        n₂₁    n₂₂     …     n₂J        n₂₊
        ⋮         ⋮      ⋮             ⋮          ⋮
Total (column)   n₊₁    n₊₂     …     n₊J          n
Remember that
nᵢ₊ is the ith row sum,
n₊ⱼ is the jth column sum, and
n is the total number of observations.
The hypotheses are H₀: X and Y are independent versus H₁: X and Y are not independent. If X and Y are independent, then the expected frequencies are
ñᵢⱼ = nᵢ₊ n₊ⱼ / n.    (10.18)
Test statistic.
Pearson’s -test statistic was introduced in Chap. 4, Eq. (4.6). It is
(10.19)
Test decision.
The number of degrees of freedom under is , where are
the parameters which have to be estimated for the marginal distribution of X, and
are the number of parameters for the marginal distribution of Y. The test
decision is:
Example 10.8.1
Consider the following contingency table. Here, X describes the educational
level (1: primary, 2: secondary, 3: tertiary) and Y the preference for a specific
political party (1: Party A, 2: Party B, 3: Party C). Our null hypothesis is that the
two variables are independent, and we want to show the alternative hypothesis
which says that there is a relationship between them.
Y Total
1 2 3
X 1 100 200 300 600
2 100 100 100 300
3 80 10 10 100
Total 280 310 410 1000
Y
1 2 3
X 1 168 186 246
2 84 93 123
3 28 31 41
Since t(x, y) ≈ 182.5 is much larger than the critical value (for α = 0.05, the 95 % quantile of the χ²-distribution with (3 − 1)(3 − 1) = 4 degrees of freedom is 9.49), H₀ is rejected.
In R, either the summarized data (as shown below) can be used to calculate
the test statistic or the raw data (summarized in a contingency table via
table(var1,var2)):
The output is
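One way to reproduce the analysis from the summarized data is the following sketch:
ct <- matrix(c(100, 200, 300,
               100, 100, 100,
                80,  10,  10), nrow = 3, byrow = TRUE)
chisq.test(ct)   # X-squared of about 182.5 on 4 degrees of freedom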
For a binary outcome, the χ²-test of independence can be formulated as a test for the null hypothesis that the proportions of the binary variable are equal in several (K) groups, i.e. for a K × 2 (or 2 × K) table. This test is called the χ²-test of homogeneity.
Example 10.8.2
Consider two variables X and Y, where X is describing the rating of a coffee
brand with the categories “bad taste” and “good taste” and Y denotes three age
subgroups, e.g. “18–25”, “25–35”, and “35–45”. The observed data is
Y
18–25 25–35 35–45 Total
X Bad 10 30 65 105
Good 90 70 35 195
Total 100 100 100 300
The results (test statistic, p-value) are identical and H₀ is rejected. Note that prop.test strictly expects a K × 2 table (i.e. exactly 2 columns).
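A sketch of the two equivalent R calls for this table (rows: age groups; columns: counts of "bad taste" and "good taste"):
tab <- matrix(c(10, 90,
                30, 70,
                65, 35), ncol = 2, byrow = TRUE)
prop.test(tab)    # test of equal proportions of "bad taste" in the three age groups
chisq.test(tab)   # yields the identical test statistic and p-value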
Remark 10.8.1
For -tables with small sample sizes and therefore small cell frequencies, it
is recommended to use the exact test of Fisher as described in Appendix C.5.
Remark 10.8.2
The test described in Example 10.8.2 is a special case (since one variable is binary) of the general χ²-test of homogeneity. The χ²-test of homogeneity is valid for any K × C table, where K is the number of subgroups of a variable Y and C is the number of values of the outcome X of interest. The null hypothesis assumes that the conditional distributions of X given Y are identical in all subgroups, i.e.
Exercise 10.2
A producer of chocolate bars hypothesizes that his production does not adhere to
the weight standard of 100 g. As a measure of quality control, he weighs 15 bars
and obtains the following results in grams:
96.40, 97.64, 98.48, 97.67, 100.11, 95.29, 99.80, 98.80, 100.53, 99.41, 97.64,
101.11, 93.43, 96.99, 97.92
It is assumed that the production process is standardized in the sense that the
variation is controlled to be .
(a) What are the hypotheses regarding the expected weight for a two-sided
test?
(c) Conduct the test that was suggested to be used in (b). Use .
(d) The producer wants to show that the expected weight is smaller than 100 g.
What are the appropriate hypotheses to use?
(e) Conduct the test for the hypothesis in (d). Again use .
Exercise 10.3
Christian decides to purchase the new CD by Bruce Springsteen. His first
thought is to buy it online, via an online auction. He discovers that he can also
buy the CD immediately, without bidding at an auction, from the same online
store. He also looks at the price at an internet book store which was
recommended to him by a friend. He notes down the following prices (in €):
Internet book store 16.95
Online store, no auction 18.19, 16.98, 19.97, 16.98, 18.19, 15.99, 13.79,
15.90, 15.90, 15.90, 15.90, 15.90, 19.97, 17.72
Online store, auction 10.50, 12.00, 9.54, 10.55, 11.99, 9.30, 10.59, 10.50,
10.01, 11.89, 11.03, 9.52, 15.49, 11.02
(a) Calculate and interpret the arithmetic mean, variance, standard deviation,
and coefficient of variation for the online store, both for the auction and
non-auction offers.
(b) Test the hypothesis that the mean price at the online store (no auction) is
unequal to €16.95 ( ).
(c) Calculate a confidence interval for the mean price at the online store (no
auction) and interpret your findings in the light of the hypothesis in (b).
(d) Test the hypothesis that the mean price at the online store (auction) is less
than €16.95 ( ).
(e) Test the hypothesis that the mean non-auction price is higher than the mean
auction price. Assume that (i) the variances are equal in both samples and
(ii) the variances are unequal ( ).
(f) Test the hypothesis that the variance of the non-auction price is unequal to
the variance of the auction price ( ).
(g) Use the U-test to compare the location of the auction and non-auction
prices. Compare the results with those of (e).
Exercise 10.4
Ten of Leonard’s best friends try a new diet: the “Banting” diet. Each of them
weighs him/herself before and after the diet. The data is as follows:
Person (i) 1 2 3 4 5 6 7 8 9 10
Before diet ( ) 80 95 70 82 71 70 120 105 111 90
Exercise 10.5
A company producing clothing often finds deficient T-shirts among its
production.
(b) Test the same hypothesis as in (a) using the exact binomial test. You can
use R to determine the quantiles needed for the calculation.
(c) The company is offered a new cutting machine. To test whether the change
of machine helps to improve the production quality, 115 sample shirts are
evaluated, 7 of which have deficiencies. Use the two-sample binomial test
to decide whether the new machine yields improvement or not ( ).
(d) Test the same hypothesis as in (c) using the test of Fisher in R.
Exercise 10.6
Two friends play a computer game and each of them repeats the same level 10
times. The scores obtained are:
1 2 3 4 5 6 7 8 9 10
Player 1 91 101 112 99 108 88 99 105 111 104
Player 2 261 47 40 29 64 6 87 47 98 351
(a) Player 2 insists that he is the better player and suggests to compare their
mean performance. Use an appropriate test ( ) to test this
hypothesis.
(b) Player 1 insists that he is the better player. He proposes to not focus on the
mean and to use the U-test for comparison. What are the advantages and
disadvantages of using this test compared with (a)? What are the results (
)?
Exercise 10.7
Otto loves gummy bears and buys 10 packets at a factory store. He opens all
packets and sorts them by their colour. He counts 222 white gummy bears, 279
red gummy bears, 251 orange gummy bears, 232 yellow gummy bears, and 266
green ones. He is disappointed since white (pineapple flavour) is his favourite
flavour. He hypothesizes that the producer of the bears does not uniformly
distribute the bears into the packets. Choose an appropriate test to find out
whether Otto’s speculation could be true.
Exercise 10.8
We consider Exercise 4.4 where we evaluated which of the passengers from the
Titanic were rescued. The data was summarized as follows:
(a) The hypothesis derived from the descriptive analysis was that travel class
and rescue status are not independent. Test this hypothesis.
(c) Summarize the data in a 2 × 2 table: passengers from the first and second class should be grouped together, and third class passengers and staff
should be grouped together as well. Is the probability of being rescued
higher in the first and second class? Provide an answer using the following
three tests: exact test of Fisher, -independence test, and -homogeneity
test. You can use R to conduct the test of Fisher.
Exercise 10.9
We are interested in understanding how well the t-test can detect differences with
respect to the mean. We use R to draw 3 samples each of 20 observations from
three different normal distributions: , , and .
The summary statistics of this experiment are as follows:
, ,
, ,
, .
Exercise 10.10
Access the theatre data described in Appendix A.4. The data summarizes a
survey conducted on visitors of a local Swiss theatre in terms of age, sex, annual
income, general expenditure on cultural activities, expenditure on theatre visits,
and the estimated expenditure on theatre visits in the year before the survey was
done.
(a) Compare the mean expenditure on cultural activities for men and women
using the Welch test ( ).
(b) Would the conclusions change if the two-sample t-test or the U-test were
used for comparison?
(c) Test the hypothesis that women spend on average more money on theatre
visits than men ( ).
(d) Compare the mean expenditure on theatre visits in the year of the survey
and the preceding year ( ).
Exercise 10.11
Use R to read in and analyse the pizza data described in Appendix A.4 (assume
).
(a) The manager’s aim is to deliver pizzas in less than 30 min and with a temperature of greater than 65 °C. Use an appropriate test to evaluate whether these aims have been reached on average.
(b) If it takes longer than 40 min to deliver the pizza, then the customers are
promised a free bottle of wine. This offer is only profitable if less than
15 % of deliveries are too late. Test the hypothesis .
(c) The manager wonders whether there is any relationship between the operator taking the phone call and the pizza temperature. Assume that a hot pizza is defined to be one with a temperature greater than 65 °C. Use the test of Fisher, the χ²-independence test, and the χ²-test of homogeneity to test his hypothesis.
(d) Each branch employs the same number of staff. It would thus be desirable
if each branch receives the same number of orders. Use an appropriate test
to investigate this hypothesis.
(e) Is the proportion of calls taken by each operator the same in each branch?
Exercise 10.12
The authors of this book went to visit historical sites in India. None of them has
a particularly strong interest in photography, and they speculated that each of
them would take about the same number of pictures on their trip. After returning
home, they counted 110, 118, and 105 pictures, respectively. Use an appropriate
test to find out whether their speculation was correct ( ).
Source Toutenburg, H., Heumann, C., Induktive Statistik, 4th edition, 2007,
Springer, Heidelberg
11 Linear Regression
Examples
Examples of associations in which we might be interested are:
body height (X) and body weight (Y) of persons,
speed (X) and braking distance (Y) measured on cars,
money invested (in €) in the marketing of a product (X) and sales figures for
this product (in €) (Y) measured in various branches,
amount of fertilizer used (X) and yield of rice (Y) measured on different
acres, and
temperature (X) and hotel occupation (Y) measured in cities.
The method of least squares says that a line can be fitted to the given data
set such that the errors are minimized. This implies that one can determine and
such that the sum of the squared distances between the data points and the line
is minimized. For example, in Fig. 11.2, the first data point
does not lie on the plotted line and the deviation is . Similarly,
we obtain the difference of the other three data points from the line: . The
error is zero if the point lies exactly on the line. The problem we would like to
solve in this example is to minimize the sum of squares of , and , i.e.
(11.4)
We want the line to fit the data well. This can generally be achieved by
choosing and such that the squared errors of all the n observations are
minimized:
$\sum_{i=1}^{n} e_i^2 = \sum_{i=1}^{n} (y_i - \alpha - \beta x_i)^2 \rightarrow \min_{\alpha,\beta} \qquad (11.5)$
If we solve this optimization problem by the principle of maxima and
minima, we obtain estimates of and as
$\hat{\beta} = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}{\sum_{i=1}^{n}(x_i - \bar{x})^2} = \frac{S_{xy}}{S_{xx}}, \qquad \hat{\alpha} = \bar{y} - \hat{\beta}\,\bar{x} \qquad (11.6)$
see Appendix C.6 for a detailed derivation. Here, $\hat{\alpha}$ and $\hat{\beta}$ represent the
estimates of the parameters $\alpha$ and $\beta$, respectively, and are called the least
squares estimators of $\alpha$ and $\beta$, respectively. This gives us the model
which is called the fitted model or the fitted regression line . The literal meaning
of “regression” is to move back. Since we are acquiring the data and then
moving back to find the parameters of the model using the data, it is called a
regression model. The fitted regression line describes the postulated
relationship between Y and X. The sign of determines whether the relationship
between X and Y is positive or negative. If the sign of is positive, it indicates
that if X increases, then Y increases too. On the other hand, if the sign of is
negative, it indicates that if X increases, then Y decreases. For any given value of
X, say , the predicted value is calculated by
Example 11.2.1
A physiotherapist advises 12 of his patients, all of whom had the same knee
surgery done, to regularly perform a set of exercises. He asks them to record
how long they practise. He then summarizes the average time they practised (X,
time in minutes) and how long it takes them to regain their full range of motion
again (Y, time in days). The results are as follows:
i                            1   2   3   4   5   6   7   8   9  10  11  12
x_i (exercising time, min)  24  35  64  20  33  27  42  41  22  50  36  31
y_i (recovery time, days)   90  65  30  60  60  80  45  45  80  35  50  45
Using (11.6), we can now easily find the least squares estimates and as
The fitted regression line is therefore
We can draw the regression line onto a scatter plot using the command
abline, see also Fig. 11.3.
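As a minimal sketch, the fit and the plot of Fig. 11.3 might be reproduced with the data from the table above:

X <- c(24, 35, 64, 20, 33, 27, 42, 41, 22, 50, 36, 31)   # average exercising time (min)
Y <- c(90, 65, 30, 60, 60, 80, 45, 45, 80, 35, 50, 45)   # recovery time (days)
fit <- lm(Y ~ X)        # least squares fit of the simple linear model
coef(fit)               # estimated intercept and slope
plot(X, Y, xlab = "Exercising time (min)", ylab = "Recovery time (days)")
abline(fit)             # add the fitted regression line to the scatter plot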
Fig. 11.3 Scatter plot and regression line for Example 11.2.1
(ii) For the points , forming the regression line, we can write
(11.8)
(iii) It follows for $x = \bar{x}$ that $\hat{y} = \bar{y}$, i.e. the point $(\bar{x}, \bar{y})$ always lies on the
regression line. The linear regression line therefore always passes through
$(\bar{x}, \bar{y})$.
(iv) The sum of the residuals is zero. The ith residual is defined as
$\hat{e}_i = y_i - \hat{y}_i = y_i - (\hat{\alpha} + \hat{\beta} x_i) \qquad (11.9)$
(11.10)
(vi) The least squares estimate has a direct relationship with the correlation
coefficient of Bravais–Pearson:
$\hat{\beta} = r \cdot \frac{s_y}{s_x} \qquad (11.11)$
where $s_x$ and $s_y$ denote the empirical standard deviations of X and Y.
The slope of the regression line is therefore proportional to the correlation
coefficient r: a positive correlation coefficient implies a positive estimate
of and vice versa. However, a stronger correlation does not necessarily
imply a steeper slope in the regression analysis because the slope depends
upon as well.
(11.12)
(11.13)
It follows from the above definition that 0 ≤ R² ≤ 1. The closer R² is to 1, the
better the fit, because the residual sum of squares will then be small. The closer R² is
to 0, the worse the fit, because the residual sum of squares will then be large. If R²
takes any other value, say 0.7, it means that only 70 % of the variation in the data is
explained by the fitted model, and hence, in simple language, the model is 70 % good. An
important point to remember is that R² is defined only when there is an intercept
term in the model (an assumption we make throughout this chapter). So it is not
used to measure the goodness of fit in models without an intercept term.
Example 11.3.1
Consider again Fig. 11.1: In Fig. 11.1a, the line and data points fit well together.
As a consequence is high, . Figure 11.1b shows data points with
large deviations from the regression line; therefore, is small, here .
Similarly, in Fig. 11.4a, an of 0.97 relates to an almost perfect model fit,
whereas in Fig. 11.4b, the model describes the data only moderately well (
).
Example 11.3.2
Consider again Example 11.2.1 where we analysed the relationship between
exercise intensity and recovery time for patients convalescing from knee surgery.
To calculate , we need the following table:
We conclude that the regression model provides a reasonable but not perfect
fit to the data because 0.66 is not close to 0, but also not very close to 1. About
66 % of the variability in the data can be explained by the fitted linear regression
model. The rest is random error: for example, individual variation in the
recovery time of patients due to genetic and environmental factors, surgeon
performance, and others.
We can also obtain this result in R by looking at the summary of the linear
model:
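A sketch of how this summary might be obtained, assuming X and Y hold the data from Example 11.2.1:

fit <- lm(Y ~ X)
summary(fit)              # "Multiple R-squared" reports the coefficient of determination
summary(fit)$r.squared    # extracts the value directly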
$R^2 = r^2 \qquad (11.14)$
The proof is given in Appendix C.6.
Example 11.3.3
Consider again Examples 11.2.1 and 11.3.2 where we analysed the association of
exercising time and time to regain full range of motion after knee surgery. We
calculated R² ≈ 0.66. We therefore know that the correlation coefficient is
r = −√0.66 ≈ −0.81 (negative, because the estimated slope is negative).
Example 11.4.1
Recall Examples 11.2.1, 11.3.2, and 11.3.3 where we analysed the association of
exercising time and recovery time after knee surgery. We keep the values of Y
(recovery time, in days) and replace values of X (exercising time, in minutes)
with 0 for patients exercising for less than 40 min and with 1 for patients
exercising for 40 min or more. We have therefore a new variable X indicating
whether a patient is exercising a lot ( ) or not ( ). To estimate the linear
regression line , we first calculate and . To obtain
and , we need the following table:
We can now calculate the least squares estimates of and using (11.6) as
Fig. 11.5 Scatter plot and regression lines for Examples 11.4.1 and 11.5.1
is not a linear model because the right-hand side of the equation is not a linear
function in . However,
is a linear model. This model can be fitted as for any other linear model: we
obtain and as usual and simply use the squared values of X instead of X.
This can be justified by considering , and then, the model is written as
which is again a linear model. Only the interpretation changes:
for each unit increase in the squared value of X, i.e. increases by units.
Such an interpretation is often not even needed. One could simply plot the
regression line of Y on to visualize the functional form of the effect. This is
also highlighted in the following example.
Example 11.5.1
Recall Examples 11.2.1, 11.3.2, 11.3.3, and 11.4.1 where we analysed the
association of exercising time and recovery time after knee surgery. We
estimated as by using X as it is, as a continuous variable, see also
Fig. 11.3. When using a binary X, based on a cut-off of 40 min, we obtained
, see also Fig. 11.5. If we now use rather than X, we obtain
. This means for an increase of 1 unit of the square root of exercising
time, the recovery time decreases by 15.1 days. Such an interpretation will be
difficult to understand for many people. It is better to plot the linear regression
line , see Fig. 11.5. We can see that the new nonlinear line
(obtained from a linear model) fits the data nicely and it may even be preferable
to a straight regression line. Moreover, the value of substantially increased
with this modelling approach, highlighting that no exercising at all may severely
delay recovery from the surgery (which is biologically more meaningful). In R,
we obtain these results by either creating a new variable or by using the I()
command which allows specifying transformations in regression models.
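A sketch of the two equivalent specifications, with X and Y as in Example 11.2.1:

sqrtX <- sqrt(X)                # option 1: create a new variable first
summary(lm(Y ~ sqrtX))
summary(lm(Y ~ I(sqrt(X))))     # option 2: transform inside the formula via I()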
(11.15)
Note that the intercept term is denoted here by . In comparison with (11.2),
and .
Example 11.6.1
For the pizza delivery data, a particular linear model with multiple covariates
could be specified as follows:
(11.16)
The letters are written in bold because they refer to vectors and matrices
rather than scalars. The capital letter makes it clear that is a matrix of order
representing the n observations on each of the covariates .
Similarly, y is a vector of the n observations on the outcome, β is a vector of the
regression coefficients associated with the covariates, and e is a vector of n
errors. The lower case letter relates to a vector representing a variable, which
means we can denote the multiple linear model from now on also as
(11.18)
where is the vector of 1’s. We assume that and (see
Sect. 11.9 for more details).
We would like to highlight that is not the data matrix. The matrix is
called the design matrix and contains both a column of 1’s denoting the
presence of the intercept term and all explanatory variables which are relevant to
the multiple linear model (including possible transformations and interactions,
see Sects. 11.6.3 and 11.7.3). The errors reflect the deviations of the
observations from the regression line and therefore the difference between the
observed and fitted relationships. Such deviations may occur for various reasons.
For example, the measured values can be affected by variables which are not
included in the model, the variables may not be accurately measured, or there may
be unmeasured genetic variability in the study population; all of these deviations
are covered by the error term. The estimate of is obtained by using the least squares
principle by minimizing . The least squares estimate of is given
by
$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y} \qquad (11.19)$
The vector contains the estimates of . We can interpret it as
earlier: is the estimate of the intercept which we obtain because we have
added the column of 1’s (and is identical to in (11.1)). The estimates
refer to the regression coefficients associated with the variables
, respectively. The interpretation of is that it represents the partial
change in when the value of changes by one unit keeping all other
covariates fixed.
A possible interpretation of the intercept term is that when all covariates
equal zero then
(11.20)
There are many situations in real life for which there is no meaningful
interpretation of the intercept term because one or many covariates cannot take
the value zero. For instance, in the pizza data set, the bill can never be €0, and
thus, there is no need to describe the average delivery time for a given bill of €0.
The intercept term here serves the purpose of improvement of the model fit, but
it will not be interpreted.
In some situations, it may happen that the average value of y is zero when all
covariates are zero. Then, the intercept term is zero as well and does not improve
the model. For example, suppose the salary of a person depends on two factors
—education level and type of qualification. Even if a person is completely
illiterate, we observe in practice that their salary is never zero. In this case,
the intercept term is therefore beneficial. In another example, consider that the
velocity of a car depends on two variables—acceleration and quantity of petrol.
If these two variables take values of zero, the velocity is zero on a plane surface.
The intercept term will therefore be zero as well and yields no model
improvement.
Example 11.6.2
Consider the pizza data described in Appendix A.4. The data matrix is as
follows:
Suppose the manager has the hypothesis that the operator and the overall bill (as
a proxy for the amount ordered from the customer) influence the delivery time.
We can postulate a linear model to describe this relationship as
The model in matrix notation is as follows:
Examples
Region: East, West, South, North,
Marital status: single, married, divorced, widowed,
Day: Monday, Tuesday, …, Sunday.
(11.21)
The category for which we do not create a dummy variable is called the
reference category , and the interpretation of the parameters of the dummy
variables is with respect to this category. For example, for category i, the -
values are on average higher than for the reference category. The concept of
creating dummy variables and interpreting them is explained in the following
example.
Example 11.6.3
Consider again the pizza data set described in Appendix A.4. The manager may
hypothesize that the delivery times vary with respect to the branch. There are
3 branches: East, West, Centre. Instead of using the categorical variable branch, we
create two new dummy variables, x1 (East) and x2 (West). We
set x1 = 1 for those deliveries which come from the branch in the East and set
x1 = 0 for other deliveries. Similarly, we set x2 = 1 for those deliveries which
come from the West and x2 = 0 for other deliveries. The data then is as follows:
Deliveries which come from the East have x1 = 1 and x2 = 0, deliveries which
come from the West have x1 = 0 and x2 = 1, and deliveries from the Centre have
x1 = 0 and x2 = 0. The regression model of interest is thus
Consider now a covariate with three categories for which we create two new
dummy variables, x1 and x2. The linear model is y = β0 + β1 x1 + β2 x2 + e.
For the first category (x1 = 1, x2 = 0), we obtain β0 + β1;
for the second category (x1 = 0, x2 = 1), we obtain β0 + β2;
and
for the reference category (x1 = 0, x2 = 0), we obtain
β0.
Remark 11.6.1
There are other ways of recoding categorical variables, such as effect coding .
However, we do not describe them in this book.
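To illustrate the dummy coding, one might inspect the design matrix that R creates for a factor; the branch labels follow the pizza example, while the data values are made up:

branch <- factor(c("East", "West", "Centre", "East", "Centre"))
branch <- relevel(branch, ref = "Centre")    # declare the Centre as reference category
model.matrix(~ branch)                       # column of 1's plus two dummy variables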
11.6.3 Transformations
As we have already outlined in Sect. 11.5, if is transformed by a function
then
(11.22)
is still a linear model because the model is linear in its parameters . Popular
transformations are , among others. The choice of such a
function is typically motivated by the application and data at hand. Alternatively,
a relationship between and can be modelled via a polynomial as follows:
(11.23)
Example 11.6.4
Consider again Examples 11.2.1, 11.3.2, 11.3.3, 11.4.1, and 11.5.1 where we
analysed the association of intensity of exercising and recovery time after knee
surgery. The linear regression line was estimated as
One could question whether the association is indeed linear and fit the second-
and third-order polynomials:
Fig. 11.6 Scatter plot and regression lines for Example 11.6.4
We see that the regression based on the quadratic polynomial visually fits the
data slightly better than the linear polynomial. It seems as if the relation between
recovery time and exercise time is not exactly linear and the association is better
modelled through a second-order polynomial. The regression line based on the
cubic polynomial seems to be even closer to the measured points; however, the
functional form of the association looks questionable. Driven by a single data
point, we obtain a regression line which suggests a heavily increased recovery
time for exercising times greater than 65 min. While it is possible that too much
exercising causes damage to the knee and delays recovery, the data does not
seem to be clear enough to support the shape suggested by the model. This
example illustrates the trade-off between fit (i.e. how good the regression line
fits the data) and parsimony (i.e. how well a model can be understood if it
becomes more complex). Section 11.8 explores this non-trivial issue in more
detail.
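A sketch of how the three polynomial fits might be specified, with X and Y as before:

fit1 <- lm(Y ~ X)                        # linear
fit2 <- lm(Y ~ X + I(X^2))               # quadratic
fit3 <- lm(Y ~ X + I(X^2) + I(X^3))      # cubic
# the cubic model can equivalently be written as lm(Y ~ poly(X, 3, raw = TRUE))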
(11.24)
where is the vector of outcomes and is the design matrix
(including a column of 1’s for the intercept). The identity matrix consists of 1’s
on the diagonal and 0’s elsewhere, and the parameter vector is .
We would like to estimate to make conclusions about a relationship in the
population. Similarly, reflects the random errors in the population. Most
importantly, the linear model now contains assumptions about the errors. They
are assumed to be normally distributed, , which means that the
expectation of the errors is , the variance is (and therefore
the same for all ), and it follows from that for all
. The assumption of a normal distribution is required to construct
confidence intervals and hypothesis tests. More details about these
assumptions and the procedures to test their validity on the basis of a given
sample of data are explained in Sect. 11.9.
The least squares estimate of is obtained by
$\hat{\boldsymbol{\beta}} = (\mathbf{X}^{\top}\mathbf{X})^{-1}\mathbf{X}^{\top}\mathbf{y} \qquad (11.25)$
It can be shown that the estimator follows a normal distribution, with mean β and
covariance matrix as
$\hat{\boldsymbol{\beta}} \sim N\left(\boldsymbol{\beta},\; \sigma^2 (\mathbf{X}^{\top}\mathbf{X})^{-1}\right) \qquad (11.26)$
Note that is unbiased (since ); more details about (11.26) can be
found in Appendix C.6. An unbiased estimator of is
$\hat{\sigma}^2 = \frac{(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})^{\top}(\mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}})}{n - (p + 1)} \qquad (11.27)$
The errors are estimated from the data as $\hat{\mathbf{e}} = \mathbf{y} - \mathbf{X}\hat{\boldsymbol{\beta}}$ and are called residuals.
Before giving a detailed data example, we would like to outline how to draw
conclusions about the population of interest from the linear model. As we have
seen, both and are unknown in the model and are estimated from the data
using (11.25) and (11.27). These are our point estimates. We note that if ,
then , and then, the model will not contain the term . This means that
the covariate does not contribute to explaining the variation in . Testing the
hypothesis is therefore equivalent to testing whether is associated with
or not in the sense that it helps to explain the variations in or not. To test
whether the point estimate is different from zero, or whether the deviations of
estimates from zero can be explained by random variation, we have the
following options:
(11.28)
with where is the jth diagonal element of the matrix .
If the confidence interval does not overlap 0, then we can conclude that is
different from 0 and therefore is associated with and it is a relevant
variable. If the confidence interval includes 0, we cannot conclude that there
is an association between and .
3. The decisions we get from the confidence interval and the T-statistic from
points 1 and 2 are identical to those obtained by checking whether the p-
value (see also Sect. 10.2.6) is smaller than , in which case we also reject
the null hypothesis.
Example 11.7.1
Recall Examples 11.2.1, 11.3.2, 11.3.3, 11.4.1, 11.5.1, and 11.6.4 where we
analysed the association of exercising time and time of recovery from knee
surgery. We can use R to obtain a full inductive summary of the linear model:
The standard errors are listed under “Std. Error”. Using the respective quantile of
the t-distribution with n − 2 = 10 degrees of freedom, we can construct a 95 %
confidence interval for the coefficient of exercising time:
The interval does not include 0, and we therefore conclude that there is an
association between exercising time and recovery time. The random error
involved in the data is not sufficient to explain the deviation of
from 0 (given ).
We therefore reject the null hypothesis that . This can also be seen by
comparing the test statistic (listed under “t value” and obtained by
) with . Moreover,
. We can say that there is a significant association
between exercising and recovery time.
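A sketch of how the test statistic and the confidence interval can be obtained directly in R, with X and Y as before:

fit <- lm(Y ~ X)
summary(fit)                  # "t value" and "Pr(>|t|)" columns for each coefficient
confint(fit, level = 0.95)    # 95 % confidence intervals for intercept and slope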
Example 11.7.2
In this chapter, we have already explored the associations between branch,
operator, and bill with delivery time for the pizza data (Appendix A.4). If we fit
a multiple linear model including all of the three variables, we obtain the
following results:
By looking at the p-values in the last column, one can easily see (without
calculating the confidence intervals or evaluating the t-statistic) that there is a
significant association between the bill and delivery time; also, it is evident that
the average delivery time in the branch in the East is significantly different
(about 3 min less) from the central branch (which is the reference category here).
However, the estimated difference in delivery times for both the branches in the
West and the operator was not found to be significantly different from zero. We
conclude that some variables in the model are associated with delivery time,
while for others, we could not show the existence of such an association. The
last line of the output confirms this: the overall F-test has a test statistic of 97.87
which is larger than the corresponding F-quantile; therefore, the p-value is
smaller than 0.05 and the null hypothesis is rejected. The test suggests
that there is at least one variable which is associated with delivery time.
5. The least squares estimator of has the smallest variance among all linear
and unbiased estimators (best linear unbiased estimator) of .
We do not discuss the technical details of these properties here. It is more
important to know that the least squares estimator has good features and that is
why we choose it for fitting the model. Since we use a “good” estimator, we
expect that the model will also be “good”. One may ask whether it is possible to
use a different estimator. We have already made distributional assumptions in the
model: we require the errors to be normally distributed. Given this assumption, it
is indeed possible to apply the maximum likelihood principle to obtain estimates
for β and σ² in our set-up.
Theorem 11.7.1
For the linear model (11.24), the least squares estimator and the maximum
likelihood estimator for β are identical. However, the maximum likelihood
estimator of σ² is $\hat{\sigma}^2_{ML} = \frac{1}{n}\hat{\mathbf{e}}^{\top}\hat{\mathbf{e}}$, which is a biased estimator of σ², but it is
asymptotically unbiased.
How to obtain the maximum likelihood estimator for the linear model is
presented in Appendix C.6. The important message of Theorem 11.7.1 is that no
matter whether we apply the least squares principle or the maximum likelihood
principle, we always obtain ; this does not apply when estimating
the variance, but it is an obvious choice to go for the unbiased estimator (11.27)
in any given analysis.
11.7.2 The ANOVA Table
A table that is frequently shown by software packages when performing
regression analysis is the analysis of variance (ANOVA) table. This table can
have several meanings and interpretations and may look slightly different
depending on the context in which it is used. We focus here on its interpretation
i) as a way to summarize the effect of categorical variables and ii) as a
hypothesis test to compare k means. This is illustrated in the following example.
Example 11.7.3
Recall Example 11.6.3 where we established the relationship between branch
and delivery time as
We see that the average delivery time of the branches in the East and the
Centre (reference category) is different and that the average delivery time of the
branches in the West and the Centre is different (because ). This is useful
information, but it does not answer the question if branch as a whole influences
the delivery time. It seems that this is the case, but the hypothesis we may have
in mind may be
which corresponds to
in the context of the regression model. These are two identical hypotheses
because in the regression set-up, we are essentially comparing three conditional
means . The ANOVA table summarizes the
corresponding F-Test which tests this hypothesis:
We see that the null hypothesis of 3 equal means is rejected because p is
close to zero.
What does this table mean more generally? If we deal with linear regression with
one (possibly categorical) covariate, the table will be as follows:
Source of variation   df           Sum of squares    Mean squares                      F-statistic
Regression            p            SQ_Regression     MSR = SQ_Regression / p           MSR / MSE
Residual              n − p − 1    SQ_Residual       MSE = SQ_Residual / (n − p − 1)
The table summarizes the sum of squares regression and residuals (see
Sect. 11.3 for the definition), standardizes them by using the appropriate degrees
of freedom (df), and uses the corresponding ratio as the F-statistic. Note that in
the above example, this corresponds to the overall F-test introduced earlier. The
overall F-test tests the hypothesis that any is different from zero which is
identical to the hypothesis above. Thus, if we fit a linear regression model with
one variable, the ANOVA table will yield the same conclusions as the overall F-
test which we obtain through the main summary. However, if we consider a
multiple linear regression model, the ANOVA table may give us more
information.
Example 11.7.4
Suppose we are not only interested in the branch, but also in how the pizza
delivery times are affected by operator and driver. We may for example
hypothesize that the delivery time is identical for all drivers given they deliver
for the same branch and speak to the same operator. In a standard regression
output, we would get 4 coefficients for 5 drivers which would help us to
compare the average delivery time of each driver to the reference driver; it
would however not tell us if on an average, they all have the same delivery time.
The overall F-test would not help us either because it would test if any is
different from zero which includes coefficients from branch and operator, not
only driver. Using the anova command yields the results of the F-test of
interest:
We see that the null hypothesis of equal delivery times of the drivers is
rejected. We can also test other hypotheses with this output: for instance, the null
hypothesis of equal delivery times for each operator is not rejected because
.
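A sketch of this analysis; the variable names (time, branch, operator, driver) are assumed to match the columns of the pizza data set:

anova(lm(time ~ branch + operator + driver, data = pizza))
# one F-test per covariate; the driver row tests whether the drivers' mean delivery times are equal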
11.7.3 Interactions
It may be possible that the joint effect of some covariates affects the response.
For example, drug concentrations may have a different effect on men, women,
and children; a fertilizer could work differently in different geographical areas;
or a new education programme may show benefit only with certain teachers. It is
fairly simple to target such questions in linear regression analysis by using
interactions . Interactions are measured as the product of two (or more)
variables. If either one or both variables are categorical, then one simply builds
products for each dummy variable, thus creating new variables
when dealing with two categorical variables (with k and l categories
respectively). These product terms are called interactions and estimate how an
association of one variable differs with respect to the values of the other
variable. Now, we give examples for continuous–categorical, categorical–
categorical, and continuous–continuous interactions for two variables and .
Categorical–Continuous Interaction. Suppose one variable is
categorical with k categories, and the other variable is continuous. Then,
new variables have to be created, each consisting of the product of the
continuous variable and a dummy variable, . These
variables are added to the regression model in addition to the main effects
relating to and as follows:
It follows that for the reference category of , the effect of is just
(because each is zero since each is zero). However, the effect for all
other categories is where refers to . Therefore, the association
between and the outcome differs by between category j and the reference
category. Testing thus helps to identify whether there is an interaction
effect with respect to category j or not.
Example 11.7.5
Consider again the pizza data described in Appendix A.4. We may be interested
in whether the association of delivery time and temperature varies with respect
to branch. In addition to time and branch, we therefore need additional
interaction variables. Since there are 3 branches, we need two interaction
variables which are essentially the product of (i) time and branch “East” and (ii)
time and branch “West”. This can be achieved in R by using either the “*”
operator (which will create both the main and interaction effects) or the
“:” operator (which only creates the interaction term).
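A sketch of both specifications; the variable names are again assumed to match the pizza data set:

summary(lm(temperature ~ time * branch, data = pizza))                # main effects plus interaction
summary(lm(temperature ~ time + branch + time:branch, data = pizza))  # equivalent specification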
Fig. 11.7 Interaction of delivery time and branch in Example 11.7.5
The main effects of the model tell us that the temperature is almost 11
degrees higher for the eastern branch compared to the central branch (reference)
and about 1 degree higher for the western branch. However, only the former
difference is significantly different from 0 (since the p-value is smaller than
). Moreover, the longer the delivery time, the lower the temperature
(0.29 degrees for each minute). The parameter estimates related to the interaction
imply that this association differs by branch: the estimate is indeed
for the reference branch in the Centre, but the estimate for the branch in the East
is and the estimate for the branch in the West is
. However, the latter difference in the association of time
and temperature is not significantly different from zero. We therefore conclude
that the delivery time and pizza temperature are negatively associated and this is
more strongly pronounced in the eastern branch compared to the other two
branches. It is also possible to visualize this by means of a separate regression
line for each branch, see Fig. 11.7.
Centre: ,
East: ,
West: .
One can see that the pizzas delivered by the branch in the East are overall hotter,
but longer delivery times level this benefit off. One might speculate that the
delivery boxes from the eastern branch are not properly closed and therefore—
despite the overall good performance—the temperature falls more rapidly over
time for this branch.
Categorical–Categorical Interaction. For two categorical variables and
, with k and l categories, respectively, new dummy variables
need to be created as follows:
Example 11.7.6
Consider again the pizza data. If we have the hypothesis that the delivery time
depends on the operator (who receives the phone calls), but the effect is different
for different branches, then a regression model with branch (3 categories, 2
dummy variables), operator (2 categories, one dummy variable), and their
interaction (2 new variables) can be used.
The interaction terms can be interpreted as follows:
If we are interested in the operator, we see that the delivery time is on
average 0.21 min shorter for operator “Melissa”. When this operator deals
with a branch other than the reference (Centre), the estimate changes to
min longer delivery in the case of branch “East” and
min for branch “West”.
If we are interested in the branches, we observe that the delivery time is
shortest for the eastern branch which has on average a 5.66 min shorter
delivery time than the central branch. However, this is the estimate for the
reference operator only; if operator “Melissa” is in charge, then the
difference in delivery times for the two branches is only
min. The same applies when comparing the western branch with the central
branch: depending on the operator, the difference between these two
branches is estimated as either or min,
respectively.
The interaction terms are not significantly different from zero. We therefore
conclude that the hypothesis of different delivery times for the two
operators, possibly varying by branch, could not be confirmed.
Example 11.7.7
If we again consider the pizza data, with pizza temperature as an outcome, we
may wish to test for an interaction of bill and time.
The R output above reveals that there is a significant interaction effect. The
interpretation is more difficult here. It is clear that a longer delivery time and a
larger bill decrease the pizza’s temperature. However, for a large product of bill
and time (i.e., when both are large), these associations are less pronounced
because the negative coefficients become more and more outweighed by the
positive interaction term. On the contrary, for a small bill and short delivery
time, the temperature can be expected to be quite high.
(11.29)
where is obtained from the estimated covariance matrix via
Example 11.7.8
Recall Example 11.7.5 where we estimated the association between pizza
temperature, delivery time, and branch. There was a significant interaction effect
for time and branch. Using R, we obtain the covariance matrix as follows:
The point estimate for the association between time and temperature in the
eastern branch is (see Example 11.7.5). The standard error is
calculated as . The confidence interval is
therefore .
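A sketch of this calculation; the coefficient names depend on the factor coding and are assumptions here:

fit <- lm(temperature ~ time * branch, data = pizza)
V <- vcov(fit)                                    # estimated covariance matrix of the coefficients
se <- sqrt(V["time", "time"] +
           V["time:branchEast", "time:branchEast"] +
           2 * V["time", "time:branchEast"])      # standard error of the combined slope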
Remark 11.7.1
If there is more than one interaction term, then it is generally possible to test
whether there is an overall interaction effect, such as in the
case of four interaction variables. These tests belong to the general class of
“linear hypotheses”, and they are not explained in this book. It is also possible to
create interactions between more than two variables. However, the interpretation
then becomes difficult.
Example 11.8.1
In Fig. 11.6, the association of exercising time and recovery time after knee
surgery is modelled linearly, quadratically, and cubically; in Fig. 11.5, this
association is modelled by means of a square-root transformation. The model
summary in R returns both (under “Multiple R-squared”) and the adjusted
(under “adjusted R-squared”). The results are as follows:
It can be seen that is larger for the models with more variables; i.e. the
cubic model (which includes three variables) has the largest . The adjusted ,
which takes the different model sizes into account, favours the model with the
square-root transformation. This model therefore provides the best fit to the data
among the four models considered, according to the adjusted R².
(11.31)
The smaller the AIC, the better the model. The AIC takes not only the fit to the
data via into account but also the parsimony of the model via the term
. It is in this sense a more mature criterion than which considers only
the fit to the data via . Akaike’s Information Criterion can be calculated in
R via the extractAIC() command. There are also other commands, but the
results differ slightly because the different formulae use different constant terms.
However, no matter what formula is used, the results regarding the model choice
are always the same.
Example 11.8.2
Consider again Example 11.8.1, where the adjusted R² preferred the model which includes
exercising time via a square-root transformation over the other options. The AIC
value for the model where exercise time is modelled linearly is 60.59, when
modelled quadratically 61.84, when modelled cubically 62.4, and 60.19 for a
square-root transformation. Thus, in line with , the AIC also prefers the
square-root transformation.
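A sketch of how these values can be computed, with X and Y as before:

extractAIC(lm(Y ~ X))[2]                       # linear
extractAIC(lm(Y ~ X + I(X^2)))[2]              # quadratic
extractAIC(lm(Y ~ X + I(X^2) + I(X^3)))[2]     # cubic
extractAIC(lm(Y ~ I(sqrt(X))))[2]              # square-root transformation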
Backward selection. Two models, which differ only by one variable , can be
compared by simply looking at the test result for : if the null hypothesis is
rejected, the variable is kept in the model; otherwise, the other model is chosen.
If there are more than two models, then it is better to consider a systematic
approach to comparing them. For example, suppose we have 10 potentially
relevant variables and we are not sure which of them to include in the final
model. There are 2^10 = 1024 possible different combinations of variables and in
turn just as many choices of models! The inclusion or deletion of variables can be
done systematically with various procedures, for example with backward
selection (also known as backward elimination) as follows:
1. Start with the full model which contains all variables of interest,
.
Example 11.8.3
Consider the pizza data: if delivery time is the outcome, and branch, bill,
operator, driver, temperature, and number of ordered pizzas are potential
covariates, we may decide to include only the relevant variables in a final model.
Using the stepAIC function from the MASS library allows us to implement
backward selection in R.
At the first step, the R output states that the AIC of the full model is 4277.56.
Then, the AIC values are displayed when variables are removed: 4275.9 if
operator is removed, 4279.2 if driver is removed, and so on. One can see that the
AIC is minimized if operator is excluded.
Now R fits the model without operator. The AIC is 4275.88. Excluding
further variables does not improve the model with respect to the AIC.
We therefore conclude that the final “best” model includes all variables
considered, except operator. Using stepwise selection (with option both instead
of back) yields the same results. We could now interpret the summary of the
chosen model.
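A sketch of the selection; the covariate names (including pizzas for the number of ordered pizzas) are assumptions:

library(MASS)
full <- lm(time ~ branch + bill + operator + driver + temperature + pizzas,
           data = pizza)
stepAIC(full, direction = "backward")    # direction = "both" yields stepwise selection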
Small deviations from the normality assumption are not a major problem as
the least squares estimator of remains unbiased. However, confidence intervals
and hypothesis tests rely on this assumption; particularly for small sample
sizes and/or stronger violations of the normality assumption, these intervals and
test conclusions are no longer valid. In some cases, these
problems can be fixed by transforming .
Heteroscedasticity. If errors have a constant variance, we say that the errors
are homoscedastic and we call this phenomenon homoscedasticity . When the
variance of errors is not constant, we say that the errors are heteroscedastic. This
phenomenon is also known as heteroscedasticity . If the variance depends on
i, then the variability of the will be different for different groups of
observations. For example, the daily expenditure on food ( ) may vary more
among persons with a high income ( ), so fitting a linear model yields stronger
variability of the among higher income groups. Plotting the fitted values (or
alternatively ) against the standardized residuals (or a transformation thereof)
can help to detect whether or not problems exist. If there is no pattern in the plot
(random plot) , then there is likely no violation of the assumption (see
Fig. 11.10a). However, if there is a systematic trend, i.e. higher/lower variability
for higher/lower , then this indicates heteroscedasticity (Fig. 11.10b, trumpet
plot ). The consequences are again that confidence intervals and tests may no
longer be correct; again, in some situations, a transformation of can help.
Example 11.9.1
Recall Example 11.8.3 where we identified a good model to describe the
delivery time for the pizza data. Branch, bill, temperature, number of pizzas
ordered, and driver were found to be associated with delivery time. To explore
whether the normality assumption is fulfilled for this model, we can create a
histogram for the standardized residuals (using the R function rstandard()).
A QQ-plot is contained in the various model diagnostic plots of the plot()
function.
Plotting the fitted values against the square root of the absolute values of
the standardized residuals ( , used by R for stability reasons) yields a plot
with no visible structure (see Fig. 11.10a). There is no indication of
heteroscedasticity.
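A sketch of these diagnostic plots, with fit denoting the fitted model from Example 11.8.3:

hist(rstandard(fit))                           # histogram of the standardized residuals
plot(fit, which = 2)                           # QQ-plot of the residuals
plot(fitted(fit), sqrt(abs(rstandard(fit))))   # fitted values against transformed residuals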
(a) The research hypothesis is that a high BMI relates to a high blood pressure.
Estimate the linear model where blood pressure is the outcome and BMI is
the covariate. Interpret the coefficients.
Exercise 11.2
A psychologist speculates that spending a lot of time on the internet has a
negative effect on children’s sleep. Consider the following data on hours of deep
sleep (Y) and hours spent on the internet (X), where x_i and y_i are the observations
on internet time and deep sleep time of child i, respectively:
Child i 1 2 3 4 5 6 7 8 9
Internet time (in h) 0.3 2.2 0.5 0.7 1.0 1.8 3.0 0.2 2.3
Sleep time (in h) 5.8 4.4 6.5 5.8 5.6 5.0 4.8 6.0 6.1
(a) Estimate the linear regression model for the given data and interpret the
coefficients.
(c) Reproduce the results of a) and b) in R and plot the regression line.
(d) Now assume that we only distinguish between spending more than 1 hour
on the internet ( ) and spending less than (or equal to) one hour on the
internet ( ). Estimate the linear model again and compare the results.
How can now be interpreted? Describe how changes if those who
spend more than one hour on the internet are coded as 0 and the others as 1.
Exercise 11.3
Consider the following data on weight and height of 17 female students:
Student i 1 2 3 4 5 6 7 8 9
Weight 68 58 53 60 59 60 55 62 58
Height 174 164 164 165 170 168 167 166 160
Student i 10 11 12 13 14 15 16 17
Weight y 53 53 50 64 77 60 63 69
Height x 160 163 157 168 179 170 168 170
(b) Now estimate and interpret the linear regression model where “weight” is
the outcome.
(d) Now produce a scatter plot of the data (manually or by using R) and
interpret it.
(e) Add the following two points to the scatter plot and
. Speculate how the linear regression estimate will
change after adding these points.
(g) Given the results of the two regression models: What are the general
implications with respect to the least squares estimator of ?
Exercise 11.4
To study the association of the monthly average temperature (in °C, X) and hotel
occupation (in ), we consider data from three cities: Polenca (Mallorca,
Spain) as a summer holiday destination, Davos (Switzerland) as a winter skiing
destination, and Basel (Switzerland) as a business destination.
(a) Interpret the following regression model output where the outcome is
“hotel occupation” and “temperature” is the covariate.
(b) Interpret the following output where “city” is treated as a covariate and
“hotel occupation” is the outcome.
(c) Interpret the following output and compare it with the output from b):
(d) In the following multiple linear regression model, both “city” and
“temperature” are treated as covariates. How can the coefficients be
interpreted?
(e) Now consider the regression model for hotel occupation and temperature
fitted separately for each city: How can the results be interpreted and what
are the implications with respect to the models estimated in (a)–(d)? How
can the models be improved?
(f) Describe what the design matrix will look like if city, temperature, and the
interaction between them are included in a regression model.
(g) If the model described in (f) is fitted, the output is as follows:
Exercise 11.5
The theatre data (see Appendix A.4) describes the monthly expenditure on
theatre visits of 699 residents of a Swiss city. It captures not only the expenditure
on theatre visits (in SFR) but also age, gender, yearly income (in 1000 SFR), and
expenditure on cultural activities in general as well as expenditure on theatre
visits in the preceding year.
(a) The summary of the multiple linear model where expenditure on theatre
visits is the outcome is as follows:
How can the missing values [1] and [2] be calculated?
(c) Given the diagnostics in (b), how can the model be improved? Plot a
histogram of theatre expenditure in R if you need further insight.
(e) Judge the quality of the model from d) by means of Figs. 11.12a and
11.12b. What do they look like compared with those from b)?
Fig. 11.11 Checking the model assumptions
Exercise 11.6
Consider the pizza delivery data described in Appendix A.4.
(a) Read the data into R. Fit a multiple linear regression model with delivery
time as the outcome and temperature, branch, day, operator, driver, bill,
number of ordered pizzas, and discount customer as covariates. Give a
summary of the coefficients.
(d) Now use R to estimate both and . Compare the results with the
model output from a).
(e) Use backward selection by means of the stepAIC function from the
library MASS to find the best model according to AIC.
(f) Obtain from the model identified in e) and compare it to the full model
from a).
(h) Are all variables from the model in (e) causing the delivery time to be
either delayed or improved?
(j) Use the model identified in (e) to predict the delivery time of the last
captured delivery (i.e. number 1266). Use the predict() command to
ease the calculation of the prediction.
Source Toutenburg, H., Heumann, C., Deskriptive Statistik, 7th edition, 2009,
Springer, Heidelberg
Appendix A: Introduction to R
Background
The open-source software R was developed as a free implementation of the
language S which was designed as a language for statistical computation,
statistical programming, and graphics. The main intention was to allow users to
explore data in an easy and interactive way, supported by meaningful graphical
representations. The statistical software R was originally created by Ross Ihaka
and Robert Gentleman (University of Auckland, New Zealand).
Fig. A.1 Screenshot of R Studio with the command window ( top left ), the output window (“console”, i.e.
R itself, bottom left ), and a plot window ( right ). Other windows (e.g. about the environment or package
updates) are closed
and
yield
and
respectively.
Basic data structures are vectors, matrices, arrays, lists , and data frames
. They can contain numeric values, logical values or even characters
(strings). In the latter case, arithmetic operations are not allowed.
– A numeric vector of length 5 can be constructed by the command
creates a matrix, where the data values are the natural numbers
which are stored row-wise in the matrix,
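The commands might look as follows; the particular values and dimensions are illustrative:

x <- c(2, 4, 6, 8, 10)                                # numeric vector of length 5
M <- matrix(1:12, nrow = 3, ncol = 4, byrow = TRUE)   # natural numbers 1 to 12, stored row-wise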
The factor command is very useful to store nominal variables, and the
command ordered is ideal for ordinal variables. Both commands are
extremely important since factor variables with more than two categories
are automatically expanded into several columns of dummy variables if
necessary, e.g. if they are included as covariates in a linear model. In the
previous paragraph, two factor variables have already been created. This
can be confirmed by typing
which return the value TRUE . Have a look at the following two factor
variables:
Please note that by default alphabetical order is used to order the categories
(e.g. female is coded as 1 and male as 2). However, the mapping of integers
to strings can be controlled by the user as seen for the variable “grade”:
returns
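A small sketch of such factor variables; the names and labels are illustrative:

sex   <- factor(c("male", "female", "female", "male"))   # "female" coded as 1 by default (alphabetical order)
grade <- ordered(c(2, 1, 3, 2), levels = c(1, 2, 3),
                 labels = c("low", "medium", "high"))    # user-defined mapping of integers to labels
is.factor(sex)      # TRUE
levels(grade)       # "low" "medium" "high"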
Basic arithmetic operations can be applied directly to a numeric vector.
Basic operations are addition +, subtraction −, multiplication * and
division /, integer division %/%, modulo operation %%, and
exponentiation with two possible notations: ^ or **. Examples are given as:
is expanded to
The last example shows that R gives a warning if the length of the shorter
vector cannot be expanded to the length of the longer vector by a simple
multiplication with a natural number ( ). Here
is expanded to
are “recycled”.
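A few illustrative examples:

x <- c(1, 2, 3, 4, 5, 6)
x + 10              # element-wise addition
x^2                 # exponentiation; x**2 is equivalent
x %/% 4             # integer division
x %% 4              # modulo operation
x * c(1, 2)         # the shorter vector is recycled to length 6
x * c(1, 2, 3, 4)   # warning: 4 does not divide 6, so recycling is incomplete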
More on indexing
The standard ways of accessing/indexing elements in a vector, matrix, list, or
data frame have already been introduced above, but R allows a lot more flexible
accessing of elements.
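Some illustrative examples of such indexing:

x <- c(5, 7, 9, 11)
x[2]                # second element
x[c(1, 3)]          # first and third element
x[-1]               # all elements except the first
M <- matrix(1:6, nrow = 2)
M[1, ]              # first row
M[, 2]              # second column
L <- list(a = x, b = M)
L[["a"]]            # list element by name; L$a is equivalent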
Standard Functions
Some standard functions and their roles in R are
All functions can again be applied directly to numeric vectors.
Statistical Functions
Some statistical functions and their roles in R are
Note : the arguments of the functions vary depending on the chosen method. For
example, the mean() function can be applied to general R objects where
averaging makes sense (numeric or logical vectors, but also, e.g. matrices). The
functions var(), cov(), cor() expect one or two numeric vectors,
matrices, or data frames. Minimum and maximum functions work also with a
comma-separated list of values, i.e.
Examples:
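For instance, with an illustrative vector of four values:

x <- c(4, 6, 8, 10)
mean(x); median(x)
var(x)                      # uses the factor 1/(n-1), here 1/3
sd(x); min(x); max(x)
quantile(x, probs = 0.25)   # first quartile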
Note that var(), cov() use the factor 1/(n − 1) for the unbiased estimate of
the variance instead of 1/n for the empirical variance, i.e. 1/3 in the example
above. Both functions can also be applied to several vectors or a matrix. Then
the covariance matrix (and correlation matrix in case of cor() ) is computed.
For example, consider two variables
Then both commands return the symmetric covariance matrix (with variances as
the diagonal entries and covariances as the non-diagonal entries).
The (Pearson) correlation between the two variables is calculated as 0.9954293.
The Spearman rank correlation is perfectly 1, since both vectors are in increasing
order:
Factorial:
returns 5! as
Binomial coefficient :
returns as
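For instance:

factorial(5)     # 5! = 120
choose(10, 3)    # binomial coefficient "10 over 3" = 120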
Mathematical Constants
The number is a mathematical constant, the ratio of a circle’s circumference to
its diameter, and is commonly approximated as 3.14159. It is directly available
in R as pi .
returns
Statistical Functions
Now we consider some basic statistical functions in R . For illustration, we use
the painters data in the following example. This data is available after
loading the library MASS (only a subset is shown below). The data lists the
subjective assessment, on a 0 to 20 integer scale, of 54 classical painters. The
painters were assessed on four characteristics: composition, drawing, colour, and
expression. The data is due to the eighteenth-century art critic, de Piles. Use
?painters for more information on the data set.
shows
yields
returns
Note that the explained structure is an important one: we access the rows and
columns of a matrix or data set by using the [rows,columns] argument.
Here we access all rows for which the variable “school” is “F”. If, in addition,
we also want to restrict the data set to the first two variables, we can write:
Similarly,
i.e. those painters with a drawing score between 6 and 9 (= any number
which matches 6, or 7, or 8, or 9).
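A sketch of these selections, using the painters data from the MASS library:

library(MASS)
painters[painters$School == "F", ]        # all painters from school "F"
painters[painters$School == "F", 1:2]     # restricted to the first two variables
subset(painters, Drawing %in% 6:9)        # drawing score between 6 and 9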
returns
returns
See also the command order for showing the order of vector elements.
Calculating ranks:
Removing duplicates:
returns
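Illustrative examples:

x <- c(10, 7, 7, 12, 5)
sort(x)       # values in increasing order
order(x)      # positions of the elements in increasing order
rank(x)       # ranks; ties are averaged by default
unique(x)     # duplicates removed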
Random Variables
R has built-in functions for several probability density/mass functions
(PMF/PDF), (probability) distribution function (i.e. the CDF), quantile
functions and for generating random numbers.
The function names use the following scheme:
Examples:
If all three commands are executed, then the sequence is (using the standard
random generator)
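A sketch of the scheme, illustrated for the normal distribution:

dnorm(0)          # density (PDF) of N(0, 1) at 0
pnorm(1.96)       # CDF of N(0, 1) at 1.96, approximately 0.975
qnorm(0.975)      # 97.5 % quantile, approximately 1.96
set.seed(1)       # fix the random number generator for reproducibility
rnorm(3)          # three random numbers drawn from N(0, 1)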
Function Distribution
beta Beta
binom Binomial
cauchy Cauchy
exp Exponential
gamma Gamma
geom Geometric
hyper Hypergeometric
lnorm Log–normal
norm Normal
pois Poisson
unif Uniform
mvnorm Multivariate normal (in package mvtnorm )
Test distributions
Function Distribution
chisq Chi-squared
f F
signrank Wilcoxon signed rank (1 sample)
t t
wilcox Wilcoxon rank sum (2 samples)
For convenience, we list a few important PDF and CDF values in Appendix C.
Decathlon Data
This data ( decathlon.csv , see also Table A.2 ) describes the results of the
decathlon competition during the 2004 Olympic Games in Athens. The
performance of all 30 athletes in the 100 m race (in seconds), long jump (in
metres), shot-put (in metres), high jump (in metres), 400 m race (in seconds),
110 m hurdles race (in seconds), discus competition (in metres), pole vault (in
metres), javelin competition (in metres), and 1500 m race (in seconds) are
recorded in the data set.
Table A.2 First few rows of the decathlon data from the 2004 Olympic Games in Athens
Theatre Data
This data ( theatre.csv , see also Table A.3 ) summarizes a survey
conducted on 699 participants in a big Swiss city. The survey participants are all
frequent visitors to a local theatre and were asked about their age, sex (gender,
female = 1), annual income (in 1000 SFR), general expenditure on cultural
activities (“Culture”, in SFR per month), expenditure on theatre visits (in SFR
per month), and their estimated expenditure on theatre visits in the year before
the survey was done (in SFR per month).
Table A.3 First few rows of the theatre data
Appendix B: Solutions to Exercises
Solutions to Chapter 1
Solution to Exercise 1.1
(a) The population consists of all employees of the airline. This may include
administration staff, pilots, stewards, cleaning personnel, and others. Each
single employee relates to an observation in the survey.
(b) The population comprises all students who take part in the examination.
Each student represents an observation.
(c) All people suffering high blood pressure in the study area (city, province,
country, …) are the population of interest. Each of these persons is an
observation.
(b) Typically the level of a computer game is measured on an ordinal scale: for
example, level 10 may be more difficult than level 5, but this does not
imply that level 10 is twice as difficult as level 5, or that the difference in
difficulty between levels 2 and 3 is the same as the difference between
levels 10 and 11.
(d) This variable is measured on a continuous scale (ratio scale). Typically, the
age is captured in years starting from the day of birth.
(g) The scale of ID numbers is nominal. The ID number may indeed consist of
numbers; however, “112233” does not refer to something half as
much/good as “224466”. The number is descriptive.
(h) The final rank is measured on an ordinal scale. The ranks can be clearly
ordered, and the participants can be ranked by using their final results.
However, the first winner may not have “double” the beauty of the second
winner; it is merely a ranking.
(a) The data is provided in .csv format. We thus read it in with the
read.csv() command (after we have set a working directory with
setwd() ):
(b) The data can be viewed by means of the fix() or View() command or
simply being printed:
(c) We can access the data, as for any matrix, by using squared brackets [, ],
see also Appendix A.1. The first entry in the brackets refers to the row and
the second entry to the columns. Each entry either is empty (referring to
every row/column) or consists of a vector or sequence describing the
columns/rows we want to select. This means that we obtain the first 5 rows
and variables via pizza[1:5,1:5] . If we give the new data the name
“pizza2” we simply need to type:
We can save this new data either as a .dat file (with write.table() ),
or as a .csv file (with write.csv() ), or directly as an R data file (with
save() ) which gives us access to our entire R session.
(d) We can access any variable by means of the $ sign. If we type pizza$new
we create a new variable in the pizza data set called “new”. Therefore, a
simple way to add a variable to the data set is as follows:
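A sketch of these steps; the working directory, the file name, and the new variable are purely illustrative:

setwd("C:/data")                         # illustrative working directory
pizza <- read.csv("pizza.csv")           # file name assumed
pizza2 <- pizza[1:5, 1:5]                # first 5 rows and first 5 variables
write.csv(pizza2, file = "pizza2.csv")   # save the subset as a .csv file
pizza$new <- 1:nrow(pizza)               # add a new (illustrative) variable via the $ sign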
(e)
(f) We can apply all these commands onto the object “pizza”. The command
str(pizza) gives us an overview of the data set, see also Fig. B.1 . The
output shows that the data set contains 1266 observations (deliveries) and
13 variables. We can also see which of these variables are factors
(categorical with defined categories) and which are numerical. We also see
the first actual numbers for each variable and the coding scheme used
for the categories of the factor variables. The command dim summarizes
the dimension of the data, i.e. the number of rows and columns of the data
matrix. The command colnames gives us the names of the variables from the data set,
and so does names . The commands nrow and ncol give us the number
of rows and columns, respectively. Applying head and tail to the data
prints the first and last rows of the data, respectively.
Fig. B.1 Applying str() to the pizza data
(b) There are different options to ask for parents’ attitudes: of course one could
simply ask “what do you think of immunization?”; however, capturing long
answers in a variable “attitude” may make it difficult to summarize and
distil the information obtained. A common way to deal with such variables
is to translate a concept into a score: for example, one could ask 5
“yes/no”-type questions (instead of one general question) which relate to
attitudes towards immunization, such as “do you think immunization may
be harmful for your child?” or “do you agree that it is a priority to
immunize infants in their first year of life?” The number of answers that
show a positive attitude towards immunization can be summed up. If there
are 5 questions, there are up to 5 points “to earn”. Thus, each parent may
be asked 5 questions and his/her attitude can be summarized on a scale
ranging from 0 to 5, depending on the answers given.
(a) The table shows the relative frequencies of each party and not the absolute
frequencies. We can thus draw a bar chart where the relative frequencies of
votes are plotted on the y -axis and different parties are plotted on the x -
axis. In R , we can first type in the data by defining two vectors and then
use the “barplot” command to draw the bar chart (Fig. B.2 a). Typing “?
barplot” and “?par” shows the graphical options to improve the
presentation and look of the graph:
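A sketch of such a bar chart; the vote shares below are placeholders, not the actual election results:

parties <- c("ANC", "DA", "Other")
votes   <- c(0.62, 0.22, 0.16)    # placeholder relative frequencies
barplot(votes, names.arg = parties, ylab = "Proportion of votes")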
(b) There are several options to compare the results. Of course, one can simply
plot the two bar charts with each bar chart representing one election. It
would be important for this solution to ensure that the y -axes are identical
in both the plots. However, if we want to compare the election results in
one graph then we can plot the difference between the two elections, i.e.
the win/loss per party. The bar chart for the new variable “difference in
proportion of votes between the two elections” is shown in Fig. B.2 and is
obtained as follows:
Fig. B.2 Bar charts for national elections in South Africa
Remark
Another solution would be to create subcategories on the x -axis: for example,
there would be categories such as “ANC 2009 results” and “ANC 2014 results”,
followed by “DA 2009 results” and “DA 2014 results”, and so on.
(a) The scale of X is continuous. However, please note that the number of
values X can practically take is limited (90 min plus extra time, recorded in
1 min intervals).
Table B.1 Frequency table and other information for the variable “time until first goal”
j F(x)
1 [0, 15) 19 15
2 [15, 30) 17 15
3 [30, 45) 6 15
4 [45, 60) 5 15
5 [60, 75) 4 15
6 [75, 90) 2 15
7 [90, 96) 2 6 1
Total 56 1
Fig. B.4 Empirical cumulative distribution function for the variable “time until first goal”
(c) We need to obtain the heights for each category to obtain the histogram using
, see Table B.1 .
(d) We obtain the histogram and kernel density plot in R (Fig. B.3 a) using the
commands
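A sketch of these commands, where goals is assumed to denote the vector of times until the first goal:

hist(goals, breaks = c(0, 15, 30, 45, 60, 75, 90, 96), right = FALSE, freq = FALSE)
lines(density(goals))    # kernel density estimate added to the histogram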
(e) The ECDF values for F ( x ) are calculated using the relative frequencies f ( x
), see Table B.1 .
(f) (i) We can easily plot the ECDF (Fig. B.4 a) for the original data using the
R command
(ii) Generating the ECDF for the grouped data requires more effort and is
not necessarily intuitive: first we categorize the continuous variable
using the function cut . We use the label option to indicate that the
name of each category corresponds to the upper limit of the respective
interval. This new variable is a “factor” variable and the plot.ecdf
function does not work with this type of variable. We need to first
change the “factor” variable into a “character” variable with strings
corresponding to the labels and coerce this into numeric values. Then
we use plot.ecdf , see also Fig. B.4 b. Alternatively, we can
directly replace the raw values with numbers corresponding to the upper
interval limits.
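A minimal sketch of both steps, again assuming the raw times are stored in a vector goals :
# (i) ECDF of the raw data
plot.ecdf(goals)
# (ii) ECDF of the grouped data: categorize, label by the upper interval limits,
#      turn the factor labels into numbers, and plot the ECDF
goals_grouped <- cut(goals, breaks = c(0, 15, 30, 45, 60, 75, 90, 96),
                     labels = c(15, 30, 45, 60, 75, 90, 96))
plot.ecdf(as.numeric(as.character(goals_grouped)))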
(g) To solve the exercises, we simply use Formula ( 2.11 ) and Rules ( 2.3 ff.)
(i) .
(ii) .
(iii)
.
(a) We obtain the relative frequencies for the first and fourth intervals as 0.125
( ). Accordingly, for the other two intervals, we obtain
frequencies of .
(b) We obtain the absolute frequencies for the first and fourth intervals as 250 (
). For the other intervals, we obtain 750 ( ).
Solution to Exercise 2.4
(a) The absolute frequencies are evident from the following table:
j   lower limit e_{j-1}   upper limit e_j   F(e_j)   f_j    n_j   d_j   midpoint
1    8                    14                0.25     0.25   125    6    11
2   14                    22                0.40     0.15    75    8    18
3   22                    34                0.75     0.35   175   12    28
4   34                    50                0.97     0.22   110   16    42
5   50                    82                1.00     0.03    15   32    66
(b) We obtain .
(a) The data needed to calculate and draw the ECDF is summarized in Table
B.2 ; the ECDF is plotted in Fig. B.5 .
Score 1 2 3 4 5 6 7 8 9 10
Results 1 3 8 8 27 30 11 6 4 2
1
Fig. B.5 Empirical cumulative distribution function for the variable “score”
(c) The grey solid line in Fig. B.5 summarizes the ECDF for the grouped data.
It goes from (0, 0) to (1, 1) with a breakpoint at (5, 0.47) since
summarizes the information for the group “disagree”. Using ( 2.11 ) we
calculate:
(d) The results of (b) and (c) are very different. The formula applied in (c)
assumes that the values in each category are uniformly distributed, i.e. that
within each category each value occurs equally often. However, we know
from (a) that this is not true: there are more values towards the central
score numbers. The assumption used in (c) is therefore inappropriate, as
also highlighted in Fig. B.5 .
(b) We can create the histogram as follows, see also Fig. B.6 b:
One can see that the distributions for both variables are not symmetric. For
example, when looking at the distance hiked, the difference between the
median and the first quartile ( ) is much larger than the
difference between the median and the third quartile ( ); this
indicates a distribution that is skewed towards the left.
(d) To answer this question, we can use the rules of linear transformations.
(e) To draw the box plots, we can use the results from (a), (b), and (c) in this
solution. The median, first quartile, third quartile, and the interquartile
range have already been calculated. It is also easy to determine the
minimum and maximum values from the table of ordered values in (a).
What is missing is the knowledge of whether to treat some of the values as
extreme values or not. For the distance hiked, an extreme value would be
any value or . It follows that there
is only one extreme value: 29.9 km. For the maximum altitude, there is no
extreme value because there is no value or
. The box plots are shown in Fig. B.7 a, b.
Fig. B.7 Box plots for Exercise 3.1
4/10 8/10 1
To estimate the weighted median, we need to determine the class for which
holds. This is clearly the case for the second class . Thus
(g) If the raw data is known, then the variance for the grouped data will be
identical to the variance calculated in (c). For educational purposes, we
show the identity here. The variance for grouped data can be calculated as:
We thus get
We further get
and
The variance is . The approximation is
therefore good. However, please note that the between-class variance was
estimated too low, but the within-class variance was estimated too high;
only the combination of the two variance components led to reasonable
results in this example. It is evident that the approximation in the third
class was not ideal. The middle of the interval, 25, was not a good proxy
for the true mean in this class, 28.65.
We can use the quantile function, together with the probs option, to
get the quantiles:
However, the reader will see that the results differ slightly from our results
obtained in (b). As noted in Example 3.1.5 , R offers nine different ways to
obtain quantiles, each of which can be chosen by the type argument. The
difference between these options cannot be understood easily without a
background in probability theory. It may, however, be worth highlighting
that we get the same results as in (b) if we choose the type=2 option in
this example. The interquartile ranges can be calculated by means of the
difference of the quantiles obtained above. To determine the mean absolute
deviation, we have to program our own function:
We can calculate the variance using the var command. However, as noted
in Example 3.2.4 , on p. 52, R uses the factor 1/(n − 1) rather than 1 / n when
calculating the variance. This important alternative formula for variance
estimation is explained in Chap. 9, Theorem 9.2.1 . To obtain the results
from (c), we hence need to multiply the output from R by (n − 1)/n :
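A sketch of all of these steps, assuming the observations are stored in a numeric vector x :
quantile(x, probs = c(0.25, 0.75))                   # default method (type = 7)
quantile(x, probs = c(0.25, 0.75), type = 2)         # reproduces the results from (b)
diff(quantile(x, probs = c(0.25, 0.75), type = 2))   # interquartile range
absdev <- function(x) mean(abs(x - median(x)))       # absolute deviation (here around the median)
absdev(x)
n <- length(x)
var(x) * (n - 1) / n                                 # variance with denominator n instead of n - 1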
(a) We need to solve the equation that defines the arithmetic mean:
(c) It is not possible to use the coefficient of variation because some of the
values are negative.
Using the formula for the arithmetic mean for grouped data,
we further get
This yields
(b) To predict the number of members in 2018, we could apply the average
growth rate to the number of members in 2016 for two consecutive years:
(c) We could use the approach outlined in (b) to predict the number of
members in 2025. However, this assumes that the average growth rate
between 2011 and 2016 remains valid until 2025. This is rather unrealistic.
The number of members of the club increases in some years, but decreases
in other years. There is no guarantee that the pattern observed until 2016
can be generalized to the future. This highlights that statistical
methodology should in general be used with care when making long-term
future predictions.
(i) 1 2 3 4
Number of members 10 8 8 4
Rel. number of members 10/30 8/30 8/30 4/30
Money invested 40 60 70 80
Rel. amount per group 40/250 60/250 70/250 80/250
(e) The Gini coefficient can be calculated using formula ( 3.37 ) as follows:
This function will work in general, though it returns a character vector. Using
as.numeric is one option to make the character string numeric, if necessary.
(a) In this exercise, we do not have individual data; i.e. we do not know how
much each inhabitant earns. The summarized data simply tells us about the
wealth of two groups. For simplicity and for illustrative purposes, we
assume that the wealth is equally distributed in each group. We determine
the points of the Lorenz curve as (0.8, 0.1) and (1, 1) because 80 % of the population earn 10 % of
the wealth and 100 % of the population earn everything. The respective
Lorenz curve is illustrated in Fig. B.9 a.
Fig. B.9 Lorenz curves
(b) The upper class lost its wealth. This means that 20 % of the population do
not own anything at all, while the remaining 80 % own the rest. This
yields the points (0.2, 0) and (1, 1), see also Fig. B.9 b.
(c) In this scenario, 20 % of the population leave the country. However, the
remaining 80 %—which are now 100 % of the population—earn the rest.
The money is equally distributed between the population. Figure B.9 c
shows this situation.
(a) It is necessary to use the harmonic mean to calculate the average speed.
Using , and
we get
(b) Travelling at 45.2 km/h means travelling about 361 km in 8 h. The bus will
not be in time.
(c) The Gini coefficient remains the same as the relative investment stays the
same.
(d) Using the library ineq we can easily reproduce the results in R :
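A sketch, constructing individual values under the assumption (ours, for this sketch) that the money is split equally within each group of 10, 8, 8, and 4 members investing 40, 60, 70, and 80 in total:
library(ineq)
x <- c(rep(40/10, 10), rep(60/8, 8), rep(70/8, 8), rep(80/4, 4))
Gini(x)       # Gini coefficient
plot(Lc(x))   # Lorenz curve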
(a) The easiest way to get all these measures is to use the summary function
and apply it to the data columns which relate to quantitative variables:
We then get the following output:
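A minimal sketch, assuming the quantitative columns of the pizza data are named time, temperature, bill, and pizzas (the names are assumptions):
summary(pizza[, c("time", "temperature", "bill", "pizzas")])
quantile(pizza$time, probs = 0.99)          # 99 % quantile of the delivery time
quantile(pizza$temperature, probs = 0.99)   # 99 % quantile of the temperature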
The results are 48.62 min for delivery time and 79.87 °C for temperature.
This means 99 % of the delivery times are less than or equal to 48.62 min
and 1 % of deliveries are greater than or equal to 48.62 min. Similarly, only
1 % of pizzas were delivered with a temperature greater than 79.87 °C.
(c) The following simple function calculates the absolute mean deviation:
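A simple sketch of such a function (the name amd is ours):
amd <- function(x) {
  mean(abs(x - mean(x)))   # absolute mean deviation around the arithmetic mean
}
amd(pizza$time)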
(d) We can use the scale , mean , and var commands, respectively.
As one would expect, the mean is zero and the variance is 1 for the scaled
variable.
(e) The boxplot command draws a box plot; the range option specifies the
range for which extreme values are defined. As specified in the help files,
range=0 produces a box plot with no extreme values.
(f) We use the cut command to create a variable which has the categories
, respectively. Using the interval mid-
points, as well as the relative frequencies in each class (obtained via the
table command), we get:
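A sketch of the idea with hypothetical class limits (the exercise's own limits are not shown above):
tcat <- cut(pizza$time, breaks = c(0, 10, 20, 30, 40, 60))   # hypothetical breaks
mids <- c(5, 15, 25, 35, 50)                                 # interval midpoints
relfreq <- table(tcat) / sum(table(tcat))                    # relative frequencies per class
sum(mids * relfreq)                                          # weighted mean for the grouped data
mean(pizza$time)                                             # raw-data mean, for comparison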
The weighted mean is very similar to the mean from the raw data, see
output above.
Café ( i )   x_i   R(x_i)   y_i   R(y_i)   d_i   d_i^2
1 3 1 6 2 −1 1
2 8 4 7 3 1 1
3 7 3 10 5 −2 4
4 9 5 8 4 1 1
5 5 2 4 1 1 1
(b) Above we have assigned ranks in an increasing order; i.e. the lowest
gets the lowest rank (1) and the highest gets the highest rank (5). If we
use decreasing order and assign the lowest rank to the highest values, we
get the following results:
Café ( i )   x_i   R(x_i)   y_i   R(y_i)   d_i   d_i^2
1 3 5 6 4 1 1
2 8 2 7 3 −1 1
3 7 3 10 1 2 4
4 9 1 8 2 −1 1
5 5 4 4 5 −1 1
                      X   Y
Coffee quality  Bad   2   1
                Good  3   4
Satisfied Unsatisfied
Car
Motorbike
We therefore have
The small values of V and confirm that the association is rather weak.
Satisfied Unsatisfied
Car 62 56
Motorbike 12 20
The chances of being satisfied with the insurance are 1.845 times higher
among those who drive a car.
(c) All χ²-based statistics suggest that there is only a small association
between the two variables, for both the and the tables. However,
the odds ratio gives us a more nuanced interpretation, showing that
customers driving a car are somewhat more satisfied with their insurance.
The question is whether the additional information from the odds ratio is
stable and trustworthy. Confidence intervals for the odds ratio can provide
guidance under such circumstances, see Sect. 9.4.4 for more details.
Fig. B.12 Scatter diagram for speed limit and number of deaths
(a) The scatter plot is given in Fig. B.12 . The black circles show the five
observations. A positive relationship can be discovered: the higher the
speed limit, the higher the number of deaths. However, “Italy” (the
observation on the top right) is the observation which gives the graph a
visible pattern and drives our impression about the potential relationship.
and therefore
Country ( i )
This yields
Please note that above we averaged the ranks for ties. The R function cor
uses a more complicated approach; this is why the results differ slightly
when using R .
(c) The results stay the same. Pearson’s correlation coefficient is invariant with
respect to linear transformations which means that it does not matter
whether we use miles/h or km/h.
(d) (i) The grey square in Fig. B.12 represents the additional observation.
The overall pattern of the scatter plot changes with this new
observation pair: the positive relationship between speed limit and
number of traffic deaths is not so clear anymore. This emphasizes
how individual observations may affect our impression of a scatter
plot.
and therefore .
There are several relative risks that can be calculated, for example:
The proportion of passengers who were rescued was 2.16 times higher
in the 1./2. class compared to the 3. class and staff. Similarly, the
proportion of passengers who were not rescued was 0.62 times lower in the
1./2. class compared to the 3. class and staff. The odds ratio is
. This is nothing but the ratio of the relative risks,
i.e. 2.16 / 0.63. The chance of being rescued (i.e. the ratio rescued/not
rescued) was almost 3.5 times higher for the 1./2. class compared to the 3.
class and staff.
(b) The scatter plot shows no clear pattern. This explains why the correlation
coefficient is close to 0. However, if we look only at the points for each
city separately, we see different structures for different cities: a possible
negative relationship for Davos (D), a rather positive relationship for
Polenca (P) and no visible relationship for Basel (B). This makes sense
because for winter holiday destinations hotel occupancy should be higher
when the temperature is low and for summer holiday destinations
occupancy should be high in the summer months.
This yields correlation coefficients of for Davos, 0.42 for Basel and
0.82 for Polenca. It is obvious that looking at X and Y only indicates no
correlation, but the information from Z shows strong linear relationships in
subgroups. This example shows the limitations of using correlation
coefficients and the necessity to think in a multivariate way. Regression
models offer solutions. We refer the reader to Chap. 11 , in particular Sect.
11.7.3 for more details.
(a) We use the visual rule of thumb and work from the top left to the bottom
right for the concordant pairs and from the top right to the bottom left for
the discordant pairs:
(b)
Use a leash
Agree or no. Disagree Total
Use for concerts Agree or no. 137 9 146
Disagree 2 10 12
Total 139 19 158
(d) The relative risk can either be summarized as:
The proportion of those who disagree with using the park for summer
concerts is 0.03 times lower in the group who agree or have no opinion
about using leashes for dogs compared to those who disagree. Similarly,
the proportion of those who disagree with using the park for summer
concerts is 36.6 times higher in the group who also disagree with using
leashes for dogs compared to those who do not disagree.
(f)
(g) In general, it makes sense to use all the information available, i.e. to use the
ordinal structure of the data and all three categories. While it is clear that
is superior to V in our example, one may argue that the relative risks or the
odds ratio could be more useful because they provide an intuitive
quantification on how the two variables relate to each other rather than just
giving a summary of strength and direction of association. However, as we
have seen earlier, the interpretations of the relative risks and the odds ratio
are quite clumsy in this example. It can be difficult to follow the somewhat
complicated interpretation. A simple summary would be to say that
agreement with both questions was strongly associated ( ).
(a) We read in the data, make sure the first column is recognized as containing
the row names, attach the data, and obtain the correlation using the cor
command:
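A minimal sketch; the file name and the use of read.csv are assumptions:
decathlon <- read.csv("decathlon.csv", row.names = 1)   # first column holds the row names
attach(decathlon)
cor(decathlon)                                          # matrix of all pairwise correlations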
(b) There are 10 variables. For the first variable, we can calculate the
correlation with 9 other variables. For the second variable, we can also
calculate the correlation with 9 other variables. However, we have already
calculated one out of the 9 correlations, i.e. when analysing variable
number one. So it is sufficient to calculate 8 correlations for the second
variable. Similarly, we need another 7 correlations for the third variable, 6
correlations for the fourth variable, and so on. In total, we therefore need
9 + 8 + ⋯ + 1 = 45 correlations. Since the correlation coefficient
describes the relationship between two variables, it makes sense to
summarize the results in a contingency table, similar to a matrix, see Fig.
B.13 .
(d) One way to omit rows with missing data automatically is to use the
na.omit command:
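Continuing the sketch from (a), a minimal example:
decathlon_complete <- na.omit(decathlon)   # drop rows with at least one missing value
cor(decathlon_complete)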
We can see that there is a higher proportion of high temperature ((65, 100])
in the category of short delivery times ((0, 30]) compared to long delivery
times ((30, 100]).
(b) Using the data from (a), we can calculate the odds ratio:
Thus, the chances of receiving a cold pizza are 0.18 lower if the delivery
time is short.
(c) We use the vcd and ryouready packages to determine the desired
measures of association:
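A sketch using assocstats() from the vcd package (column names are assumptions; the additional ordinal measures from ryouready are not shown here):
library(vcd)
tab <- table(cut(pizza$time, breaks = c(0, 30, 100)),
             cut(pizza$temperature, breaks = c(0, 65, 100)))
assocstats(tab)   # chi-square statistics, contingency coefficient, Cramer's V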
(d) The scatter plot (Fig. B.14 b) shows a decreasing temperature for an
increasing delivery time. This is also highlighted in the correlation
coefficients which are and for Bravais–Pearson and Spearman,
respectively.
handshakes in total.
(a) The customer takes the beers “with replacement” because the customer can
choose among any type of beer for each position in the tray. One can also
think of an urn model with 6 balls relating to the six different beers, where
beers are drawn with replacement and placed on the tray. The order in
which the beers are placed on the tray is not relevant. We thus have
combinations.
(b) If the customer insists on having at least one beer per brewery on his tray,
then 6 out of the 20 positions of the tray are already occupied. Based on the
same thoughts as in (a), we calculate the total number of combinations as
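As a numerical check of both counts (6 breweries, a tray of 20 bottles), one can use choose() in R :
choose(6 + 20 - 1, 20)   # (a): combinations with replacement, order irrelevant
choose(6 + 14 - 1, 14)   # (b): only 14 free positions remain once each brewery appears once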
(a) and
(b)
Different jury members are allowed to assign the same scores. We thus deal with
combinations “with replacement”. To verify this, just think of an urn with 61
balls where each ball refers to one possible score. Now one ball is drawn,
assigned to a specific jury member and then put back into the urn. Since each
score is “attached” to a particular jury member, we have combinations with
consideration of the order and therefore obtain a total of
possibilities. If you have difficulties in understanding the role of “replacement”
and “order” in this example, recall that each member has 61 scoring options:
thus, 61 × 61 × ⋯ × 61 (9 times) = 61^9 combinations are possible.
(a) We obtain:
(b) Based on the observations from (a) we conclude that each entry on the
diagonal line can be represented by . The sum of two consecutive
that:
Chapter 6
Solution to Exercise 6.1
(a) We obtain
.
.
.
.
because the events are pairwise disjoint.
.
.
(c)
(d) We are interested in the probability of the person failing exactly in one
exam. This corresponds to
.
(b) The number of favourable simple events is because the person draws
two presents out of the ten “wrong” presents:
. In Sect. 8.1.8 , we
(a) Let V denote the event that there is too much salt in the soup and let L
denote the event that the chef is in love. We know that
Similarly, we have
We therefore get:
V Total
(b) The variables are not stochastically independent since, for example,
.
(a) We define the following events: G Basil is treated well, Basil is not
treated well; E Basil survives, Basil dies. We know that
Using the Law of Total Probability, we get
because someone who never pays will always pay too late.
(a) We are interested in P ( M ), the probability that someone does not pay his
bill in a particular month, either because he is not able to or he pays too
late. We can use the Law of Total Probability to obtain the results:
(c) If the bill was not paid in a particular month, the probability is 20.8 % that
it will never be paid, and 78.2 % that it will be paid. One could argue that a
preventive measure that affects almost 79 % of trustworthy customers is
not ideal and the bank should therefore not block a credit card if a bill is
not paid on time.
(a) The “and” operator refers to the joint distribution of two variables. In our
example, we want to know the probability of being infected and having
been transported by the truck. This probability can be directly obtained
from the respective entry in the contingency table: 40 out of 200 cows fulfil
both criteria and thus
(a) The two shots are independent of each other and thus
(b) We need to calculate
(c)
Chapter 7
Solution to Exercise 7.1
(b) We know from Theorem 7.2.3 that for any continuous variable
and therefore . We calculate
.
We thus obtain .
The manipulated die yields on average higher values than a fair die because
its expectation is . The variability of is, however, similar because
.
Comparing the results from (a) and (b) shows clearly that .
Recall that . It is interesting to see that for some
transformations T ( X ) it holds that , but for some it does
not. This reminds us to be careful when thinking of the implications of
transformations.
Fig. B.15 Cumulative distribution function for the proportion of wine sold
(a) There are several ways to plot the CDF. One possibility is to define the
function and plot it with the curve command. Since the function has
different definitions for the intervals , we need to take
this into account. Remember that a logical statement in R corresponds to a
number, i.e. TRUE = 1 and FALSE = 0; we can thus simply add the
different pieces of the function and multiply them with a condition which
specifies if X is contained in the interval or not (Fig. B.15 ):
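A sketch of this trick with a hypothetical piecewise CDF (the exercise's actual F ( x ) differs):
# hypothetical CDF: 0 below 0, 3x^2 - 2x^3 on [0, 1], 1 above 1
cdf <- function(x) {
  (3 * x^2 - 2 * x^3) * (x >= 0 & x <= 1) + 1 * (x > 1)
}
curve(cdf, from = -0.5, to = 1.5, xlab = "x", ylab = "F(x)")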
(d) We have already defined the CDF in (a). We can now simply plug in the x -
values of interest:
(i) :
(ii) :
(b) We calculate
and therefore
(d)
(a) The marginal distributions are obtained by the row and column sums of the
joint PDF, respectively. For example, .
x          0     1            y          1     2      3
P(X = x)   3/4   1/4          P(Y = y)   1/6   7/12   1/4
The marginal distribution of X tells us how many customers sought help via
the telephone hotline (75 %) and via email (25 %). The marginal
distribution of Y represents the distribution of the satisfaction level,
highlighting that more than half of the customers (7/12) were “satisfied”.
Among those who used the email customer service two-thirds were
unsatisfied, one-third were satisfied, and no one was very satisfied.
(d) As we know from ( 7.27 ), two discrete random variables are said to be
independent if . However, in our
example, . This means that X
and Y are not independent.
            Y
             0     1     2
X     −1     0.3   0.2   0.2
       2     0.1   0.1   0.1
(b) The marginal distributions are obtained from the row and column sums of
the joint PDF, respectively:
x          −1    2            y          0     1     2
P(X = x)   0.7   0.3          P(Y = y)   0.4   0.3   0.3
(d) The joint distribution of X and Y can be used to obtain the desired
distribution of U . For example, If and , then .
The respective probability is because
and there is no other combination of X -
and Y -values which yields . The distribution of U is therefore as
follows:
k           −1    0     1     2     3     4
P(U = k)    0.3   0.2   0.2   0.1   0.1   0.1
(e) We calculate
It can be seen that . This makes sense
because we know from ( 7.31 ) that . However,
. This follows from ( 7.7.1 )
which says that and therefore,
only if the covariance is 0. We know from (c)
that X and Y are not independent and thus .
(a) The random variables X and Y are independent if the balls are drawn with
replacement. This becomes clear by understanding that drawing with
replacement implies that for both the draws, the same balls are in the urn
and the conditions in each draw remain the same. The first draw has no
implications for the second draw.
If we were drawing the balls without replacement, then the first draw
could possibly have implications for the second draw: for instance, if the
first ball drawn was red, then the second one could not be red because there
is only one red ball in the urn. This means that drawing without
replacement implies dependency of X and Y . This can also be seen by
evaluating the independence assumption ( 7.27 ):
(b) The marginal probabilities can be obtained from the given
information. For example, 3 out of 8 balls are black and thus
. The conditional distributions can be calculated easily by
realizing that under the assumed dependency of X and Y , the second draw
is always based on 7 balls (8 balls minus the one drawn under the condition
)—e.g. if the first ball drawn is black, then 7 balls, 2 of which are
black, remain in the urn and . We thus calculate
and obtain
Y
1 2 3
1
X 2 0
and therefore
which is
for .
(c) To determine , we need the cumulative marginal distribution of X
:
(a)
This relates to
(a) It seems appropriate to model the number of fused bulbs with a Poisson
distribution. We assume, however, that the numbers of fused bulbs on
two consecutive days are independent of each other; i.e. they only depend
on the rate λ but not on the time t .
which means that, on average, 1.73 bulbs are fused per day. The
variance is
We see that mean and variance are similar, which is an indication that the
choice of a Poisson distribution is appropriate since for a Poisson distribution
both the expectation and the variance are equal to λ.
(c) The following table lists the proportions (i.e. relative frequencies )
together with the probabilities from a Po (1.73)-distribution. As a
reference, we also list the probabilities from a Po (2)-distribution since it is
not practically possible that 1.73 bulbs stop working and it may hence be
an option to round the mean.
Po (1.73) Po (2)
One can see that observed proportions and expected probabilities are close
together which indicates again that the choice of a Poisson distribution was
appropriate. Chapter 9 gives more details on how to estimate parameters,
such as , from data if it is unknown.
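A small sketch comparing the two reference distributions mentioned above:
round(dpois(0:6, 1.73), 3)   # probabilities under Po(1.73)
round(dpois(0:6, 2), 3)      # probabilities under Po(2)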
This means that, on average, it takes more than half a day until one of the
bulbs gets fused.
The output shows that at least 64 tickets need to be bought to have a 99%
guarantee that at least three tickets win. This equates to spending € 96.
Figure B.16 shows the relationship between the number of tickets bought
and the probability of having at least three winning tickets.
(c) The solution of (a) shows that it is well worth taking part in the raffle:
Marco pays €96 and, with a probability of 99 %, he wins at least three
prizes which are worth € . More generally, the money
generated by the raffle is € , but the prizes are worth €
. One may suspect that the company produces the
appliances much more cheaply than they are sold for and is thus so
generous.
Fig. B.16 Probability to have at least three winning tickets given the number of tickets bought
(a) We are dealing with a geometric distribution here. Since we are interested
in , we can calculate:
(a) The random variable Y follows a Poisson distribution, see Theorem 8.2.1
for more details.
(b) The fisherman catches, on average, 3 fish an hour. We can thus assume that
the rate is 3 and thus . Similarly, which means
that it takes, on average, 20 min to catch another fish.
We would have obtained the same results in R using the dpois(5,3) and
dpois(0,3) commands.
(b) The probability of choosing lemon tart for the first two guests is 1. We thus
need to determine the probability that 3 out of the remaining 3 guests order
lemon tart:
Using dmultinom(c(0,0,3),prob=c(0.2,0.3,0.5)) in R , we
get the same result.
We see that the mean of the arithmetic means is close to zero, but not
exactly zero. The variance is approximately , as one
would expect from the Central Limit Theorem. The distribution is
symmetric, similar to a normal distribution, see Fig. B.17 . It follows that
is approximately distributed, as one could expect from the
Theorem of Large Numbers and the Central Limit Theorem. It is important
to understand that is not fixed but a random variable which follows a
distribution, i.e. the normal distribution.
(b) We can use the same code as above, except we use the exponential instead
of the normal distribution:
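A minimal sketch of such a simulation (the number of repetitions, the sample size, and the seed are our own choices):
set.seed(123)
R <- 1000; n <- 100
means <- replicate(R, mean(rexp(n, rate = 1)))   # arithmetic means of Exp(1) samples
mean(means); var(means)                          # close to 1 and 1/n, respectively
plot(density(means))                             # kernel density plot, cf. Fig. B.18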
The realizations are i.i.d. observations. One can see that, as in (a), is
approximately distributed. It is evident that the do
not necessarily need to follow a normal distribution for to follow a
normal distribution, see also Fig. B.18 a.
(c) Increasing the number of repetitions makes the distribution look closer to a
normal distribution, see Fig. B.18 b. This visualizes that as n tends to
infinity gets closer to a -distribution.
Fig. B.17 Kernel density plot of the distribution simulated in Exercise 8.11a
Fig. B.18 Kernel density plots for Exercises 8.11b and 8.11c
Chapter 9
Solution to Exercise 9.1
(b) Using the results from (a) we can write the log-likelihood function for
as:
because . We can write down this function in R as follows:
Figure B.19 shows the log-likelihood function. It can be seen that the
function reaches its maximum at .
Using we calculate as
Note that this equates to the PDF from Definition 8.2.1 for and . The
likelihood function is therefore
(b) To calculate the MSE we need to determine the bias and the variance of the
estimators as we know from Eq. ( 9.5 ). It follows from (a) that both
estimators are unbiased and hence the bias is 0. For the variances we get:
Since the mean squared error consists of the sum of the variance and
squared bias, the MSE for and are and , respectively.
One can see that the larger n , the more superior over in terms
of the mean squared error. In other words, is more efficient than
because its variance is lower for any .
(b) The variance is unknown and needs to be estimated. We thus need the t -
distribution to construct the confidence interval. We can determine
using qt(0.975,23) or Table C.2 (though the latter is not
detailed enough), and . This yields
(c) If the coin is fair, we can use the prior judgement p = 0.5. We would
then need
Using R we get:
This result is different because the above command does not use the normal
approximation. In addition, p is rather small which means that care must be
exercised when using the results of the confidence interval with normal
approximation in this context.
(b) The point estimate of 10.6 % is substantially higher than 3.2 %. The lower
bound confidence interval is still larger than the proportion of failures at
county level. This indicates that the school is doing worse than most other
schools in the county.
is
(b) There is the danger of selection bias. A total of 500 households refused to
take part in the study. It may be that their preferences regarding TV shows
are different from the other 2500 households. For example, it may well be
possible that those watching almost no TV refuse to be included; or that
those watching TV shows which are considered embarrassing by society
are refusing as well. In general, missing data may cause point estimates to
be biased, depending on the underlying mechanism causing the absence.
To calculate a confidence interval with a width of not more than 0.2 s, the
results of at least 21 athletes are needed.
(b) The sample size is 30. Thus, the confidence interval width should be
smaller than 0.2 s. This is indeed true as the calculations show:
The athlete’s best time is thus below the lower confidence limit. He is
among the top 10 % of all athletes, using the results of the confidence
interval.
This means the chances that a pizza arrives in time are about 1.08 times
higher for Laura compared with Melissa. To calculate the 95 % confidence
interval, we need , and
(b) We can reproduce the results in R by attaching the pizza data, creating a
categorical delivery time variable (using cut ) and then applying the
oddsratio command from the library epitools onto the contingency
table:
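A sketch of these steps; the two names are taken from the text above, while the column names are assumptions:
library(epitools)
intime <- cut(pizza$time, breaks = c(0, 30, 100))   # delivered within 30 min or not
tab <- table(pizza$operator, intime)                # operators: Laura and Melissa
oddsratio(tab)                                      # odds ratio with 95 % confidence interval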
Chapter 10
Solution to Exercise 10.1
A type I error is defined as the probability of rejecting H0 if H0 is true. This error
occurs if A thinks that B does confess, but B does not. In this scenario, A
confesses, goes free, and B serves a three-year sentence. A type II error is
defined as the probability of accepting H0, despite the fact that H0 is wrong. In
this case, B does confess, but A does not. In this scenario, B goes free and A
serves a three-year sentence. A type II error is therefore worse for A . With a
statistical test, we always control the type I error, but not the type II error.
(b) It is a one-sample problem for μ: thus, for known variance, the Gauss test
can be used; the t -test otherwise, see also Appendix D . Since, in this
exercise, σ² is assumed to be known, we can use the Gauss test; i.e. we can
compare the test statistic with the normal distribution (as opposed to the t -
distribution when using the t -test). The sample size is small: we must
therefore assume a normal distribution for the data.
(c) To calculate the realized test statistic, we need the arithmetic mean
. Then we obtain
(e) The test statistic is the same as in (b): . However, the critical
region changes. H0 gets rejected if . Again, the null
hypothesis is rejected. The producer was right in hypothesizing that the
average weight of his chocolate bars was lower than 100 g.
(a) We calculate
                          No auction   Auction
Mean                      16.949       10.995
Variance                   2.948        2.461
s (standard deviation)     1.717        1.569
v (coeff. of variation)    0.101        0.143
Note that we use the unbiased estimates for the variance and the standard
deviation as explained in Chap. 9 ; i.e. we use the factor 1/(n − 1) rather than 1 / n .
It is evident that the mean price of the auctions ( ) is lower than the
mean non-auction price ( ), and also lower than the price from the
online book store. There is, however, a higher variability in relation to the
mean for the auction prices. One may speculate that the best offers are
found in the auctions, but that there are no major differences between the
online store and the internet book store, i.e.
€16.95,
€16.95,
.
Using the decision rules from Table 10.2 , we conclude that H0 gets
rejected if |t(x)| > t_{13;0.975}. We can calculate t_{13;0.975} ≈ 2.16 either by using
Table C.2 or by using R ( qt(0.975,13) ). Since , we
keep the null hypothesis. There is not enough evidence to suggest that the
prices of the online store differ from €16.95 (which is the price from the
internet book store).
(c) Using ( 9.6 ) we can calculate the upper and lower limits of the confidence
interval as,
(e) We need to conduct two tests: the two-sample t -test for the assumption of
equal variances and the Welch test for the assumption of different
variances.
(i) Two-sample t -test :
(ii) Welch test : For the Welch test, we calculate the test statistic, using (
10.6 ) as:
(f) The F -test relies on the assumption of the normal distribution and tests the
hypotheses:
Value 9.3 9.52 9.54 10.01 10.5 10.5 10.55 10.59 11.02 11.03
Sample a a a a a a a a a a
Rank 1 2 3 4 5 6 7 8 9 10
Value 11.89 11.99 12 13.79 15.49 15.9 15.9 15.9 15.9 15.9
Sample a a a na a na na na na na
Rank 11 12 13 14 15 16 17 18 19 20
Value 15.99 16.98 16.98 17.72 18.19 18.19 19.97 19.97
Sample na na na na na na na na
Rank 21 22 23 24 25 26 27 28
We can calculate the rank sums as and
, respectively. Thus
(h) We can type in the data and evaluate the summary statistics using the
mean, var , and sd commands:
The t.test command can be used to answer questions (b)–(e). For (b)
and (c) we use
The test decision can be made by means of either the p -value (
) or the confidence interval ([15.95; 17.94], which covers
16.95). To answer (d) and (e) we need to make use of the option
alternative which specifies the alternative hypothesis:
Note that the two-sample test provides a confidence interval for the
difference of the means. Questions (f) and (g) can be easily solved by using
the var.test and wilcox.test commands:
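A sketch; the two price vectors are read off the rank table above ("a" = auction, "na" = no auction):
an <- c(13.79, 15.9, 15.9, 15.9, 15.9, 15.9, 15.99, 16.98, 16.98, 17.72,
        18.19, 18.19, 19.97, 19.97)
a  <- c(9.3, 9.52, 9.54, 10.01, 10.5, 10.5, 10.55, 10.59, 11.02, 11.03,
        11.89, 11.99, 12, 15.49)
mean(an); var(an); sd(an)                     # summary statistics
t.test(an, mu = 16.95)                        # one-sample t-test against 16.95
t.test(an, mu = 16.95, alternative = "less")  # one-sided alternative (direction chosen for illustration)
t.test(an, a, var.equal = TRUE)               # two-sample t-test assuming equal variances
var.test(an, a)                               # F-test for equality of variances
wilcox.test(an, a)                            # Mann-Whitney U-test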
Person i 1 2 3 4 5 6 7 8 9 10
Before diet 80 95 70 82 71 70 120 105 111 90
After diet 78 94 69 83 65 69 118 103 112 88
Differences d 2 1 1 −1 6 1 2 2 −1 2
Using and
Since the confidence interval does not overlap with zero, we reject the null
hypothesis; there is enough evidence that the mean difference is different (i.e.
greater) from zero. While the test is significant and suggests a weight difference,
it is worth noting that the mean benefit from the diet is only 1.5 kg. Whether this
is a relevant reduction in weight has to be decided by the ten people
experimenting with the diet.
Using R we get
The test statistic ( ) does not fall outside the critical region ([14;
32]); therefore, the null hypothesis is not rejected. The same result is
obtained by using the binomial test in R :
binom.test(c(30,200),p=0.1) . This yields a p -value of 0.11239
and a confidence interval covering 10 % ([0.09; 0.18]). Interestingly, the
approximate binomial test rejects the null hypothesis, whereas the exact
test keeps it. Since the latter test is more precise, it is recommended to
follow its outcome.
(c) The research hypothesis is that the new machine produces fewer deficient
shirts:
This yields:
The null hypothesis is rejected because
. This means we accept the
alternative hypothesis that the new machine is better than the old one.
Machine 1 Machine 2
Deficient 30 7
Fine 200 112
This yields a p -value of 0.0438 suggesting, as the test in (c), that the null
hypothesis should be rejected. Note that the confidence interval for the
odds ratio, also reported by R , does not necessarily yield the same
conclusion as the test of Fisher.
(a) To compare the two means, we should use the Welch test (since the
variances cannot be assumed to be equal). The test statistic is
Value 6 29 40 47 47 64 87 88 91 98
Sample 2 2 2 2 2 2 2 1 1 2
Rank 1 2 3 4 5 6 7 8 9 10
Value 99 99 101 104 105 108 111 112 261 351
Sample 1 1 1 1 1 1 1 1 2 2
Rank 11 12 13 14 15 16 17 18 19 20
This gives us
(a) To answer this question, we need to conduct the χ²-independence test. The
test statistic t ( x , y ) is identical to Pearson’s χ² statistic, introduced in
Chap. 4 . In Exercise 4.4, we have calculated this statistic already,
, see p. 350 for the details. The null hypothesis of independence is rejected
if the test statistic exceeds the respective critical value. With 2 categories for the rescue status and
4 for the travel class, there are (2 − 1)(4 − 1) = 3 degrees of freedom, and we obtain a critical value of about 7.81 using Table C.3 (or
qchisq(0.95,3) in R ). Since the test statistic exceeds this value, we reject the null hypothesis
of independence.
(b) The output refers to a test of homogeneity: the null hypothesis is that the
proportion of passengers rescued is identical for the different travel classes.
This hypothesis is rejected because p is smaller than . It is evident
that the proportions of rescued passengers in the first two classes (60 %,
43.9 %) are much higher than in the other classes (25 %, 23.8 %). One can
see that the test statistic (182.06) is identical to (a). This is not surprising:
the χ²-independence test and the χ²-test of homogeneity are technically
identical, but the null hypotheses differ. In (a), we showed that “rescue
status” and “travel class” are not independent; in (b), we have seen that the
conditional distributions of rescue status given travel class differ by travel
class, i.e. that the proportions of those rescued differ by the categories 1.
class/2. class/3. class/staff.
(c) In both (a) and (b), there exists a true difference in the mean. However,
only in (b) is the t -test able to detect the difference. This highlights that
smaller differences can only be detected if the sample size is sufficiently
large. However, if the sample size is very large, it may well be that the test
detects a difference where there is no difference.
(a) After reading in and attaching the data, we can simply use the t.test
command to compare the expenditure of the two samples:
We see that the null hypothesis is not rejected because (also,
the confidence interval overlaps with “0”).
(b) A two-sample t -test and a U -test yield the same conclusion. The p -values,
obtained with
are 0.1946 and 0.145, respectively. Interestingly, the test statistic of the
two-sample t -test is almost identical to the one from the Welch test in (a) (
)—this indicates that the assumption of equal variances may not be
unreasonable.
(c) We can use the usual t.test command together with the option
alternative = ’greater’ to get the solution.
Here, p is much smaller than 0.001; hence, the null hypothesis can be
rejected. Women spend more on theatre visits than men.
We cannot confirm that the mean delivery time is less than 30 min and that
the mean temperature is greater than 65 °C. This is not surprising: we
have already seen in Exercise 3.10 that the manager should be unsatisfied
with the performance of his company.
(b) We can use the exact binomial test to investigate and
. For the binom.test command, we need to know the
numbers of successes and failures, i.e. the number of deliveries where a
free wine should have been given to the customer. Applying the table
commands yields 229 and 1037 deliveries, respectively.
(c) We first need to create a new categorical variable (using cut ) which
divides the temperatures into two parts: below and above 65 °C. Then we
can simply apply the test commands ( fisher.test , chisq.test ,
prop.test ) to the table of branch and temperature:
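A minimal sketch of these commands (the column names are assumptions):
hot <- cut(pizza$temperature, breaks = c(0, 65, 100))   # below vs. above 65 °C
tab <- table(pizza$branch, hot)
chisq.test(tab)
prop.test(tab)
fisher.test(tab)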
We know that the two tests lead to identical results. For both of them the
p -value is 0.2283, which suggests that we should keep the null hypothesis.
There is not enough evidence to support that the proportion of hot pizzas
differs by operator; i.e. we cannot reject the hypothesis that the two
variables are independent. The test of Fisher yields the same result ( ).
(d) The null hypothesis is that the proportion of deliveries is the same for each
branch: . To test this hypothesis, we need a
goodness-of-fit test:
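A minimal sketch (the branch column name is an assumption):
chisq.test(table(pizza$branch))   # goodness-of-fit test with equal expected proportions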
We can see that the proportions are almost identical and that the null
hypothesis is not rejected ( ).
Calculating leads to
For children who spend no time on the internet at all, this model predicts
6.16 h of deep sleep. Each hour spent on the internet decreases the time in
deep sleep by 0.45 h which is 27 min.
About 45 % of the variation can be explained by the model. The fit of the
model to the data is neither very good nor very bad.
(c) After collecting the data in two vectors ( c() ), printing a summary of the
linear model ( summary(lm()) ) reproduces the results. A scatter plot
can be produced by plotting the two vectors against each other ( plot() ).
The regression line can be added with abline() :
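A sketch with hypothetical data (the exercise's own values, which give the estimates quoted above, are not reproduced here):
internet <- c(0.3, 2.2, 0.5, 0.7, 1.0, 1.8, 3.0, 0.2, 2.3)   # hypothetical hours online
sleep    <- c(5.8, 4.4, 6.5, 5.8, 5.6, 5.0, 4.8, 6.0, 6.1)   # hypothetical hours of deep sleep
fit <- lm(sleep ~ internet)
summary(fit)
plot(internet, sleep)
abline(fit)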
The plot is displayed in Fig. B.20 .
Thus, those children who are on the internet for a long time (i.e. >1 h) sleep
on average 0.85 h (about 51 min) less. If we change the coding of 1’s and 0’s,
will just have a different sign: . In this case, we can conclude
that children who spend less time on the internet sleep on average 0.85 h
longer than children who spend more time on the internet. This is the same
conclusion and highlights that the choice of coding does not affect the
interpretation of the results.
Fig. B.20 Scatter plot and regression line for the association of internet use and deep sleep
This indicates strong positive correlation: the higher the height, the higher
the weight. Since , we already know that the fit of a
linear regression model will be good (no matter whether height or weight
is treated as outcome). From ( 11.11 ), we also know that will be
positive.
(d)–(g) The black dots in Fig. B.21 show the scatter plot of the data. There is
clearly a positive association in that greater height implies greater weight.
This is also emphasized by the regression line estimated in (b). The two
additional points appear in dark grey in the plot. It is obvious that they do
not match the pattern observed in the original 17 data points. One may
therefore speculate that with the inclusion of the two new points will be
smaller. To estimate the new regression line we need
This yields
This shows that the two added points shrink the estimate from 1.129 to
0.28. The association becomes less clear. This is an insightful example
showing that least squares estimates are generally sensitive to outliers
which can potentially affect the results.
Fig. B.21 Scatter plot and regression line for both 17 and 19 observations
(a) The point estimate of suggests a 0.077 % increase of hotel occupation for
each one degree increase in temperature. However, the null hypothesis of
cannot be rejected because . We therefore cannot
show an association between temperature and hotel occupation.
(b) The average hotel occupation is higher in Davos (7.9 %) and Polenca
(0.9 %) compared with Basel (reference category). However, these
differences are not significant. Both and
cannot be rejected. The model cannot show a significant difference in hotel
occupation between Davos/Polenca and Basel.
(c) The analysis of variance table tells us that the null hypothesis of equal
average temperatures in the three cities ( ) cannot be rejected.
Note that in this example the overall F -test would have given us the same
results.
(d) In the multivariate model, the main conclusions of (a) and (b) do not
change: testing never leads to the rejection of the null
hypothesis. We cannot show an association between temperature and hotel
occupation (given the city); and we cannot show an association between
city and hotel occupation (given the temperature).
(e) Stratifying the data yields considerably different results compared to (a)–
(c): In Davos, where tourists go for skiing, each increase of 1 °C relates to
a drop in hotel occupation of 2.7 %. The estimate is also
significantly different from zero ( ). In Polenca, a summer
holiday destination, an increase of 1 °C implies an increase of hotel
occupation of almost 4 %. This estimate is also significantly different from
zero ( ). In Basel, a business destination, there is a
somewhat higher hotel occupation for higher temperatures ( );
however, the estimate is not significantly different from zero. While there
is no overall association between temperature and hotel occupation (see (a)
and (c)), there is an association between them if one looks at the different
cities separately. This suggests that an interaction between temperature and
city should be included in the model.
(f) The design matrix contains a column of 1’s (to model the intercept), the
temperature and two dummies for the categorical variable “city” because it
has three categories. The matrix also contains the interaction terms which
are both the product of temperature and Davos and temperature and
Polenca. The matrix has 36 rows because there are 36 observations: 12 for
each city.
(h) From (f) it follows that the point estimates for are 1.31 for Basel,
for Davos, and 3.97 for Polenca. Confidence intervals for these
estimates can be obtained via ( 11.29 ):
We calculate . With
(obtained via from the model output or from the second row and
second column of the covariance matrix),
, and also
we obtain:
The 95 % confidence intervals are therefore:
(b)–(c) The plot on the left shows that the residuals are certainly not normally
distributed as required by the model assumptions. The dots do not
approximately match the bisecting line. There are too many high positive
residuals which means that we are likely dealing with a right-skewed
distribution of residuals. The plot on the right looks alright: no systematic
pattern can be seen; it is a random plot. The histogram of both theatre
expenditure and log(theatre expenditure) suggests that a log-
transformation may improve the model, see Fig. B.22 . Log-
transformations are often helpful when the outcome’s distribution is
skewed to the right.
(e) While in (b) the residuals were clearly not normally distributed, this
assumption seems to be fulfilled now: the QQ-plot shows dots which lie
approximately on the bisecting line. The fitted values versus residuals plot
remains a chaos plot. In conclusion, the log-transformation of the
outcome helped to improve the quality of the model.
(a) The multivariate model is obtained by using the lm() command and
separating the covariates with the + sign. Applying summary() to the
model returns the comprehensive summary.
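A sketch of the model call; the covariate names are assumptions based on the variables discussed in this solution:
mp <- lm(time ~ temperature + bill + pizzas + branch + driver + operator +
           day + discount_customer, data = pizza)
summary(mp)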
The output shows that lower temperature, higher bills, and more ordered
pizzas increase the delivery times. The branch in the East is the fastest, and
so is the driver Domenico. While there are differences with respect to day,
discount customers, and the operator, they are not significant at the 5 %
level.
(b) The confidence intervals are calculated as: . We know
from the model output from (a) that there are 1248 degrees of freedom
(1266 observations − 18 estimated coefficients). The respective quantile
from the t -distribution is obtained with the qt() function. The
coefficients are accessed via the coefficients command (alternatively:
mp$coefficients ); the variances of the coefficients are either
accessed via the diagonal elements of the covariance matrix (
diag(vcov(mp)) ) or the model summary ( summary(mp)[[4]]
[,2] )—both of which are laborious. The summary of coefficients, lower
confidence limit (lcl), and upper confidence limit (ucl) may be summarized
in a matrix, e.g. via merging the individual columns with the cbind
command.
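A sketch of these steps for the model mp :
est <- coefficients(mp)        # point estimates
se  <- sqrt(diag(vcov(mp)))    # standard errors of the coefficients
tq  <- qt(0.975, df = 1248)    # t-quantile with 1248 degrees of freedom
cbind(est, lcl = est - tq * se, ucl = est + tq * se)
# confint(mp) returns the same intervals with a single command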
(c) The variance is estimated as the residual sum of squares divided by the
degrees of freedom, see also ( 11.27 ). Applying the residuals
command to the model and using other basic operations yields an estimated
variance of 28.86936.
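A short sketch:
sum(residuals(mp)^2) / mp$df.residual   # residual sum of squares / degrees of freedom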
The output shows that the full model has an AIC of 4275.15. The smallest
AIC is achieved by removing the operator variable from the model.
The reduced model has an AIC of 4273.37. Removing the discount
customer variable from the model yields an improved AIC (
).
The model selection procedure stops here as removing any variable would
only increase the AIC, not decrease it.
The final model, based on backward selection with AIC, includes day,
driver, branch, number of pizzas ordered, temperature, and bill.
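The text does not show the command used; a common way to carry out such a backward selection is step() , sketched here:
ms <- step(mp, direction = "backward")   # backward selection by AIC
summary(ms)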
(f) Fitting the linear model with the variables obtained from (e) and obtaining
the summary of it yields an of 0.3092.
This is only marginally higher than the goodness of fit from the full model
( ). While the selected model is better than the model with
all variables, both, with respect to AIC and , the results are very close
and remind us of the possible instability of applying automated model
selection procedures.
(h) Not all variables identified in (e) represent necessarily a “cause” for
delayed or improved delivery time. It makes sense to speculate that
because many pizzas are being delivered (and need to be made!) the
delivery time increases. There might also be reasons why a certain driver is
improving the delivery time: maybe he does not care about red lights. This
could be investigated further given the results of the model above.
However, high temperature does not cause the delivery time to be shorter;
likely it is the other way around: the temperature is hotter because the
delivery time is shorter. However, all of these considerations remain
speculation. A regression model only exhibits associations. If there is a
significant association, we know that given an accepted error (e.g. 5 %),
values of are higher when values of are higher. This is useful but it
does not say whether caused or vice versa.
The prediction is 36.5 min and therefore 0.8 min higher than the real
delivery time.
Technical Appendix C
More details on Chap. 3
Proof of equation ( 3.27 ).
We obtain the following expressions for [ i ]–[ iii ]:
which means that we can summarize the above findings in the following
inequality:
and therefore
Definition C.1
A sequence of random variables, , converges stochastically to 0, if for
any
(C.2)
holds.
This is equivalent to .
Theorem C.1
(Theorem of large numbers) Consider n i.i.d. random variables with
and . It holds that
(C.3)
(C.4)
This means that for each , the right-hand side of the above equation tends
to 1 as which gives a similar interpretation as the Theorem of Large
Numbers.
Central Limit Theorem. Let ( ) be n i.i.d. random variables
with and . If we consider the sum , we obtain
and . If we want to standardize we can
use Theorem 7.3.2 to obtain
(C.5)
i.e. it holds that and .
Theorem C.2
(Central Limit Theorem) Let ( ) be n i.i.d. random variables with
and . denotes the standardized sum of .
The CDF of is
(C.7)
Note that Γ(·) is the Gamma function, defined as Γ(n) = (n − 1)! for positive
integers n and as Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt otherwise.
PDF of the t -Distribution. The PDF of the t -distribution, with n degrees of
freedom, is defined as
(C.8)
Note that Γ(·) is the Gamma function, defined as Γ(n) = (n − 1)! for positive
integers n and as Γ(x) = ∫₀^∞ t^(x−1) e^(−t) dt otherwise.
PDF of the F -Distribution. The PDF of the F -distribution, with n and m
degrees of freedom, respectively, is defined as
(C.9)
The PDF of the F -distribution with m and n degrees of freedom can be
derived by interchanging the roles of m and n .
Let . The Exact Test of Fisher uses the fact that the row marginal
frequencies and in the following table
Population B
Total Z
are fixed by the sample sizes and . Conditional on the total number of
successes (i.e. the column margins are assumed to be fixed), the only
remaining random variable is X (since the other three entries of the table are then
determined by the realization x of X and the margins). Under H0, it
can be shown that
i.e.
Note that in the equation above we use the fact that, under H0, the total number of successes is B (
n , p ) distributed; see the additivity theorem for the binomial distribution, i.e.
Theorem 8.1.1 .
Example C.1
Consider two competing lotteries A and B. Say we buy 10 tickets from each
lottery and test the hypothesis of equal winning probabilities. The data can be
summarized in a table:
with output
For the example data and , the null hypothesis is rejected, since the
p -value is lower than . For the calculation of the p -value, the one-sided and
two-sided cases have to be distinguished. The idea is that while fixing the
margins at the observed values (i.e. 25, 25, 8, 42), we have to calculate the sum
of the probabilities of all tables which have lower probability than the observed
table. In R , one can use the functions dhyper and phyper for calculating
(cumulative) probabilities. For example, can be calculated as
0 25 25
8 17 25
8 42 50
with probability dhyper(0,25,25,8) , which is
lower than . The sum is which is the (left) one-
sided p -value. In this case (not generally true!), the two-sided p -value is simply
two times the one-sided value, i.e. .
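A small sketch of these calculations:
dhyper(0, 25, 25, 8)       # probability of the table with x = 0 shown above
x_obs <- 1                 # placeholder for the observed value of X
phyper(x_obs, 25, 25, 8)   # cumulative probability P(X <= x_obs)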
Remark C.1
The two-sided version of the Exact Test of Fisher can also be used as a test of
independence of two binary variables. It is equivalent to the test of equality of
two proportions, see Example 10.8.2 .
versus
or if
(C.10)
which is, under the null hypothesis, F -distributed with and
degrees of freedom, see also Sect. 8.3.3 .
Critical Region
Two-Sided Case. The motivation behind the construction of the critical
region for the two-sided case, : vs. : , is that if the null
hypothesis is true (i.e. the two variances are equal) then the test statistic ( C.10 )
would be 1; also, . Therefore, very small (but positive) and very large
values of should lead to a rejection of . The critical region can then
be written as , where and are critical values such that
Example C.2
Let and . Using the qf command in R , we can
determine the critical values as:
The results are and .
Remark C.2
There is the following relationship between quantiles of the F -distribution:
f_{1−α}(n, m) = 1 / f_{α}(m, n).
Test Decisions
Case Reject , if
(a) or
(b)
Remark C.3
We have tacitly assumed that the expected values and are unknown and
have to be estimated. However, this happens rarely, if ever, in practice. When
estimating the expected values by the arithmetic means, it would be appropriate
to increase the degrees of freedom from to and to .
Interestingly, standard software will not handle this case correctly.
Example C.3
A company is putting baked beans into cans. Two independent machines at two
sites are used. The filling weights are assumed to be normally distributed with
mean 1000 g. It is speculated that one machine is more precise than the other.
Two samples of the two machines give the following results:
Sample n
X 20 1000.49 72.38
Y 25 1000.26 45.42
Remark C.4
For the t -test, we remarked that the assumption of normality is not crucial
because the test statistic is approximately normally distributed, even for
moderate sample sizes. However, the F -test relies heavily on the assumption of
normality. This is why alternative tests are often used, for example the Levene’s
test.
More details on Chap. 11
Obtaining the Least Squares Estimates in the Linear Model. The function S (
a , b ) describes our optimization problem of minimizing the residual sum of
squares:
(C.11)
(C.12)
Now we set ( C.11 ) and ( C.12 ) as equal to zero, respectively:
This equates to
Multiplying (I ) by yields
Using leads to
If we use
and
Remark C.5
To show that the above solutions really relate to a minimum, and not to a
maximum, we would need to look at all the second-order partial derivatives of S
( a , b ) and prove that the bordered Hessian matrix containing these derivatives
is always positive definite. We omit this proof however.
Variance Decomposition.
We start with the following equation:
We therefore obtain
which equates to
The Relation between and .
We therefore obtain
Given that in the model is assumed to be fixed (i.e. non-stochastic and not
following any distribution), we obtain
How to Obtain the Variance of the Least Squares Estimator. With the
same arguments as above (i.e. X is fixed and non-stochastic) and applying the
rule from the scalar case to matrices we obtain:
We therefore have
Distribution Tables
See Tables C.1 , C.2 and C.3 .
Table C.1 CDF values for the standard normal distribution, . These values can also be obtained in R
by using the pnorm(p) command
z .00 .01 .02 .03 .04 .05 .06 .07 .08 .09
0.0 0.500000 0.503989 0.507978 0.511966 0.515953 0.519939 0.523922 0.527903 0.531881 0.535856
0.1 0.539828 0.543795 0.547758 0.551717 0.555670 0.559618 0.563559 0.567495 0.571424 0.575345
0.2 0.579260 0.583166 0.587064 0.590954 0.594835 0.598706 0.602568 0.606420 0.610261 0.614092
0.3 0.617911 0.621720 0.625516 0.629300 0.633072 0.636831 0.640576 0.644309 0.648027 0.651732
0.4 0.655422 0.659097 0.662757 0.666402 0.670031 0.673645 0.677242 0.680822 0.684386 0.687933
0.5 0.691462 0.694974 0.698468 0.701944 0.705401 0.708840 0.712260 0.715661 0.719043 0.722405
0.6 0.725747 0.729069 0.732371 0.735653 0.738914 0.742154 0.745373 0.748571 0.751748 0.754903
0.7 0.758036 0.761148 0.764238 0.767305 0.770350 0.773373 0.776373 0.779350 0.782305 0.785236
0.8 0.788145 0.791030 0.793892 0.796731 0.799546 0.802337 0.805105 0.807850 0.810570 0.813267
0.9 0.815940 0.818589 0.821214 0.823814 0.826391 0.828944 0.831472 0.833977 0.836457 0.838913
1.0 0.841345 0.843752 0.846136 0.848495 0.850830 0.853141 0.855428 0.857690 0.859929 0.862143
1.1 0.864334 0.866500 0.868643 0.870762 0.872857 0.874928 0.876976 0.879000 0.881000 0.882977
1.2 0.884930 0.886861 0.888768 0.890651 0.892512 0.894350 0.896165 0.897958 0.899727 0.901475
1.3 0.903200 0.904902 0.906582 0.908241 0.909877 0.911492 0.913085 0.914657 0.916207 0.917736
1.4 0.919243 0.920730 0.922196 0.923641 0.925066 0.926471 0.927855 0.929219 0.930563 0.931888
1.5 0.933193 0.934478 0.935745 0.936992 0.938220 0.939429 0.940620 0.941792 0.942947 0.944083
1.6 0.945201 0.946301 0.947384 0.948449 0.949497 0.950529 0.951543 0.952540 0.953521 0.954486
1.7 0.955435 0.956367 0.957284 0.958185 0.959070 0.959941 0.960796 0.961636 0.962462 0.963273
1.8 0.964070 0.964852 0.965620 0.966375 0.967116 0.967843 0.968557 0.969258 0.969946 0.970621
1.9 0.971283 0.971933 0.972571 0.973197 0.973810 0.974412 0.975002 0.975581 0.976148 0.976705
2.0 0.977250 0.977784 0.978308 0.978822 0.979325 0.979818 0.980301 0.980774 0.981237 0.981691
2.1 0.982136 0.982571 0.982997 0.983414 0.983823 0.984222 0.984614 0.984997 0.985371 0.985738
2.2 0.986097 0.986447 0.986791 0.987126 0.987455 0.987776 0.988089 0.988396 0.988696 0.988989
2.3 0.989276 0.989556 0.989830 0.990097 0.990358 0.990613 0.990863 0.991106 0.991344 0.991576
2.4 0.991802 0.992024 0.992240 0.992451 0.992656 0.992857 0.993053 0.993244 0.993431 0.993613
2.5 0.993790 0.993963 0.994132 0.994297 0.994457 0.994614 0.994766 0.994915 0.995060 0.995201
2.6 0.995339 0.995473 0.995604 0.995731 0.995855 0.995975 0.996093 0.996207 0.996319 0.996427
2.7 0.996533 0.996636 0.996736 0.996833 0.996928 0.997020 0.997110 0.997197 0.997282 0.997365
2.8 0.997445 0.997523 0.997599 0.997673 0.997744 0.997814 0.997882 0.997948 0.998012 0.998074
2.9 0.998134 0.998193 0.998250 0.998305 0.998359 0.998411 0.998462 0.998511 0.998559 0.998605
3.0 0.998650 0.998694 0.998736 0.998777 0.998817 0.998856 0.998893 0.998930 0.998965 0.998999
Table C.2 $(1-\alpha)$ quantiles of the t -distribution. These values can also be obtained in R using the
qt(p,df) command
df
Table C.3 $(1-\alpha)$ quantiles of the $\chi^2$-distribution. These values can also be obtained in R using the
qchisq(p,df) command
df
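For example, the tabulated values can be reproduced directly in R; the arguments below are arbitrary illustrative choices:
pnorm(1.0)            # 0.841345, the entry for z = 1.0 in Table C.1
qt(0.975, df = 10)    # 0.975 quantile of the t-distribution with 10 degrees of freedom
qchisq(0.95, df = 5)  # 0.95 quantile of the chi-squared distribution with 5 degrees of freedom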
Appendix D: Visual Summaries
Descriptive Data Analysis
Summary of Tests for Continuous and Ordinal Variables
Summary of Tests for Nominal Variables
References
Adler, J. (2012). R in a Nutshell . Boston: O’Reilly.
Bock, J. (1997). Bestimmung des Stichprobenumfangs . Munich: Oldenbourg Verlag. (in German).
Casella, G., & Berger, R. (2002). Statistical inference . Boston, MA: Cengage Learning.
Chow, S., Wang, H., & Shao, J. (2007). Sample size calculations in clinical research . London: Chapman
and Hall.
Everitt, B., & Hothorn, T. (2011). An introduction to applied multivariate analysis with R . New York:
Springer.
Groves, R., Fowler, F., Couper, M., Lepkowski, J., Singer, E., & Tourangeau, R. (2009). Survey
methodology. Wiley series in survey methodology . Hoboken, NJ: Wiley.
Hernan, M., & Robins, J. (2017). Causal inference . Boca Raton: Chapman and Hall/CRC.
Hyndman, R. J., & Fan, Y. (1996). Sample quantiles in statistical packages. American Statistician , 50 ,
361–365.
Kauermann, G., & Küchenhoff, H. (2011). Stichproben - Methoden und praktische Umsetzung in R .
Heidelberg: Springer. (in German).
R Core Team (2016). R: A language and environment for statistical computing . Vienna, Austria: R
Foundation for Statistical Computing. http://www.R-project.org/ .
Young, G., & Smith, R. (2005). Essentials of statistical inference . Cambridge: Cambridge University Press.
Index
A
Absolute
deviation
mean deviation
median deviation
Additivity theorem
Akaike's information criterion (AIC)
Analysis of variance
ANOVA
Arithmetic mean
properties
weighted
Association
B
Backward selection
Bar chart
Behrens-Fisher problem
Bernoulli distribution
Bias
Binomial
coefficient
distribution
Bivariate random variables
Box plot
C
Calculation rules
CDF
expectation
normal random variables
probabilities
variance
Causation
CDF
calculation rules
joint
quantile
quartile
Central limit theorem
Central tendency
Certain event
χ²-distribution
χ²-variance test
χ²-goodness-of-fit test
χ²-independence test
χ²-test of homogeneity
Coding
dummy
effect
Coefficient
binomial
of variation
regression
Combinations
with order
with replacement
without order
without replacement
Combinatorics
Complementary event
Composite event
Conditional
distribution
frequency distribution
probability
relative frequency distribution
Confidence
bound
interval
interval for μ; σ² known
interval for μ; σ² unknown
interval for p
interval for the odds ratio
level
Consistency
Contingency
coefficient
table
Continuous variable
Convergence
stochastic
Correlation coefficient
of Bravais–Pearson
of Spearman
product moment
Covariance
Covariate
Cramer's V
Cross tabulation, see contingency table
Cumulative
distribution function
frequency, see frequency, cumulative
marginal distribution
D
Data
matrix
observation
set
transformation
unit
Decomposition
complete
Degenerate distribution
Density
Design matrix
Dispersion
absolute deviation
absolute mean deviation
absolute median deviation
mean squared error
measure
range
standard deviation
Distribution
Bernoulli
Binomial
conditional
conditional frequency
conditional relative frequency
continuous
cumulative marginal
degenerate
exponential
F
Gauss
geometric
hypergeometric
independent and identical
joint relative frequency
marginal
marginal frequency
marginal relative frequency
multinomial
normal
Poisson
standard
Student
t
uniform discrete
Duality
Dummy variable
E
Efficiency
Elementary event
Empirical cumulative distribution function (ECDF)
Epitools, see R packages
Error
type I
type II
Estimation
interval
least squares
maximum likelihood
method of moments
nonparametric
parametric
Event
additive theorem
certain
composite
disjoint
elementary
impossible
simple
sure
theorem of additivity
Expectation
calculation rules
Expected frequencies
Experiment
Laplace
random
Exponential distribution
F
Factorial function, see function, factorial
F-distribution
Fisher
exact test
Foreign, see R packages
Frequency
absolute
cumulative
expected
relative
table
F-Test
Function
cumulative distribution, see CDF
empirical cumulative distribution, see ECDF
factorial
joint cumulative distribution
joint probability distribution
probability mass, see PMF
step
G
Gamma
of Goodman and Kruskal
Gauss test
one-sample
two-sample
Generalized method of moments
Geometric distribution
Ggplot2, see R packages
Gini coefficient
standardized
Goodman and Kruskal's γ
Goodness of fit
adjusted measure
measure
test
Graph
bar chart
box plot
histogram
kernel density plot
Lorenz curve
pie chart
QQ-plot
random plot
scatter plot
Growth
factor
rate
H
Heteroscedasticity
Histogram
Homoscedasticity
Hypergeometric distribution
Hypothesis
alternative
linear
null
one-sided
two-sided
I
i.i.d.
Impossible event
Independence
pairwise
random variables
stochastic
Ineq, see R packages
Inequality
Tschebyschev
Inference
least squares
maximum likelihood
method of moments
Interaction
Intercept
Interquartile range
Interval estimation
J
Joint
cumulative distribution function
frequency distribution
probability distribution function
relative frequency distribution
K
Kernel density plot
Kolmogorov–Smirnov test
L
Laplace
experiment
probability
Lattice, see R packages
Least squares
Life time
Likelihood
Linear
hypotheses
Linear model
residuals
Linear regression
interaction
Line of equality
Location parameter
Log-linear model
Lorenz curve
M
Mann–Whitney U-Test
Marginal
distribution
frequency distribution
relative frequency distribution
MASS, see R packages
Matrix
covariance
design
Maximum likelihood estimation (MLE)
Mean
arithmetic
properties
weighted arithmetic
Mean squared error (MSE)
Measure
dispersion
symmetric
Measure of association
coefficient
contingency coefficient
correlation coefficient
Cramer's V
odds ratio
rank correlation coefficient
relative risk
Memorylessness
Method of moments
Model
fitted regression model
fitted value
linear
log-linear
nonlinear
Multinomial distribution
Multiple linear regression
Multiplication theorem of probability
Multivariate
Mvtnorm, see R packages
N
Namibia
Newton–Raphson
Nominal variable
Normal distribution
O
Observation
Odds ratio
One-sample problem
Ordered
set
values
Ordinal variable
Outcome
P
Parameter
location
regression
space
PDF
joint
Percentile
Permutation
without replacement
with replacement
Pie chart
Plot
kernel density
QQ
random
scatter
trumpet
Poisson distribution
Polynomial regression
Population
Power
Probability
calculation rules
conditional
density function
Laplace
mass function
posterior
prior
Probability theory
axioms
p-value
Q
QQ-plot
Quantile
Quartile
Quintile
R
R²
adjusted
Random variables
bivariate
continuous
discrete
i.i.d
independence
standardization
Range
Realization
Real stories
Reference category
Regression
line
linear
multiple linear
polynomial
Regressor
Relationship
Relative risk
Residuals
standardized
Response
R packages
compositions
epitools
foreign
ggplot2
ineq
lattice
MASS
mvtnorm
ryouready
TeachingDemos
vcd
S
Sample
estimate
pooled variance
space
variance
Sampling
without replacement
Scale
absolute
continuous
interval
nominal
ordinal
ratio
Scatter plot
Set
ordered
unordered
Significance level
Simple event
Slope
Standard deviation
Standard error
Standardization
Standard normal distribution
Statistic
Step function
Stochastic convergence
Stuart's
Sufficiency
Sure event
T
Table
contingency
frequency
of Stuart
T-distribution
Test
ANOVA
Binomial
χ² for variance
χ² of homogeneity
χ²-goodness of fit
χ²-independence
duality
equivalence
F
Fisher
Friedman
Kolmogorov–Smirnov
Kruskal–Wallis
Mann–Whitney U
McNemar
Mood
one-sample Gauss test
one-sample t-test
one-sided
overall F
paired t-test
sign
significance
two-sample binomial
two-sample Gauss test
two-sample t-test
two-sided
U
Welch
Wilcoxon–Mann–Whitney
Wilcoxon rank sum
Theorem
additivity
additivity of χ² variables
additivity of disjoint events
Bayes
central limit
i.i.d.
large numbers
law of total probability
multiplication for probabilities
Neyman–Fisher Factorization
PDF
standardization
Student
Tschebyschev
variance decomposition
Tie
Transformation
T-test
one-sample
paired
two-sample
Two-sample problem
U
Uniform distribution
continuous
discrete
Unit
Unordered set
U test
V
Variable
binary
bivariate random
categorical
continuous
dependent
discrete
dummy
grouped
independent
nominal
ordinal
random
response
standardized
Variance
additivity theorem
between classes
calculation rules
decomposition
dispersion
pooled
within classes
Vcd, see R packages
W
Welch test
Whiskers
Wilcoxon–Mann–Whitney test