Fundamentals of Applied
Probability and Random
Processes
2nd Edition

Oliver C. Ibe
University of Massachusetts, Lowell, Massachusetts

AMSTERDAM • BOSTON • HEIDELBERG • LONDON


NEW YORK • OXFORD • PARIS • SAN DIEGO
SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Academic Press is an imprint of Elsevier
525 B Street, Suite 1900, San Diego, CA 92101-4495, USA
225 Wyman Street, Waltham, MA 02451, USA
Second edition 2014
Copyright © 2014, 2005 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced, stored in a retrieval system or transmitted in
any form or by any means electronic, mechanical, photocopying, recording or otherwise
without the prior written permission of the publisher.
Permissions may be sought directly from Elsevier’s Science & Technology Rights Department
in Oxford, UK: phone (+44) (0) 1865 843830; fax (+44) (0) 1865 853333;
email: permissions@elsevier.com. Alternatively you can submit your request online by
visiting the Elsevier web site at http://elsevier.com/locate/permissions, and selecting
Obtaining permission to use Elsevier material.
Notice
No responsibility is assumed by the publisher for any injury and/or damage to persons
or property as a matter of products liability, negligence or otherwise, or from any use or
operation of any methods, products, instructions or ideas contained in the material herein.
Because of rapid advances in the medical sciences, in particular, independent verification
of diagnoses and drug dosages should be made.

Library of Congress Cataloging-in-Publication Data


Ibe, Oliver C. (Oliver Chukwudi), 1947-
Fundamentals of applied probability and random processes / Oliver Ibe. – Second edition.
pages cm
Includes bibliographical references and index.
ISBN 978-0-12-800852-2 (alk. paper)
1. Probabilities. I. Title.
QA273.I24 2014
519.2–dc23
2014005103
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library
For information on all Academic Press publications
visit our web site at store.elsevier.com
Printed and bound in USA
14 15 16 17 18 10 9 8 7 6 5 4 3 2 1
ISBN: 978-0-12-800852-2
Contents

ACKNOWLEDGMENT ................................................................................ xiv


PREFACE TO THE SECOND EDITION........................................................ xvi
PREFACE TO FIRST EDITION ................................................................... xix

CHAPTER 1 Basic Probability Concepts................................................... 1


1.1 Introduction .............................................................................. 1
1.2 Sample Space and Events ....................................................... 2
1.3 Definitions of Probability ......................................................... 4
1.3.1 Axiomatic Definition ..................................................... 4
1.3.2 Relative-Frequency Definition ..................................... 4
1.3.3 Classical Definition ...................................................... 4
1.4 Applications of Probability ....................................................... 6
1.4.1 Information Theory....................................................... 6
1.4.2 Reliability Engineering ................................................. 7
1.4.3 Quality Control ............................................................. 7
1.4.4 Channel Noise .............................................................. 8
1.4.5 System Simulation ....................................................... 8
1.5 Elementary Set Theory ............................................................ 9
1.5.1 Set Operations.............................................................. 9
1.5.2 Number of Subsets of a Set ...................................... 10
1.5.3 Venn Diagram............................................................. 10
1.5.4 Set Identities .............................................................. 11
1.5.5 Duality Principle......................................................... 13
1.6 Properties of Probability........................................................ 13
1.7 Conditional Probability........................................................... 14
1.7.1 Total Probability and the Bayes’ Theorem................ 16
1.7.2 Tree Diagram ............................................................. 22
1.8 Independent Events ............................................................... 26
1.9 Combined Experiments.......................................................... 29


1.10 Basic Combinatorial Analysis .............................................. 30


1.10.1 Permutations .......................................................... 30
1.10.2 Circular Arrangement ............................................ 32
1.10.3 Applications of Permutations in Probability.......... 33
1.10.4 Combinations.......................................................... 34
1.10.5 The Binomial Theorem........................................... 37
1.10.6 Stirling’s Formula .................................................. 37
1.10.7 The Fundamental Counting Rule........................... 38
1.10.8 Applications of Combinations in Probability.......... 40
1.11 Reliability Applications......................................................... 41
1.12 Chapter Summary ................................................................ 46
1.13 Problems .............................................................................. 46
Section 1.2 Sample Space and Events ............................. 46
Section 1.3 Definitions of Probability ............................... 47
Section 1.5 Elementary Set Theory .................................. 48
Section 1.6 Properties of Probability................................ 50
Section 1.7 Conditional Probability................................... 50
Section 1.8 Independent Events ....................................... 52
Section 1.10 Combinatorial Analysis ................................ 52
Section 1.11 Reliability Applications................................. 53
CHAPTER 2 Random Variables............................................................... 57
2.1 Introduction .......................................................................... 57
2.2 Definition of a Random Variable .......................................... 57
2.3 Events Defined by Random Variables.................................. 58
2.4 Distribution Functions.......................................................... 59
2.5 Discrete Random Variables ................................................. 61
2.5.1 Obtaining the PMF from the CDF ............................ 65
2.6 Continuous Random Variables ............................................ 67
2.7 Chapter Summary ................................................................ 72
2.8 Problems .............................................................................. 73
Section 2.4 Distribution Functions ................................... 73
Section 2.5 Discrete Random Variables ........................... 75
Section 2.6 Continuous Random Variables ...................... 77
CHAPTER 3 Moments of Random Variables .......................................... 81
3.1 Introduction .......................................................................... 81
3.2 Expectation ........................................................................... 82
3.3 Expectation of Nonnegative Random Variables ..................84
3.4 Moments of Random Variables and the Variance...............86
3.5 Conditional Expectations...................................................... 95
3.6 The Markov Inequality.......................................................... 96
3.7 The Chebyshev Inequality .................................................... 97

3.8 Chapter Summary ................................................................ 98


3.9 Problems .............................................................................. 98
Section 3.2 Expected Values ............................................. 98
Section 3.4 Moments of Random Variables and the
Variance........................................................ 100
Section 3.5 Conditional Expectations ............................. 101
Sections 3.6 and 3.7 Markov and Chebyshev
Inequalities .................................... 102
CHAPTER 4 Special Probability Distributions .......................................103
4.1 Introduction ........................................................................ 103
4.2 The Bernoulli Trial and Bernoulli Distribution ................. 103
4.3 Binomial Distribution ......................................................... 105
4.4 Geometric Distribution....................................................... 108
4.4.1 CDF of the Geometric Distribution ........................ 111
4.4.2 Modified Geometric Distribution............................ 111
4.4.3 “Forgetfulness” Property of the Geometric
Distribution ............................................................. 112
4.5 Pascal Distribution............................................................. 113
4.5.1 Distinction Between Binomial and Pascal
Distributions ........................................................... 117
4.6 Hypergeometric Distribution ............................................. 118
4.7 Poisson Distribution........................................................... 122
4.7.1 Poisson Approximation of the Binomial
Distribution ............................................................. 123
4.8 Exponential Distribution..................................................... 124
4.8.1 “Forgetfulness” Property of the Exponential
Distribution ............................................................. 126
4.8.2 Relationship between the Exponential and
Poisson Distributions ............................................. 127
4.9 Erlang Distribution ............................................................. 128
4.10 Uniform Distribution .......................................................... 133
4.10.1 The Discrete Uniform Distribution ...................... 134
4.11 Normal Distribution ........................................................... 135
4.11.1 Normal Approximation of the Binomial
Distribution ........................................................... 138
4.11.2 The Error Function............................................... 139
4.11.3 The Q-Function ..................................................... 140
4.12 The Hazard Function.......................................................... 141
4.13 Truncated Probability Distributions................................... 143
4.13.1 Truncated Binomial Distribution.......................... 145
4.13.2 Truncated Geometric Distribution ....................... 145

4.13.3 Truncated Poisson Distribution ........................... 145


4.13.4 Truncated Normal Distribution............................ 146
4.14 Chapter Summary .............................................................. 146
4.15 Problems ............................................................................ 147
Section 4.3 Binomial Distribution ................................... 147
Section 4.4 Geometric Distribution................................. 151
Section 4.5 Pascal Distribution....................................... 152
Section 4.6 Hypergeometric Distribution ....................... 153
Section 4.7 Poisson Distribution..................................... 154
Section 4.8 Exponential Distribution .............................. 154
Section 4.9 Erlang Distribution....................................... 156
Section 4.10 Uniform Distribution .................................. 157
Section 4.11 Normal Distribution ................................... 158
CHAPTER 5 Multiple Random Variables ...............................................159
5.1 Introduction ........................................................................ 159
5.2 Joint CDFs of Bivariate Random Variables ....................... 159
5.2.1 Properties of the Joint CDF ................................... 159
5.3 Discrete Bivariate Random Variables................................ 160
5.4 Continuous Bivariate Random Variables........................... 163
5.5 Determining Probabilities from a Joint CDF..................... 165
5.6 Conditional Distributions ................................................... 168
5.6.1 Conditional PMF for Discrete Bivariate
Random Variables .................................................. 168
5.6.2 Conditional PDF for Continuous Bivariate
Random Variables .................................................. 169
5.6.3 Conditional Means and Variances.......................... 170
5.6.4 Simple Rule for Independence .............................. 171
5.7 Covariance and Correlation Coefficient............................. 172
5.8 Multivariate Random Variables.......................................... 176
5.9 Multinomial Distributions .................................................. 177
5.10 Chapter Summary .............................................................. 179
5.11 Problems ............................................................................ 179
Section 5.3 Discrete Bivariate Random Variables ......... 179
Section 5.4 Continuous Bivariate Random Variables..... 180
Section 5.6 Conditional Distributions ............................. 182
Section 5.7 Covariance and Correlation Coefficient ...... 183
Section 5.9 Multinomial Distributions ............................ 183
CHAPTER 6 Functions of Random Variables ........................................185
6.1 Introduction ........................................................................ 185
6.2 Functions of One Random Variable ................................... 185
6.2.1 Linear Functions .................................................... 185

6.2.2 Power Functions .................................................... 187


6.3 Expectation of a Function of One Random Variable ......... 188
6.3.1 Moments of a Linear Function............................... 188
6.3.2 Expected Value of a Conditional Expectation ........ 189
6.4 Sums of Independent Random Variables .......................... 189
6.4.1 Moments of the Sum of Random Variables .......... 196
6.4.2 Sum of Discrete Random Variables....................... 197
6.4.3 Sum of Independent Binomial Random
Variables ................................................................. 200
6.4.4 Sum of Independent Poisson Random Variables .. 201
6.4.5 The Spare Parts Problem ...................................... 201
6.5 Minimum of Two Independent Random Variables ............ 204
6.6 Maximum of Two Independent Random Variables ........... 205
6.7 Comparison of the Interconnection Models ...................... 207
6.8 Two Functions of Two Random Variables ......................... 209
6.8.1 Application of the Transformation Method ........... 210
6.9 Laws of Large Numbers .................................................... 212
6.10 The Central Limit Theorem ............................................... 214
6.11 Order Statistics .................................................................. 215
6.12 Chapter Summary .............................................................. 219
6.13 Problems ............................................................................ 219
Section 6.2 Functions of One Random Variable............. 219
Section 6.4 Sums of Random Variables ......................... 220
Sections 6.4 and 6.5 Maximum and Minimum of
Independent Random Variables.... 221
Section 6.8 Two Functions of Two Random Variables... 222
Section 6.10 The Central Limit Theorem ....................... 222
Section 6.11 Order Statistics .......................................... 223
CHAPTER 7 Transform Methods ...........................................................225
7.1 Introduction ........................................................................ 225
7.2 The Characteristic Function .............................................. 225
7.2.1 Moment-Generating Property of the
Characteristic Function.......................................... 226
7.2.2 Sums of Independent Random Variables .............. 227
7.2.3 The Characteristic Functions of Some
Well-Known Distributions ...................................... 228
7.3 The s-Transform................................................................. 231
7.3.1 Moment-Generating Property of the s-Transform 231
7.3.2 The s-Transform of the PDF of the Sum of
Independent Random Variables............................. 232
7.3.3 The s-Transforms of Some Well-Known PDFs..... 232

7.4 The z-Transform .................................................................. 236


7.4.1 Moment-Generating Property of the
z-Transform ............................................................. 239
7.4.2 The z-Transform of the PMF of the Sum of
Independent Random Variables............................... 240
7.4.3 The z-Transform of Some Well-Known PMFs ........ 240
7.5 Random Sum of Random Variables .................................... 242
7.6 Chapter Summary ................................................................ 246
7.7 Problems .............................................................................. 247
Section 7.2 Characteristic Functions ............................... 247
Section 7.3 s-Transforms ................................................. 247
Section 7.4 z-Transforms ................................................. 249
Section 7.5 Random Sum of Random Variables .............. 250
CHAPTER 8 Introduction to Descriptive Statistics................................253
8.1 Introduction .......................................................................... 253
8.2 Descriptive Statistics ........................................................... 255
8.3 Measures of Central Tendency............................................ 255
8.3.1 Mean ......................................................................... 256
8.3.2 Median ...................................................................... 256
8.3.3 Mode ......................................................................... 257
8.4 Measures of Dispersion ....................................................... 257
8.4.1 Range........................................................................ 257
8.4.2 Quartiles and Percentiles ........................................ 258
8.4.3 Variance .................................................................... 259
8.4.4 Standard Deviation ................................................... 259
8.5 Graphical and Tabular Displays........................................... 261
8.5.1 Dot Plots................................................................... 261
8.5.2 Frequency Distribution ............................................ 262
8.5.3 Histograms ............................................................... 263
8.5.4 Frequency Polygons................................................. 263
8.5.5 Bar Graphs ............................................................... 264
8.5.6 Pie Chart................................................................... 265
8.5.7 Box and Whiskers Plot............................................. 266
8.6 Shape of Frequency Distributions: Skewness .................... 269
8.7 Shape of Frequency Distributions: Peakedness ................. 271
8.8 Chapter Summary ................................................................ 272
8.9 Problems .............................................................................. 273
Section 8.3 Measures of Central Tendency ..................... 273
Section 8.4 Measures of Dispersion................................. 273
Section 8.6 Graphical Displays ......................................... 274
Section 8.7 Shape of Frequency Distribution................... 274

CHAPTER 9 Introduction to Inferential Statistics .................................275


9.1 Introduction ........................................................................ 275
9.2 Sampling Theory ................................................................ 276
9.2.1 The Sample Mean................................................... 277
9.2.2 The Sample Variance ............................................. 279
9.2.3 Sampling Distributions........................................... 280
9.3 Estimation Theory .............................................................. 281
9.3.1 Point Estimate, Interval Estimate, and Confidence
Interval.................................................................... 283
9.3.2 Maximum Likelihood Estimation ........................... 285
9.3.3 Minimum Mean Squared Error Estimation ........... 289
9.4 Hypothesis Testing............................................................. 291
9.4.1 Hypothesis Test Procedure.................................... 291
9.4.2 Type I and Type II Errors........................................ 292
9.4.3 One-Tailed and Two-Tailed Tests.......................... 293
9.5 Regression Analysis........................................................... 298
9.6 Chapter Summary .............................................................. 301
9.7 Problems ............................................................................ 302
Section 9.2 Sampling Theory .......................................... 302
Section 9.3 Estimation Theory ........................................ 303
Section 9.4 Hypothesis Testing....................................... 303
Section 9.5 Regression Analysis..................................... 304
CHAPTER 10 Introduction to Random Processes...................................307
10.1 Introduction ........................................................................ 307
10.2 Classification of Random Processes ................................. 308
10.3 Characterizing a Random Process .................................... 309
10.3.1 Mean and Autocorrelation Function .................... 309
10.3.2 The Autocovariance Function............................... 310
10.4 Crosscorrelation and Crosscovariance Functions ............ 311
10.4.1 Review of Some Trigonometric Identities ........... 312
10.5 Stationary Random Processes........................................... 314
10.5.1 Strict-Sense Stationary Processes ...................... 314
10.5.2 Wide-Sense Stationary Processes....................... 315
10.6 Ergodic Random Processes............................................... 321
10.7 Power Spectral Density ..................................................... 323
10.7.1 White Noise .......................................................... 328
10.8 Discrete-Time Random Processes.................................... 329
10.8.1 Mean, Autocorrelation Function and
Autocovariance Function...................................... 329
10.8.2 Power Spectral Density of a Random Sequence.... 330
10.8.3 Sampling of Continuous-Time Processes ........... 331

10.9 Chapter Summary.............................................................. 333


10.10 Problems............................................................................ 334
Section 10.3 Mean, Autocorrelation Function and
Autocovariance Function ........................... 334
Section 10.4 Crosscorrelation and Crosscovariance
Functions.................................................... 335
Section 10.5 Wide-Sense Stationary Processes ............ 336
Section 10.6 Ergodic Random Processes....................... 339
Section 10.7 Power Spectral Density ............................. 339
Section 10.8 Discrete-Time Random Processes............ 342
CHAPTER 11 Linear Systems with Random Inputs ................................345
11.1 Introduction ........................................................................ 345
11.2 Overview of Linear Systems with Deterministic Inputs.... 345
11.3 Linear Systems with Continuous-Time Random Inputs ... 347
11.4 Linear Systems with Discrete-Time Random Inputs........ 352
11.5 Autoregressive Moving Average Process.......................... 354
11.5.1 Moving Average Process...................................... 355
11.5.2 Autoregressive Process ....................................... 357
11.5.3 ARMA Process ...................................................... 360
11.6 Chapter Summary .............................................................. 361
11.7 Problems ............................................................................ 361
Section 11.2 Linear Systems with Deterministic
Input............................................................ 361
Section 11.3 Linear Systems with Continuous
Random Input............................................. 362
Section 11.4 Linear Systems with Discrete
Random Input............................................. 365
Section 11.5 Autoregressive Moving
Average Processes .................................... 367
CHAPTER 12 Special Random Processes ...............................................369
12.1 Introduction ........................................................................ 369
12.2 The Bernoulli Process ....................................................... 369
12.3 Random Walk Process....................................................... 371
12.3.1 Symmetric Simple Random Walk ........................ 372
12.3.2 Gambler’s Ruin..................................................... 373
12.4 The Gaussian Process ....................................................... 375
12.4.1 White Gaussian Noise Process ............................ 377
12.5 Poisson Process................................................................. 378
12.5.1 Counting Processes ............................................. 378
12.5.2 Independent Increment Processes...................... 379
12.5.3 Stationary Increments.......................................... 379

12.5.4 Definitions of a Poisson Process ......................... 380


12.5.5 Interarrival Times for the Poisson Process ........ 381
12.5.6 Conditional and Joint PMFs for Poisson
Processes ............................................................. 382
12.5.7 Compound Poisson Process ................................ 383
12.5.8 Combinations of Independent Poisson
Processes ............................................................. 385
12.5.9 Competing Independent Poisson Processes....... 386
12.5.10 Subdivision of a Poisson Process and the
Filtered Poisson Process ..................................... 387
12.5.11 Random Incidence................................................ 388
12.6 Markov Processes.............................................................. 391
12.7 Discrete-Time Markov Chains ........................................... 393
12.7.1 State Transition Probability Matrix ...................... 393
12.7.2 The n-Step State Transition Probability .............. 393
12.7.3 State Transition Diagrams ................................... 395
12.7.4 Classification of States......................................... 396
12.7.5 Limiting-State Probabilities ................................. 399
12.7.6 Doubly Stochastic Matrix ..................................... 402
12.8 Continuous-Time Markov Chains ...................................... 404
12.8.1 Birth and Death Processes .................................. 406
12.9 Gambler’s Ruin as a Markov Chain ................................... 409
12.10 Chapter Summary .............................................................. 411
12.11 Problems ............................................................................ 411
Section 12.2 Bernoulli Process ...................................... 411
Section 12.3 Random Walk ............................................. 413
Section 12.4 Gaussian Process....................................... 414
Section 12.5 Poisson Process......................................... 415
Section 12.7 Discrete-Time Markov Chains ................... 418
Section 12.8 Continuous-Time Markov Chains .............. 423

APPENDIX................................................................................................ 427
BIBLIOGRAPHY........................................................................................ 429
INDEX ...................................................................................................... 431
Acknowledgment

The first edition of this book was well received by many students and profes-
sors. It had both Indian and Korean editions and received favorable reviews
that include the following: “The book is very clear, with many nice examples
and with mathematical proofs whenever necessary. Author did a good job!
The book is one of the best ones for self-study.” Another comment is the
following:
“This book is written for professional engineers and students who want to do
self-study on probability and random processes. I have spent money and time
in reading several books on this topic and almost all of them are not for self-
study. They lack real world examples and have end-of-chapter problems that
are boring with mathematical proofs. In this book the concepts are explained
by taking real world problems and describing clearly how to model them.
Topics are well-balanced; the depth of material and explanations are very
good, the problem sets are intuitive and they make you to think, which makes
this book unique and suitable for self-study. Topics which are required for
both grad and undergrad courses are covered in this book very well. If you are
reading other books and are stuck when applying a concept to a particular
problem, you should consider this book. Problem solving is the only way to get
familiar with different probability models, and this book fulfills that by taking
problems in the real world examples as opposed to some theoretical proofs.”

These are encouraging reviews of a book that evolved from a course I teach at
the University of Massachusetts, Lowell. I am very grateful to those who wrote
these wonderful unsolicited anonymous reviews in Amazon.com. Their obser-
vations on the structure of the book are precisely what I had in mind in writing
the book.
I want to extend my sincere gratitude to my editor, Paula Callaghan of Elsevier,
who was instrumental in the production of this book. I thank her for the effort
she made to get the petition for the second edition approved. I also want to
thank Jessica Vaughan, the Editorial Project Manager, for ensuring the timely
production of the book.

So many students have used the first edition of this book at UMass Lowell and
have provided useful information that led to more clarity in the presentation of
the material in the book. They are too many to name individually, so I say
“thank you” to all of them as a group.
Finally, I would like to thank my wife, Christina, for patiently bearing with me
while the book was being revised. I would also like to acknowledge the encour-
agement of our children Chidinma, Ogechi, Amanze and Ugonna. As always,
they are a source of joy to me and my wife.
Preface to the Second Edition

Many systems encountered in science and engineering require an understand-


ing of probability concepts because they possess random variations. These
include messages arriving at a switchboard; customers arriving at a restaurant,
movie theatre or a bank; component failure in a system; traffic arrival at a junc-
tion; and transaction requests arriving at a server.
There are several books on probability and random processes. These books
range widely in their coverage and depth. At one extreme are the very rigorous
books that present probability from the point of view of measure theory and
spend so much time proving exotic theorems. At the other extreme are books
that combine probability with statistics without devoting enough time to the
applications of probability. In the middle lies a group of books that combine
probability and random processes. These books avoid the measure theoretic
approach and rather emphasize the axioms upon which the theory is based.
This book belongs to this group and is based on the premise that to the engi-
neer, probability is a modeling tool. Therefore, for an engineering student the
emphasis in a probability and random processes course should be on the
application of probability to the solution of engineering problems. Also, since
some of the engineering problems deal with data analysis, the student should
also be exposed to some knowledge of statistics. However, it is not necessary
for the student to take a separate class on statistics since most of the prereq-
uisites for statistics are covered in a probability course. Thus, this book differs
from other books in the sense that it presents two chapters on the essentials of
statistics.
The book is designed for juniors and seniors, but can also be used at lower grad-
uate levels. It grew out of the author’s fifteen years experience developing and
analyzing probabilistic models of systems in the industry as well as teaching an
introductory course on probability and random processes for over ten years in
two different colleges. The emphasis throughout the book is on the applica-
tions of probability, which are demonstrated through several examples that
deal with real systems. Since many students learn by “doing,” it is suggested that


the students solve the exercises at the end of each chapter. Some mathematical
knowledge is assumed, especially freshman calculus and algebra.
This second edition of the book differs from the first edition in a few ways. First,
the chapters have been slightly rearranged. Specifically, statistics now comes
before random processes to enable students to understand the basic principles
of probability and statistics before studying random processes. Second,
Chapter 11 has been split into two chapters: Chapter 8, which deals with
descriptive statistics; and Chapter 9, which deals with inferential statistics.
Third, the new edition includes more application-oriented examples to enable
students to appreciate the application of probability and random processes in
science, engineering and management. Finally, after teaching the subject every
semester for the past eleven years, I have been able to identify several pain
points that hinder student understanding of probability and random processes,
and I have introduced several new “smart” methods of solving the problems to
help ease the pain.
The book is divided into three parts as follows:

Part 1: Probability and Random Variables, which covers chapters 1 to 7


Part 2: Introduction to Statistics, which covers chapters 8 and 9
Part 3: Basic Random Processes, which covers chapters 10 to 12

A more detailed description of the chapters is as follows. Chapter 1 deals with


basic concepts in probability including sample space and events, elementary set
theory, conditional probability, independent events, basic combinatorial anal-
ysis, and applications of probability.
Chapter 2 discusses random variables including events defined by random vari-
ables, discrete random variables, continuous random variables, cumulative dis-
tribution function, probability mass function of discrete random variables, and
probability distribution function of continuous random variables.
Chapter 3 discusses moments of random variables including the concepts of
expectation and variance, higher moments, conditional expectation, and the
Chebyshev and Markov inequalities.
Chapter 4 discusses special random variables and their distributions. These include
the Bernoulli distribution, binomial distribution, geometric distribution, Pascal
distribution, hypergeometric distribution, Poisson distribution, exponential
distribution, Erlang distribution, uniform distribution, and normal distribution.
Chapter 5 deals with multiple random variables including the joint cumulative
distribution function of bivariate random variables, conditional distributions,
covariance, correlation coefficient, functions of multivariate random variables,
and multinomial distributions.

Chapter 6 deals with functions of random variables including linear and power
functions of one random variable, moments of functions of one random var-
iable, sums of independent random variables, the maximum and minimum of
two independent random variables, two functions of two random variables,
laws of large numbers, the central limit theorem, and order statistics.
Chapter 7 discusses transform methods that are useful in computing moments
of random variables. In particular, it discusses the characteristic function, the
z-transform of the probability mass functions of discrete random variables
and the s-transform of the probability distribution functions of continuous ran-
dom variables.
Chapter 8 presents an introduction to descriptive statistics and discusses such
topics as measures of central tendency, measures of spread, and graphical
displays.
Chapter 9 presents an introduction to inferential statistics and discusses such
topics as sampling theory, estimation theory, hypothesis testing, and linear
regression analysis.
Chapter 10 presents an introduction to random processes. It discusses classifi-
cation of random processes; characterization of random processes including
the autocorrelation function of a random process, autocovariance function,
crosscorrelation function and crosscovariance function; stationary random
processes; ergodic random processes; and power spectral density.
Chapter 11 discusses linear systems with random inputs. It also discusses the
autoregressive moving average process.
Chapter 12 discusses special random processes including the Bernoulli process,
Gaussian process, random walk, Poisson process and Markov process.
The author has tried different formats in presenting the different chapters of the
book. In one particular semester we were able to go through all the chapters except
Chapter 12. However, it was discovered that this put a lot of stress on the students.
Thus, in subsequent semesters an attempt was made to cover all the topics in
Parts 1 and 2 of the book, and a few selections from Part 3. The instructor can
try different formats and adopt the one that works best for him or her.
The beginning of a solved example is indicated by a short line and the end of the
solution is also indicated by a short line. This is to separate the continuation of
a discussion preceding an example from the example just solved.
Preface to First Edition

Many systems encountered in science and engineering require an understand-


ing of probability concepts because they possess random variations. These
include messages arriving at a switchboard; customers arriving at a restaurant,
movie theatre or a bank; component failure in a system; traffic arrival at a junc-
tion; and transaction requests arriving at a server.
There are several books on probability and random processes. These books
range widely in their coverage and depth. At one extreme are the very rigorous
books that present probability from the point of view of measure theory and
spend so much time proving exotic theorems. At the other extreme are books
that combine probability with statistics without devoting enough time to the
applications of probability. In the middle lies a group of books that combine
probability and random processes. These books avoid the measure theoretic
approach and rather emphasize the axioms upon which the theory is based.
This book belongs to this group and is based on the premise that to the engi-
neer, probability is a modeling tool. Therefore, for an engineering student the
emphasis in a probability and random processes course should be on the
application of probability to the solution of engineering problems. Also, since
some of the engineering problems deal with data analysis, the student should
also be exposed to some knowledge of statistics. However, it is not necessary for
the student to take a separate class on statistics since most of the prerequisites
for statistics are covered in a probability course. Thus, this book differs from
other books in the sense that it presents a chapter on the essentials of statistics.
The book is designed for juniors and seniors, but can also be used at lower grad-
uate levels. It grew out of the author’s fifteen years experience developing and
analyzing probabilistic models of systems in the industry as well as teaching an
introductory course on probability and random processes for over four years in
two different colleges. The emphasis throughout the book is on the applica-
tions of probability, which are demonstrated through several examples that
deal with real systems. Since many students learn by “doing,” it is suggested that
the students solve the exercises at the end of each chapter. Some mathematical


knowledge is assumed, especially freshman calculus and algebra. The book is


divided into three parts as follows:

Part 1: Probability and Random Variables, which covers chapters 1 to 7


Part 2: Basic Random Processes, which covers chapters 8 to 11
Part 3: Introduction to Statistics, which covers chapter 12.

A more detailed description of the chapters is as follows. Chapter 1 deals with


basic concepts in probability including sample space and events, elementary set
theory, conditional probability, independent events, basic combinatorial
analysis, and applications of probability.
Chapter 2 discusses random variables including events defined by random vari-
ables, discrete random variables, continuous random variables, cumulative dis-
tribution function, probability mass function of discrete random variables, and
probability distribution function of continuous random variables.
Chapter 3 deals with moments of random variables including the concepts of
expectation and variance, higher moments, conditional expectation, and the
Chebyshev and Markov inequalities.
Chapter 4 discusses special random variables and their distributions. These
include the Bernoulli distribution, binomial distribution, geometric distri-
bution, Pascal distribution, hypergeometric distribution, Poisson distribution,
exponential distribution, Erlang distribution, uniform distribution, and
normal distribution.
Chapter 5 deals with multiple random variables including the joint cumulative
distribution function of bivariate random variables, conditional distributions,
covariance, correlation coefficient, many random variables, and multinomial
distribution.
Chapter 6 deals with functions of random variables including linear and power
functions of one random variable, moments of functions of one random var-
iable, sums of independent random variables, the maximum and minimum of
two independent random variables, two functions of two random variables,
laws of large numbers, the central limit theorem, and order statistics.
Chapter 7 discusses transform methods that are useful in computing moments
of random variables. In particular, it discusses the characteristic function, the
z-transform of the probability mass functions of discrete random variables
and the s-transform of the probability distribution functions of continuous
random variables.
Chapter 8 presents an introduction to random processes. It discusses classifica-
tion of random processes; characterization of random processes including the
autocorrelation function of a random process, autocovariance function,

crosscorrelation function and crosscovariance function; stationary random


processes; ergodic random processes; and power spectral density.
Chapter 9 discusses linear systems with random inputs.
Chapter 10 discusses such specialized random processes as the Gaussian pro-
cess, random walk, Poisson process, and Markov process.
Chapter 11 presents an introduction to statistics and discusses such topics as
sampling theory, estimation theory, hypothesis testing, and linear regression.
The author has tried different formats in presenting the different chapters of the
book. In one particular semester we were able to go through all the chapters.
However, it was discovered that this put a lot of stress on the students. Thus,
in subsequent semesters an attempt was made to cover all the topics in Part 1
of the book, chapters 8 and 9, and a few selections from the other chapters.
The instructor can try different formats and adopt the one that works best for
him or her.
The symbol Δ is used to indicate the end of the solution to an example. This is
to separate the continuation of a discussion preceding an example from the
example just solved.
CHAPTER 1

Basic Probability Concepts

1.1 INTRODUCTION
Probability deals with unpredictability and randomness, and probability the-
ory is the branch of mathematics that is concerned with the study of random
phenomena. A random phenomenon is one that, under repeated observation,
yields different outcomes that are not deterministically predictable. However,
these outcomes obey certain conditions of statistical regularity whereby the
relative frequency of occurrence of the possible outcomes is approximately
predictable. Examples of these random phenomena include the number of
electronic mail (e-mail) messages received by all employees of a company
in one day, the number of phone calls arriving at the university’s switchboard
over a given period, the number of components of a system that fail within a
given interval, and the number of A’s that a student can receive in one
academic year.
According to the preceding definition, the fundamental issue in random phe-
nomena is the idea of a repeated experiment with a set of possible outcomes or
events. Associated with each of these events is a real number called the proba-
bility of the event that is related to the frequency of occurrence of the event in a
long sequence of repeated trials of the experiment. In this way it becomes obvi-
ous that the probability of an event is a value that lies between zero and one,
and the probabilities of the elementary outcomes of a particular experiment
sum to one.
This chapter begins with events associated with a random experiment. Then it
provides different definitions of probability and considers elementary set theory
and algebra of sets. Also, it discusses basic concepts in combinatorial analysis
that will be used in many of the later chapters. Finally, it discusses how probability
is used to compute the reliability of different component configurations in a
system.


1.2 SAMPLE SPACE AND EVENTS


The concepts of experiments and events are very important in the study of
probability. In probability, an experiment is any process of trial and observa-
tion. An experiment whose outcome is uncertain before it is performed is
called a random experiment. When we perform a random experiment, the col-
lection of possible elementary outcomes is called the sample space of the exper-
iment, which is usually denoted by Ω. We define these outcomes as
elementary outcomes because exactly one of the outcomes occurs when the
experiment is performed. The elementary outcomes of an experiment are
called the sample points of the sample space and are denoted by wi,
i = 1, 2, .... If there are n possible outcomes of an experiment, then the sample
space is Ω = {w1, w2, ..., wn}.
An event is the occurrence of either a prescribed outcome or any one of a num-
ber of possible outcomes of an experiment. Thus, an event is a subset of the
sample space. For example, if we toss a die, any number from 1 to 6 may appear.
Therefore, in this experiment the sample space is defined by

Ω = {1, 2, 3, 4, 5, 6}    (1.1)

The event “the outcome of the toss of a die is an even number” is the subset of Ω
and is defined by
E = {2, 4, 6}    (1.2)

For a second example, consider a coin tossing experiment in which each toss
can result in either a head (H) or tail (T). If we toss a coin three times and
let the triplet xyz denote the outcome “x on first toss, y on second toss and z
on third toss,” then the sample space of the experiment is

Ω = {HHH, HHT, HTH, HTT, THH, THT, TTH, TTT}    (1.3)

The event “one head and two tails” is the subset of Ω and is defined by

E = {HTT, THT, TTH}    (1.4)
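
As an aside (a minimal Python sketch, not part of the original text), finite sample spaces and events like these can be enumerated mechanically, which is often a useful sanity check:

```python
from itertools import product

# Sample space of three coin tosses: each outcome is a triplet such as
# 'HTT', as in equation (1.3).
sample_space = [''.join(t) for t in product('HT', repeat=3)]
print(sample_space)  # ['HHH', 'HHT', 'HTH', 'HTT', 'THH', 'THT', 'TTH', 'TTT']

# The event "one head and two tails" is the subset of outcomes containing
# exactly one 'H', as in equation (1.4).
event = [w for w in sample_space if w.count('H') == 1]
print(event)  # ['HTT', 'THT', 'TTH']
```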

Other examples of events are as follows:

• In a single coin toss experiment with sample space Ω = {H, T}, the event
E = {H} is the event that a head appears on the toss and E = {T} is the
event that a tail appears on the toss.
• If we toss a coin twice and let xy denote the outcome “x on first toss and y
on second toss,” where x is head or tail and y is head or tail, then the
sample space is Ω = {HH, HT, TH, TT}. The event E = {HT, TT} is the event
that a tail appears on the second toss.
• If we measure the lifetime of an electronic component, such as a chip, the
sample space consists of all nonnegative real numbers. That is,
Ω = {x | 0 ≤ x < ∞}

The event that the lifetime is not more than 7 hours is defined as follows:
E = {x | 0 ≤ x ≤ 7}

• If we toss a die twice and let the pair (x, y) denote the outcome “x on first
toss and y on second toss,” then the sample space is

Ω = { (1, 1) (1, 2) (1, 3) (1, 4) (1, 5) (1, 6)
      (2, 1) (2, 2) (2, 3) (2, 4) (2, 5) (2, 6)
      (3, 1) (3, 2) (3, 3) (3, 4) (3, 5) (3, 6)        (1.5)
      (4, 1) (4, 2) (4, 3) (4, 4) (4, 5) (4, 6)
      (5, 1) (5, 2) (5, 3) (5, 4) (5, 5) (5, 6)
      (6, 1) (6, 2) (6, 3) (6, 4) (6, 5) (6, 6) }

The event that the sum of the two tosses is 8 is denoted by

E = {(2, 6), (3, 5), (4, 4), (5, 3), (6, 2)}

For any two events A and B of a sample space Ω, we can define the following
new events:

• A ∪ B is the event that consists of all sample points that are either in A or
in B or in both A and B. The event A ∪ B is called the union of events A
and B.
• A ∩ B is the event that consists of all sample points that are in both A and B.
The event A ∩ B is called the intersection of events A and B. Two events
are defined to be mutually exclusive if their intersection does not contain a
sample point; that is, they have no outcomes in common. Events A1, A2,
A3, ... are defined to be mutually exclusive if no two of them have any
outcomes in common.
• A − B = A ∩ B̄ is the event that consists of all sample points that are in A but not
in B. The event A − B (also denoted by A\B) is called the difference of
events A and B. Note that A − B is different from B − A.

The algebra of unions, intersections and differences of events will be discussed
in greater detail when we study set theory later in this chapter.
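
Since events are simply sets of sample points, these operations map directly onto set operations in most programming languages. A brief Python sketch (illustrative only), using two events over the two-dice sample space in equation (1.5):

```python
# Two events over the two-dice sample space of equation (1.5).
A = {(x, y) for x in range(1, 7) for y in range(1, 7) if x + y == 8}  # sum is 8
B = {(x, y) for x in range(1, 7) for y in range(1, 7) if x == y}      # doubles

print(A | B)  # union A ∪ B: sample points in A, in B, or in both
print(A & B)  # intersection A ∩ B: {(4, 4)}
print(A - B)  # difference A − B: sample points in A but not in B
print(B - A)  # note that B − A differs from A − B
```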

1.3 DEFINITIONS OF PROBABILITY


There are several ways to define probability. In this section we consider three
definitions, namely the axiomatic definition, the relative-frequency definition,
and the classical definition.

1.3.1 Axiomatic Definition


Consider a random experiment whose sample space is Ω. For each event A of Ω
we assume that a number P[A], called the probability of event A, is defined such
that the following hold:

1. Axiom 1: 0 ≤ P[A] ≤ 1, which means that the probability of A is some
number between and including 0 and 1.
2. Axiom 2: P[Ω] = 1, which states that with probability 1 the outcome will
be a sample point in the sample space.
3. Axiom 3: For any set of n mutually exclusive events A1, A2, A3, ..., An
defined on the same sample space,

P[A1 ∪ A2 ∪ A3 ∪ ... ∪ An] = P[A1] + P[A2] + P[A3] + ... + P[An]    (1.6)

That is, for any set of mutually exclusive events defined on the same space,
the probability of at least one of these events occurring is the sum of their
respective probabilities.

1.3.2 Relative-Frequency Definition


Consider a random experiment that is performed n times. If an event A occurs
nA times, then the probability of event A, P[A], is defined as follows:
P[A] = lim_{n→∞} (nA / n)    (1.7)

The ratio nA/n is called the relative frequency of event A. While the relative-
frequency definition of probability is intuitively satisfactory for many practical
problems, it has a few limitations. One such limitation is the fact that the exper-
iment may not be repeatable, especially when we are dealing with destructive
testing of expensive and/or scarce resources. Also, the limit may not exist.
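
The relative-frequency definition is easy to explore by simulation. In the short sketch below (a Python illustration, not part of the text), the relative frequency nA/n of the event “a fair die shows a 6” settles near 1/6 as the number of trials n grows:

```python
import random

random.seed(1)  # fix the seed so the run is reproducible

for n in (100, 10_000, 1_000_000):
    n_A = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
    print(f"n = {n:>9}: relative frequency n_A/n = {n_A / n:.4f}")
# The printed ratios approach 1/6 ≈ 0.1667 as n increases.
```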

1.3.3 Classical Definition


In the classical definition, the probability P[A] of an event A is the ratio of the
number of outcomes NA of an experiment that are favorable to A to the total
number N of possible outcomes of the experiment. That is,
P[A] = NA / N    (1.8)

This probability is determined a priori without actually performing the experi-
ment. For example, in a coin toss experiment, there are two possible outcomes:
heads or tails. Thus, N = 2 and if the coin is fair the probability of the event that
the toss comes up heads is 1/2.

EXAMPLE 1.1
Two fair dice are tossed. Find the probability of each of the following events:

a. The sum of the outcomes of the two dice is equal to 7
b. The sum of the outcomes of the two dice is equal to 7 or 11
c. The outcome of the second die is greater than the outcome of the first die
d. Both dice come up with even numbers

Solution:
We first define the sample space of the experiment. If we let the pair (x, y) denote the outcome “first
die comes up x and second die comes up y,” then the sample space is given by equation (1.5). The
total number of sample points is 36. We evaluate the four probabilities using the classical
definition.

a. Let A1 denote the event that the sum of the outcomes of the two dice is equal to seven.
Then A1 = {(1, 6), (2, 5), (3, 4), (4, 3), (5, 2), (6, 1)}. Since the number of sample points in the
event is 6, we have that P[A1] = 6/36 = 1/6.
b. Let B denote the event that the sum of the outcomes of the two dice is either seven or
eleven, let A1 be as defined in part (a), and let A2 denote the event that the sum of the
outcomes of the two dice is eleven. Then, A2 ¼ {(6, 5), (5, 6)} with 2 sample points. Thus,
P[A2] ¼ 2/36 ¼ 1/18. Since B is the union of A1 and A2, which are mutually exclusive events,
we obtain

1 1 2
P½B ¼ P½A1 [ A2  ¼ P½A1  + P½A2  ¼ + ¼
6 18 9

c. Let C denote the event that the outcome of the second die is greater than the outcome of
the first die. Then
( )
ð1, 2Þ,ð1,3Þ,ð1,4Þ,ð1,5Þ,ð1,6Þ,ð2,3Þ,ð2,4Þ,ð2,5Þ,

ð2, 6Þ,ð3,4Þ,ð3,5Þ,ð3,6Þ,ð4,5Þ,ð4,6Þ,ð5,6Þ

with 15 sample points. Thus, P[C] ¼ 15/36 ¼ 5/12.


d. Let D denote the event that both dice come up with even numbers. Then

D ¼ fð2, 2Þ, ð2, 4Þ, ð2, 6Þ, ð4, 2Þ, ð4, 4Þ, ð4, 6Þ, ð6, 2Þ, ð6, 4Þ, ð6, 6Þg

with 9 sample points. Thus, P[D] ¼ 9/36 ¼ 1/4.


Note that the problem can also be solved by considering a two-dimensional display of the state
space, as shown in Figure 1.1. The figure shows the different events just defined. The sample
points in event D are spread over the entire sample space. Therefore, the event D is not shown
in Figure 1.1.
[Figure 1.1: Sample Space for Example 1.1. The 36 outcomes (x, y) are displayed as a 6 × 6 grid, with the events A1 (sum equals 7), A2 (sum equals 11), and C (second outcome greater than the first) marked on the grid.]
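
Because the classical definition reduces to counting, Example 1.1 can also be checked by enumeration. The sketch below (an illustration, not part of the text; the names are our own) builds the 36-point sample space and counts favorable outcomes for each event.

```python
from fractions import Fraction

# Sample space: all 36 equally likely outcomes (x, y) of two fair dice
omega = [(x, y) for x in range(1, 7) for y in range(1, 7)]

def prob(event):
    # Classical definition: P[A] = N_A / N for equally likely outcomes
    return Fraction(sum(1 for s in omega if event(s)), len(omega))

print(prob(lambda s: s[0] + s[1] == 7))                 # A1: 1/6
print(prob(lambda s: s[0] + s[1] in (7, 11)))           # B:  2/9
print(prob(lambda s: s[1] > s[0]))                      # C:  5/12
print(prob(lambda s: s[0] % 2 == 0 and s[1] % 2 == 0))  # D:  1/4
```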

1.4 APPLICATIONS OF PROBABILITY


There are several science and engineering applications of probability. Some of
these applications are as follows:

1.4.1 Information Theory


Information theory is a branch of mathematics that is concerned with two fun-
damental questions in communication theory: (a) how we can define and
quantify information, and (b) the maximum information that can be sent
through a communication channel. It has made fundamental contributions
not only in communication systems but also in thermodynamics, computer sci-
ence and statistical inference. A communication system is modeled by a source
of messages, which may be a person or machine producing the message to be
transmitted; a channel, which is the medium over which the message is trans-
mitted; and a sink or destination that absorbs the message. While on the chan-
nel the message can be corrupted by noise. A model of a communication system
is illustrated in Figure 1.2.
[Figure 1.2: Model of a Communication System. A source sends a message through a channel, where noise may corrupt it, to a sink.]

The message generated by the source conveys some information. One of the
objectives of information theory is to quantify the information content of mes-
sages. This quantitative measure enables communication system designers to
provision the channel that can support the message. A good measure of the
information content of a message is the probability of occurrence of the mes-
sage: The higher the probability of occurrence of a message, the less informa-
tion it conveys; and the smaller the probability of occurrence of a message, the
greater its information content. For example, the message “the temperature is
90o F in the northeastern part of the United States in December” has more
information content than the message “the temperature is 10o F in the north-
eastern part of the United States in December” because the second message is
more likely to occur than the first.
Thus, information theory uses the probability of occurrence of events to convey
information about those events. Specifically, let P[A] denote the probability of
occurrence of some event A. Then the information content of A, I(A), is given by

I(A) = log₂(1/P[A]) = log₂(1) − log₂(P[A]) = 0 − log₂(P[A]) = −log₂(P[A])

From the preceding equation we observe that the greater the probability that an event will occur, the smaller the information content of the event, as we stated earlier. If event A is certain to occur, then P[A] = 1 and I(A) = −log₂(1) = 0. Similarly, the smaller the probability that an event will occur, the greater is the information content of the event. In particular, if event A is certain not to occur, P[A] = 0 and I(A) = −log₂(0) = ∞. Thus, when an event that is not expected to occur does actually occur, its information content is infinite.
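
As a small numerical illustration (ours, not the author's), the information content I(A) = −log₂(P[A]) can be computed directly; note how rarer events carry more information.

```python
import math

def information_content(p):
    # I(A) = -log2(P[A]); tends to infinity as P[A] approaches 0
    if p == 0:
        return math.inf
    return -math.log2(p)

print(information_content(1.0))   # certain event: 0
print(information_content(0.5))   # fair coin toss: 1
print(information_content(0.01))  # rare event: about 6.64
```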

1.4.2 Reliability Engineering


Reliability theory is concerned with the duration of the useful life of compo-
nents and systems of components. System failure times are unpredictable.
Thus, the time until a system fails, which is referred to as the time to failure
of the system, is usually modeled by a probabilistic function. Reliability appli-
cations of probability are considered later in this chapter.

1.4.3 Quality Control


Quality control deals with the inspection of finished products to ensure that
they meet the desired requirements and specifications. One way to perform
the quality control function is to physically test/inspect each product as it
comes off the production line. However, this is a very costly way to do it.
The practical method is to randomly select a sample of the product from a
lot and test each item in the sample. A decision to declare the lot good or
defective is thus based on the outcome of the test of the items of the sample.
This decision is itself based on a well-designed policy that guarantees that a
good lot is rejected with a very small probability and that a bad lot is accepted

with a very small probability. A lot is considered good if the parameter


that characterizes the quality of the sample has a value that exceeds a prede-
fined threshold value. Similarly the lot is considered to be defective if the
parameter that characterizes the quality of the sample has a value that is smal-
ler than the predefined threshold value. For example, one rule for acceptance
of a lot can be that the number of defective items in the selected sample be less
than some predefined fraction of the sample; otherwise the lot is declared
defective.

1.4.4 Channel Noise


Noise is an unwanted signal. In Figure 1.2, the message transmitted from the
source passes through a channel where it is subject to different kinds of ran-
dom disturbances that can introduce errors in the message received at the
sink. That is, channel noise corrupts messages. Thus, in modeling a commu-
nication system, the effect of noise must be taken into consideration. Since
channel noise is a random phenomenon, one of the performance issues is
the probability that a received message is not corrupted by noise. Thus, prob-
ability plays an important role in evaluating the performance of noisy com-
munication channels.

1.4.5 System Simulation


Sometimes it is difficult to provide an exact solution of physical problems
involving random phenomena. The difficulty arises from the fact that such
problems are very complex, which is the case, for example, when a system
has unusual properties. One way to deal with these problems is to provide
an approximate solution, which attempts to make simplifying assumptions
that enable the problem to be solved analytically. Another method is to use
computer simulation, which imitates the physical process. Even when an
approximate solution is obtained, it is always advisable to use simulation to
validate the assumptions.
A simulation model describes the operation of a system in terms of individual
events of the individual elements in the system. The model includes the inter-
relationships among the different elements and allows the effects of the ele-
ments’ actions on each other to be captured as a dynamic process.
The key to a simulation model is the generation of random numbers that can be
used to represent events, such as arrival of customers at a bank, in the system
being modeled. Because these events are random in nature, the random num-
bers are used to drive the probability distributions that characterize them. Thus,
knowledge of probability theory is essential for a meaningful simulation
analysis.

1.5 ELEMENTARY SET THEORY


A set is a collection of objects known as elements. The events that we discussed
earlier in this chapter are usually modeled as sets, and the algebra of sets is used
to study events. A set can be represented in a number of ways as the following
examples illustrate.
Let A denote the set of positive integers between and including 1 and 5. Then,

A = {a | 1 ≤ a ≤ 5} = {1, 2, 3, 4, 5}

Similarly, let B denote the set of positive odd numbers less than 10. Then

B = {1, 3, 5, 7, 9}

If k is an element of the set E, we say that k belongs to (or is a member of) E and write k ∈ E. If k is not an element of the set E, we say that k does not belong to (or is not a member of) E and write k ∉ E.

A set A is called a subset of set B, denoted by A ⊂ B, if every member of A is a member of B. Alternatively, we say that the set B contains the set A by writing B ⊃ A.

The set that contains all possible elements is called the universal set Ω. The set that contains no elements (or is empty) is called the null set Ø (or empty set). That is,

Ø = { }

1.5.1 Set Operations


Equality: Two sets A and B are defined to be equal, denoted by A = B, if and only if (iff) A is a subset of B and B is a subset of A; that is, A ⊂ B and B ⊂ A.

Complementation: Let A ⊂ Ω. The complement of A, denoted by Ā, is the set containing all elements of Ω that are not in A. That is,

Ā = {k | k ∈ Ω and k ∉ A}

EXAMPLE 1.2
Let Ω = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}, A = {1, 2, 4, 7} and B = {1, 3, 4, 6}. Then Ā = {3, 5, 6, 8, 9, 10} and B̄ = {2, 5, 7, 8, 9, 10}.

Union: The union of two sets A and B, denoted by A ∪ B, is the set containing all the elements of either A or B or both A and B. That is,

A ∪ B = {k | k ∈ A or k ∈ B}

In Example 1.2, A ∪ B = {1, 2, 3, 4, 6, 7}. Note that if an element is a member of both sets A and B, it is listed only once in A ∪ B.

Intersection: The intersection of two sets A and B, denoted by A ∩ B, is the set containing all the elements that are in both A and B. That is,

A ∩ B = {k | k ∈ A and k ∈ B}

In Example 1.2, A ∩ B = {1, 4}.

Difference: The difference of two sets A and B, denoted by A − B (also A\B), is the set containing all elements of A that are not in B. That is,

A − B = {k | k ∈ A and k ∉ B}

Note that A − B ≠ B − A. From Example 1.2 we find that A − B = {2, 7} while B − A = {3, 6}. A − B contains the elements of set A that are not in set B while B − A contains the elements of set B that are not in set A.

Disjoint Sets: Two sets A and B are called disjoint (or mutually exclusive) sets if they contain no elements in common, which means that A ∩ B = Ø.
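
These operations map directly onto Python's built-in set type. The sketch below (illustrative) reproduces the results of Example 1.2.

```python
omega = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10}  # universal set of Example 1.2
A = {1, 2, 4, 7}
B = {1, 3, 4, 6}

print(omega - A)             # complement of A: {3, 5, 6, 8, 9, 10}
print(A | B)                 # union A ∪ B: {1, 2, 3, 4, 6, 7}
print(A & B)                 # intersection A ∩ B: {1, 4}
print(A - B)                 # difference A − B: {2, 7}
print(B - A)                 # difference B − A: {3, 6}
print(A.isdisjoint({5, 8}))  # True: the two sets have no common element
```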

1.5.2 Number of Subsets of a Set


Let a set A contain n elements labeled a1, a2, ..., an. The number of possible subsets of A is 2ⁿ, which can be obtained as follows for the case of n = 3. The eight subsets are given by {ā1, ā2, ā3} = Ø, {a1, ā2, ā3}, {ā1, a2, ā3}, {ā1, ā2, a3}, {a1, a2, ā3}, {a1, ā2, a3}, {ā1, a2, a3}, and {a1, a2, a3} = A, where āk indicates that the element ak is not included. By convention, if ak is not an element of a subset, its complement is not explicitly included in the subset. Thus, the subsets are Ø, {a1}, {a2}, {a3}, {a1, a2}, {a1, a3}, {a2, a3}, {a1, a2, a3} = A. Since the number of subsets includes the null set, the number of subsets that contain at least one element is 2ⁿ − 1. The result can be extended to the case of n > 3.

The set of all subsets of a set A is called the power set of A and is denoted by s(A). Thus, for the set A = {a1, a2, a3}, the power set of A is given by

s(A) = {Ø, {a1}, {a2}, {a3}, {a1, a2}, {a1, a3}, {a2, a3}, {a1, a2, a3}}

The number of members of a set A is called the cardinality of A and is denoted by |A|. Thus, if the cardinality of the set A is n, then the cardinality of the power set of A is |s(A)| = 2ⁿ.
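
The 2ⁿ count reflects a binary choice for each element: it is either in a subset or out of it. The following sketch (illustrative) generates the power set of a three-element set and confirms its cardinality.

```python
from itertools import combinations

def power_set(elements):
    # Collect all subsets of sizes 0 through n; there are 2**n of them
    subsets = []
    for r in range(len(elements) + 1):
        subsets.extend(combinations(elements, r))
    return subsets

A = ["a1", "a2", "a3"]
s_A = power_set(A)
print(len(s_A))   # 8 = 2**3, the cardinality of the power set
print(s_A[0])     # () is the null set
print(s_A[-1])    # ('a1', 'a2', 'a3') is the set A itself
```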

1.5.3 Venn Diagram


The different set operations discussed in the previous section can be graphically represented by the Venn diagram. Figure 1.3 illustrates the complementation, union, intersection, and difference operations on two sets A and B. The universal set is represented by the set of points inside a rectangle. The sets A and B are represented by the sets of points inside oval objects.

[Figure 1.3: Venn Diagrams of Different Set Operations, showing Ā, A ∪ B, A ∩ B, A − B, and A ⊂ B.]

1.5.4 Set Identities


The operations of forming unions, intersections and complements of sets obey certain rules similar to the rules of algebra. These rules include the following:

• Commutative law for unions: A ∪ B = B ∪ A, which states that the order of the union operation on two sets is immaterial.
• Commutative law for intersections: A ∩ B = B ∩ A, which states that the order of the intersection operation on two sets is immaterial.
• Associative law for unions: A ∪ (B ∪ C) = (A ∪ B) ∪ C, which states that in performing the union operation on three sets, we can proceed in two ways: We can first perform the union operation on the first two sets to obtain an intermediate result and then perform the operation on the result and the third set. The same result is obtained if we first perform the operation on the last two sets and then perform the operation on the first set and the result obtained from the operation on the last two sets.
• Associative law for intersections: A ∩ (B ∩ C) = (A ∩ B) ∩ C, which states that in performing the intersection operation on three sets, we can proceed in two ways: We can first perform the intersection operation on the first two sets to obtain an intermediate result and then perform the operation on the result and the third set. The same result is obtained if we first perform the operation on the last two sets and then perform the operation on the first set and the result obtained from the operation on the last two sets.
• First distributive law: A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C), which states that the intersection of a set A and the union of two sets B and C is equal to the union of the intersection of A and B and the intersection of A and C. This law can be extended as follows:

A ∩ (B1 ∪ B2 ∪ ... ∪ Bn) = (A ∩ B1) ∪ (A ∩ B2) ∪ ... ∪ (A ∩ Bn)

• Second distributive law: A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C), which states that the union of a set A and the intersection of two sets B and C is equal to the intersection of the union of A and B and the union of A and C. The law can also be extended as follows:

A ∪ (B1 ∩ B2 ∩ ... ∩ Bn) = (A ∪ B1) ∩ (A ∪ B2) ∩ ... ∩ (A ∪ Bn)   (1.9)

• De Morgan's first law: (A ∪ B)̄ = Ā ∩ B̄, which states that the complement of the union of two sets is equal to the intersection of the complements of the sets. The law can be extended to include more than two sets as follows:

(A1 ∪ A2 ∪ ... ∪ An)̄ = Ā1 ∩ Ā2 ∩ ... ∩ Ān   (1.10)

• De Morgan's second law: (A ∩ B)̄ = Ā ∪ B̄, which states that the complement of the intersection of two sets is equal to the union of the complements of the sets. The law can also be extended to include more than two sets as follows:

(A1 ∩ A2 ∩ ... ∩ An)̄ = Ā1 ∪ Ā2 ∪ ... ∪ Ān   (1.11)

• Other identities include the following:
  • A − B = A ∩ B̄, which states that the difference of A and B is equal to the intersection of A and the complement of B.
  • A ∪ Ω = Ω, which states that the union of A and the universal set Ω is equal to Ω.
  • A ∩ Ω = A, which states that the intersection of A and the universal set Ω is equal to A.
  • A ∪ Ø = A, which states that the union of A and the null set is equal to A.
  • A ∩ Ø = Ø, which states that the intersection of A and the null set is equal to the null set.
  • Ω̄ = Ø, which states that the complement of the universal set is equal to the null set.
  • For any two sets A and B, A = (A ∩ B) ∪ (A ∩ B̄), which states that the set A is equal to the union of the intersection of A and B and the intersection of A and the complement of B.
The way to prove these identities is to show that any point contained in the
event on the left side of the equality is also contained in the event on the right
side, and vice versa.
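
For small finite sets, such identities can also be checked exhaustively. The sketch below (illustrative) verifies De Morgan's first law on the sets of Example 1.2.

```python
omega = set(range(1, 11))        # universal set of Example 1.2
A = {1, 2, 4, 7}
B = {1, 3, 4, 6}

lhs = omega - (A | B)            # complement of the union
rhs = (omega - A) & (omega - B)  # intersection of the complements
print(lhs == rhs)                # True: De Morgan's first law holds
```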

1.5.5 Duality Principle


The duality principle states that any true result involving sets is also true when we replace unions by intersections, intersections by unions, sets by their complements, and if we reverse the inclusion symbols ⊂ and ⊃. For example, if we replace the union in the first distributive law with intersection and intersection with union, we obtain the second distributive law, and vice versa. The same result holds for the two De Morgan's laws.

1.6 PROPERTIES OF PROBABILITY


We now combine the results of set identities with those of the axiomatic definition of probability. (See Section 1.3.1.) From these two sections we obtain the following results:

1. P[Ā] = 1 − P[A], which states that the probability of the complement of A is one minus the probability of A.
2. P[Ø] = 0, which states that the impossible (or null) event has probability zero.
3. If A ⊂ B, then P[A] ≤ P[B]. That is, if A is a subset of B, the probability of A is at most the probability of B (or the probability of A cannot exceed the probability of B).
4. P[A] ≤ 1, which means that the probability of event A is at most 1. This follows from the fact that A ⊂ Ω, and since P[Ω] = 1 the previous result holds.
5. If A = A1 ∪ A2 ∪ ... ∪ An, where A1, A2, ..., An are mutually exclusive events, then

P[A] = P[A1] + P[A2] + ... + P[An]

6. For any two events A and B, P[A] = P[A ∩ B] + P[A ∩ B̄], which follows from the set identity A = (A ∩ B) ∪ (A ∩ B̄). Since A ∩ B and A ∩ B̄ are mutually exclusive events, the result follows.
7. For any two events A and B, P[A ∪ B] = P[A] + P[B] − P[A ∩ B]. This result can be proved by making use of the Venn diagram. Figure 1.4a represents a Venn diagram in which the left circle represents event A and the right circle represents event B. In Figure 1.4b we divide the diagram into three mutually exclusive sections labeled I, II, and III, where section I represents all points in A that are not in B, section II represents all points in both A and B, and section III represents all points in B that are not in A.

[Figure 1.4: Venn Diagram of A ∪ B. Panel (a) shows two overlapping circles representing events A and B; panel (b) shows the same diagram divided into the mutually exclusive sections I, II, and III.]

From Figure 1.4b, we observe that:

A ∪ B = I ∪ II ∪ III
A = I ∪ II
B = II ∪ III

Since I, II and III are mutually exclusive, Property 5 implies that

P[A ∪ B] = P[I] + P[II] + P[III]
P[A] = P[I] + P[II] ⇒ P[I] = P[A] − P[II]
P[B] = P[II] + P[III] ⇒ P[III] = P[B] − P[II]

Thus, we have that

P[A ∪ B] = P[A] − P[II] + P[II] + P[B] − P[II]
         = P[A] + P[B] − P[II] = P[A] + P[B] − P[A ∩ B]   (1.12)

8. We can extend Property 7 to the case of three events. If A1, A2 and A3 are three events in Ω, then

P[A1 ∪ A2 ∪ A3] = P[A1] + P[A2] + P[A3] − P[A1 ∩ A2] − P[A1 ∩ A3] − P[A2 ∩ A3] + P[A1 ∩ A2 ∩ A3]   (1.13)

This can be further generalized to the case of n arbitrary events in Ω as follows:

P[A1 ∪ A2 ∪ ... ∪ An] = Σ P[Ai] − Σ P[Ai ∩ Aj] + Σ P[Ai ∩ Aj ∩ Ak] − ...   (1.14)

where the first sum is over all 1 ≤ i ≤ n, the second over all 1 ≤ i < j ≤ n, and the third over all 1 ≤ i < j < k ≤ n. That is, to find the probability that at least one of the n events occurs, first add the probability of each event, then subtract the probabilities of all possible two-way intersections, then add the probabilities of all possible three-way intersections, and so on.
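
For events over a finite, equally likely sample space, equation (1.14) can be evaluated mechanically by summing over all non-empty collections of events with alternating signs. The sketch below (illustrative; the example events are our own) checks the result against a direct count.

```python
from itertools import combinations
from fractions import Fraction

def prob_union(omega, events):
    # Inclusion-exclusion: add single-event probabilities, subtract
    # two-way intersections, add three-way intersections, and so on.
    total = Fraction(0)
    for r in range(1, len(events) + 1):
        for group in combinations(events, r):
            inter = set.intersection(*group)
            total += (-1) ** (r + 1) * Fraction(len(inter), len(omega))
    return total

omega = set(range(1, 7))                        # one roll of a fair die
A1, A2, A3 = {1, 2, 3}, {2, 4}, {3, 4, 5}
print(prob_union(omega, [A1, A2, A3]))          # 5/6
print(Fraction(len(A1 | A2 | A3), len(omega)))  # direct count: 5/6
```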

1.7 CONDITIONAL PROBABILITY


Consider the following experiment. We are interested in the sum of the num-
bers that appear when two dice are tossed. Suppose we are interested in the

event that the sum of the two tosses is 7 and we observe that the first toss is 4.
Based on this fact, the six possible and equally likely outcomes of the two tosses
are (4, 1), (4, 2), (4, 3), (4, 4), (4, 5) and (4, 6). In the absence of the information
that the first toss is 4, there would have been 36 sample points in the sample
space. But with the information on the outcome of the first toss, there are now
only 6 sample points.
Let A denote the event that the sum of the two dice is 7, and let B denote the event that the first die is 4. The conditional probability of event A given event B, denoted by P[A|B], is defined by

P[A|B] = P[A ∩ B]/P[B],  P[B] ≠ 0   (1.15)

Thus, for the preceding problem we have that

P[A|B] = P[(4, 3)] / {P[(4, 1)] + P[(4, 2)] + P[(4, 3)] + P[(4, 4)] + P[(4, 5)] + P[(4, 6)]}
       = (1/36)/(6/36) = 1/6

EXAMPLE 1.3
A bag contains 8 red balls, 4 green, and 8 yellow balls. A ball is drawn at random from the bag and
it is found not to be one of the red balls. What is the probability that it is a green ball?

Solution:
Let G denote the event that the selected ball is a green ball and let R̄ denote the event that it is not a red ball. Then, P[G] = 4/20 = 1/5 since there are 4 green balls out of a total of 20 balls, and P[R̄] = 12/20 = 3/5 since there are 12 balls out of 20 that are not red. Now,

P[G|R̄] = P[G ∩ R̄]/P[R̄]

But if the ball is not red and is green, it must be green. Thus, we have that G ∩ R̄ = G because G is a subset of R̄. Thus,

P[G|R̄] = P[G ∩ R̄]/P[R̄] = P[G]/P[R̄] = (1/5)/(3/5) = 1/3

EXAMPLE 1.4
A fair coin was tossed two times. Given that the first toss resulted in heads, what is the probability
that both tosses resulted in heads?

Solution:
Because the coin is fair, the four sample points of the sample space Ω = {HH, HT, TH, TT} are equally likely. Let X denote the event that both tosses came up heads; that is, X = {HH}. Let Y denote the event that the first toss came up heads; that is, Y = {HH, HT}. Because X is a subset of Y, the probability that both tosses resulted in heads, given that the first toss resulted in heads, is given by

P[X|Y] = P[X ∩ Y]/P[Y] = P[X]/P[Y] = (1/4)/(2/4) = 1/2
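
A conditional probability can also be estimated by simulation: keep only the trials in which the conditioning event occurred and compute a relative frequency within them. The sketch below (illustrative; the trial count and seed are arbitrary choices) estimates P[X|Y] for Example 1.4.

```python
import random

rng = random.Random(7)
n_y = 0    # trials where the first toss is heads (event Y)
n_xy = 0   # trials where both tosses are heads (event X within Y)
for _ in range(100_000):
    first = rng.random() < 0.5   # True means heads on a fair coin
    second = rng.random() < 0.5
    if first:
        n_y += 1
        if second:
            n_xy += 1
print(n_xy / n_y)  # close to P[X|Y] = 1/2
```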

1.7.1 Total Probability and the Bayes’ Theorem


A partition of a set A is a set {A1, A2, ..., An} with the following properties:

a. Ai ⊆ A, i = 1, 2, ..., n, which means that A is a set of subsets.
b. Ai ∩ Ak = Ø, i = 1, 2, ..., n; k = 1, 2, ..., n; i ≠ k, which means that the subsets are mutually (or pairwise) disjoint; that is, no two subsets have any element in common.
c. A1 ∪ A2 ∪ ... ∪ An = A, which means that the subsets are collectively exhaustive. That is, the subsets together include all possible values of the set A.

Proposition 1.1
Let {A1, A2, ..., An} be a partition of the sample space Ω, and suppose each one of the events A1, A2, ..., An has nonzero probability of occurrence. Let B be any event defined in Ω. Then

P[B] = P[B|A1]P[A1] + P[B|A2]P[A2] + ... + P[B|An]P[An] = Σᵢ₌₁ⁿ P[B|Ai]P[Ai]

Proof
The proof is based on the observation that because {A1, A2, ..., An} is a partition of Ω, the set {B ∩ A1, B ∩ A2, ..., B ∩ An} is a partition of the event B because if B occurs, then it must occur in conjunction with one of the Ai's. Thus, we can express B as the union of n mutually exclusive events. That is,

B = (B ∩ A1) ∪ (B ∩ A2) ∪ ... ∪ (B ∩ An)

Since these events are mutually exclusive, we obtain

P[B] = P[B ∩ A1] + P[B ∩ A2] + ... + P[B ∩ An]

From our definition of conditional probability, P[B ∩ Ai] = P[B|Ai]P[Ai], which exists because we assumed in the Proposition that the events A1, A2, ..., An have nonzero probabilities. Substituting the definition of conditional probabilities we obtain the desired result:

P[B] = P[B|A1]P[A1] + P[B|A2]P[A2] + ... + P[B|An]P[An]

The above result is defined as the law of total probability of event B, which will be useful in the remainder of the book.

EXAMPLE 1.5
A student buys 1000 chips from supplier A, 2000 chips from supplier B, and 3000 chips from sup-
plier C. He tested the chips and found that the probability that a chip is defective depends on the
supplier from where it was bought. Specifically, given that a chip came from supplier A, the prob-
ability that it is defective is 0.05; given that a chip came from supplier B, the probability that it is
defective is 0.10; and given that a chip came from supplier C, the probability that it is defective is
0.10. If the chips from the three suppliers are mixed together and one of them is selected at ran-
dom, what is the probability that it is defective?

Solution:
Let P[A], P[B] and P[C] denote the probability that a randomly selected chip came from supplier A, B and C respectively. Also, let P[D|A] denote the conditional probability that a chip is defective, given that it came from supplier A; let P[D|B] denote the conditional probability that a chip is defective, given that it came from supplier B; and let P[D|C] denote the conditional probability that a chip is defective, given that it came from supplier C. Then the following are true:

P[D|A] = 0.05
P[D|B] = 0.10
P[D|C] = 0.10
P[A] = 1000/(1000 + 2000 + 3000) = 1/6
P[B] = 2000/(1000 + 2000 + 3000) = 1/3
P[C] = 3000/(1000 + 2000 + 3000) = 1/2

Let P[D] denote the unconditional probability that a randomly selected chip is defective. Then, from the law of total probability of D we have that

P[D] = P[D|A]P[A] + P[D|B]P[B] + P[D|C]P[C]
     = (0.05)(1/6) + (0.10)(1/3) + (0.10)(1/2)
     = 0.09167

We now go back to the general discussion. Suppose event B has occurred but we do not know which of the mutually exclusive and collectively exhaustive events A1, A2, ..., An holds true. The conditional probability that event Ak occurred, given that B occurred, is given by

P[Ak|B] = P[Ak ∩ B]/P[B] = P[Ak ∩ B] / Σᵢ₌₁ⁿ P[B|Ai]P[Ai]

where the second equality follows from the law of total probability of event B. Since P[Ak ∩ B] = P[B|Ak]P[Ak], the above equation can be rewritten as follows:

P[Ak|B] = P[Ak ∩ B]/P[B] = P[B|Ak]P[Ak] / Σᵢ₌₁ⁿ P[B|Ai]P[Ai]   (1.16)

This result is called the Bayes' formula (or Bayes' rule).

EXAMPLE 1.6
In Example 1.5, given that a randomly selected chip is defective, what is the probability that it came
from supplier A?

Solution:
Using the same notation as in Example 1.5, the probability that the randomly selected chip came from supplier A, given that it is defective, is given by

P[A|D] = P[D ∩ A]/P[D] = P[D|A]P[A] / {P[D|A]P[A] + P[D|B]P[B] + P[D|C]P[C]}
       = (0.05)(1/6) / {(0.05)(1/6) + (0.10)(1/3) + (0.10)(1/2)}
       = 0.0909
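
Once the priors and the conditional probabilities are tabulated, the total-probability computation of Example 1.5 and the posterior of Example 1.6 become short sums. The sketch below (illustrative) reproduces both numbers.

```python
priors = {"A": 1/6, "B": 1/3, "C": 1/2}       # P[supplier]
p_defect = {"A": 0.05, "B": 0.10, "C": 0.10}  # P[D | supplier]

# Law of total probability: P[D] = sum of P[D|s]P[s] over suppliers s
p_d = sum(p_defect[s] * priors[s] for s in priors)
print(p_d)  # about 0.09167

# Bayes' formula: P[A|D] = P[D|A]P[A] / P[D]
print(p_defect["A"] * priors["A"] / p_d)  # about 0.0909
```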

EXAMPLE 1.7
(The Binary Symmetric Channel) A discrete channel is characterized by an input alphabet X = {x1, x2, ..., xn}; an output alphabet Y = {y1, y2, ..., ym}; and a set of conditional probabilities (called transition probabilities), Pij, which are defined as follows:

Pij = P[yj|xi] = P[receiving symbol yj, given that symbol xi was transmitted]

for i = 1, 2, ..., n; j = 1, 2, ..., m. The binary channel is a special case of the discrete channel where n = m = 2. It can be represented as shown in Figure 1.5.

[Figure 1.5: The Binary Channel. Input x1 maps to output y1 with probability P11 = P[y1|x1] and to y2 with P12 = P[y2|x1]; input x2 maps to y1 with P21 = P[y1|x2] and to y2 with P22 = P[y2|x2].]
[Figure 1.6: The Binary Symmetric Channel for Example 1.7, with P[x1] = 0.6, P[x2] = 0.4, and transition probabilities P11 = P22 = 0.9 and P12 = P21 = 0.1.]

In the binary channel, an error occurs if y2 is received when x1 is transmitted or y1 is received when x2 is transmitted. Thus, the probability of error, Pe, is given by:

Pe = P[x1 ∩ y2] + P[x2 ∩ y1] = P[y2|x1]P[x1] + P[y1|x2]P[x2]
   = P[x1]P12 + P[x2]P21   (1.17)

If P12 = P21, we say that the channel is a binary symmetric channel (BSC). Also, if in the BSC P[x1] = p, then P[x2] = 1 − p = q.

Consider the BSC shown in Figure 1.6, with P[x1] = 0.6 and P[x2] = 0.4. Compute the following:

a. The probability that x1 was transmitted, given that y2 was received


b. The probability that x2 was transmitted, given that y1 was received
c. The probability that x1 was transmitted, given that y1 was received
d. The probability that x2 was transmitted, given that y2 was received
e. The unconditional probability of error

Solution:
Let P[y1] denote the probability that y1 was received and P[y2] the probability that y2 was received.
Then

a. The probability that x1 was transmitted, given that y2 was received, is given by

P[x1|y2] = P[x1 ∩ y2]/P[y2] = P[y2|x1]P[x1] / {P[y2|x1]P[x1] + P[y2|x2]P[x2]}
         = (0.1)(0.6) / {(0.1)(0.6) + (0.9)(0.4)}
         = 0.143

b. The probability that x2 was transmitted, given that y1 was received, is given by

P[x2|y1] = P[x2 ∩ y1]/P[y1] = P[y1|x2]P[x2] / {P[y1|x1]P[x1] + P[y1|x2]P[x2]}
         = (0.1)(0.4) / {(0.9)(0.6) + (0.1)(0.4)}
         = 0.069

c. The probability that x1 was transmitted, given that y1 was received, is given by

P[x1|y1] = P[x1 ∩ y1]/P[y1] = P[y1|x1]P[x1] / {P[y1|x1]P[x1] + P[y1|x2]P[x2]}
         = (0.9)(0.6) / {(0.9)(0.6) + (0.1)(0.4)}
         = 0.931 = 1 − P[x2|y1]

d. The probability that x2 was transmitted, given that y2 was received, is given by

P[x2|y2] = P[x2 ∩ y2]/P[y2] = P[y2|x2]P[x2] / {P[y2|x1]P[x1] + P[y2|x2]P[x2]}
         = (0.9)(0.4) / {(0.1)(0.6) + (0.9)(0.4)}
         = 0.857 = 1 − P[x1|y2]

e. The unconditional probability of error is given by

Pe = P[x1]P[y2|x1] + P[x2]P[y1|x2] = P[x1]P12 + P[x2]P21 = (0.6)(0.1) + (0.4)(0.1) = 0.1
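
All four posteriors in Example 1.7 follow the same Bayes pattern, so they can be computed in a loop. The sketch below (illustrative) tabulates the channel of Figure 1.6 and prints P[xi|yj] for every input/output pair, together with the unconditional error probability.

```python
p_x = {"x1": 0.6, "x2": 0.4}   # input priors
p_y_given_x = {                # transition probabilities of Figure 1.6
    ("y1", "x1"): 0.9, ("y2", "x1"): 0.1,
    ("y1", "x2"): 0.1, ("y2", "x2"): 0.9,
}

for y in ("y1", "y2"):
    # Law of total probability for the received symbol y
    p_y = sum(p_y_given_x[(y, x)] * p_x[x] for x in p_x)
    for x in p_x:
        posterior = p_y_given_x[(y, x)] * p_x[x] / p_y  # Bayes' formula
        print(f"P[{x}|{y}] = {posterior:.3f}")

# Unconditional error probability: P[x1]P12 + P[x2]P21
print(0.6 * 0.1 + 0.4 * 0.1)  # 0.1
```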

EXAMPLE 1.8
The quarterback for a certain football team has a good game with probability 0.6 and a bad
game with probability 0.4. When he has a good game, he throws an interception with a probability
of 0.2; and when he has a bad game, he throws an interception with a probability of 0.5. Given that he
threw an interception in a particular game, what is the probability that he had a good game?

Solution:
Let G denote the event that the quarterback has a good game and B the event that he has a bad game. Similarly, let I denote the event that he throws an interception. Then we have that

P[G] = 0.6
P[B] = 0.4
P[I|G] = 0.2
P[I|B] = 0.5
P[G|I] = P[G ∩ I]/P[I]

According to the Bayes' formula, the last equation becomes

P[G|I] = P[G ∩ I]/P[I] = P[I|G]P[G] / {P[I|G]P[G] + P[I|B]P[B]}
       = (0.2)(0.6) / {(0.2)(0.6) + (0.5)(0.4)} = 0.12/0.32
       = 0.375

EXAMPLE 1.9
Two events A and B are such that P[A ∩ B] = 0.15, P[A ∪ B] = 0.65, and P[A|B] = 0.5. Find P[B|A].

Solution:
P[A ∪ B] = P[A] + P[B] − P[A ∩ B] ⇒ 0.65 = P[A] + P[B] − 0.15. This means that P[A] + P[B] = 0.65 + 0.15 = 0.80. Also, P[A ∩ B] = P[A|B]P[B]. This then means that

P[B] = P[A ∩ B]/P[A|B] = 0.15/0.50 = 0.30

Thus, P[A] = 0.80 − 0.30 = 0.50. Since P[A ∩ B] = P[B|A]P[A], we have that

P[B|A] = P[A ∩ B]/P[A] = 0.15/0.50 = 0.30

EXAMPLE 1.10
A student went to the post office to send a priority mail to his parents. He gave the postal lady a bill
he believed was $20. However, the postal lady gave him change based on her belief that she
received a $10 bill from the student. The student started to dispute the change. Both the student
and the postal lady are honest but may make mistakes. If the postal lady’s drawer contains thirty
$20 bills and twenty $10 bills, and the postal lady correctly identifies bills 90% of the time, what is
the probability that the student’s claim is valid?

Solution:
Let A denote the event that the student gave a $10 bill and B the event that the student gave a $20 bill. Let V denote the event that the student's claim is valid. Finally, let L denote the event that the postal lady said that the student gave her a $10 bill. Since there are 30 $20 bills and 20 $10 bills in the drawer, the probability that the money the student gave the postal lady was a $20 bill is 30/(20 + 30) = 0.6, and the probability that it was a $10 bill is 1 − 0.6 = 0.4. Thus,

P[L] = P[L|A]P[A] + P[L|B]P[B] = (0.90)(0.4) + (0.10)(0.6) = 0.42

Therefore, the probability that the student's claim is valid is the probability that he gave a $20 bill, given that the postal lady said that the student gave her a $10 bill. Using Bayes' formula we obtain

P[V|L] = P[V ∩ L]/P[L] = P[L|V]P[V]/P[L] = (0.10)(0.60)/0.42 = 1/7 = 0.1429

EXAMPLE 1.11
An aircraft maintenance company bought an equipment for detecting structural defects in air-
crafts. Tests indicate that 95% of the time the equipment detects defects when they actually exist,
and 1% of the time it gives a false alarm (that is, it indicates the presence of a structural defect
when in fact there is none). If 2% of the aircrafts actually have structural defects, what is the

probability that an aircraft actually has a structural defect given that the equipment indicates that
it has a structural defect?

Solution:
Let D denote the event that an aircraft has a structural defect and B the event that the test indicates that there is a structural defect. Then we are required to find P[D|B]. Using Bayes' formula we obtain

P[D|B] = P[D ∩ B]/P[B] = P[B|D]P[D] / {P[B|D]P[D] + P[B|D̄]P[D̄]}
       = (0.95)(0.02) / {(0.95)(0.02) + (0.01)(0.98)}
       = 0.660

Thus, only 66% of the aircrafts that the equipment diagnoses as having structural defects actually have structural defects.

1.7.2 Tree Diagram


Conditional probabilities are used to model experiments that take place in
stages. The outcomes of such experiments are conveniently represented by
a tree diagram. A tree is a connected graph that contains no circuit (or loop).
Every two nodes in the tree have a unique path connecting them. Line seg-
ments called branches or edges interconnect the nodes. Each branch may split
into other branches or it may terminate. When used to model an experiment,
the nodes of the tree represent events of the experiment. The number of
branches that emanate from a node represents the number of events that
can occur, given that the event represented by that node occurs. The node that
has no predecessor is called the root of the tree, and any node that has no suc-
cessor or child is called a leaf of the tree. The events of interest are usually
defined at the leaves by tracing the outcomes of the experiment from the root
to each leaf.
The conditional probabilities appear on the branches leading from the node
representing an event to the nodes representing the next events of the experi-
ment. A path through the tree corresponds to a possible outcome of the exper-
iment. Thus, the product of all the branch probabilities from the root of the tree
to any node is equal to the probability of the event represented by that node.
Consider an experiment that consists of three tosses of a coin. Let p denote the probability of heads in a toss; then 1 − p is the probability of tails in a toss. Figure 1.7 is the tree diagram for the experiment.

[Figure 1.7: Tree Diagram for Three Tosses of a Coin. From the root, each toss branches to H with probability p and to T with probability 1 − p, giving the eight outcomes HHH, HHT, HTH, HTT, THH, THT, TTH, TTT.]

Let A be the event "the first toss came up heads" and let B be the event "the second toss came up tails." Thus, A = {HHH, HHT, HTH, HTT} and B = {HTH, HTT, TTH, TTT}. From Figure 1.7, we have that

P[A] = P[HHH] + P[HHT] + P[HTH] + P[HTT]
     = p³ + 2p²(1 − p) + p(1 − p)²
     = p
P[B] = P[HTH] + P[HTT] + P[TTH] + P[TTT]
     = p²(1 − p) + 2p(1 − p)² + (1 − p)³
     = 1 − p

Since A ∩ B = {HTH, HTT}, we have that

P[A ∩ B] = P[HTH] + P[HTT] = p²(1 − p) + p(1 − p)² = p(1 − p){p + 1 − p} = p(1 − p)

Now, A ∪ B = {HHH, HHT, HTH, HTT, TTH, TTT} and we have that

P[A ∪ B] = P[HHH] + P[HHT] + P[HTH] + P[HTT] + P[TTH] + P[TTT]
         = p³ + 2p²(1 − p) + 2p(1 − p)² + (1 − p)³
         = p²{p + 2(1 − p)} + (1 − p)²{2p + 1 − p} = 1 − p + p²
         = 1 − p(1 − p)

Note that we can obtain the same result as follows:

P[A ∪ B] = P[A] + P[B] − P[A ∩ B] = p + 1 − p − p(1 − p)
         = 1 − p(1 − p)
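
The tree computations can be verified by enumerating all eight leaves for a numerical value of p. The sketch below (illustrative; p = 0.3 is an arbitrary choice) multiplies branch probabilities along each path, so the printed values match p, 1 − p, p(1 − p), and 1 − p(1 − p) up to floating-point rounding.

```python
from itertools import product

p = 0.3  # probability of heads; any value in [0, 1] works

def path_prob(outcome):
    # Product of the branch probabilities from the root to a leaf
    result = 1.0
    for toss in outcome:
        result *= p if toss == "H" else 1 - p
    return result

outcomes = {"".join(s) for s in product("HT", repeat=3)}
A = {s for s in outcomes if s[0] == "H"}  # first toss heads
B = {s for s in outcomes if s[1] == "T"}  # second toss tails

print(sum(path_prob(s) for s in A))       # p          = 0.3
print(sum(path_prob(s) for s in B))       # 1 - p      = 0.7
print(sum(path_prob(s) for s in A & B))   # p(1 - p)   = 0.21
print(sum(path_prob(s) for s in A | B))   # 1 - p(1-p) = 0.79
```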

EXAMPLE 1.12
A university has twice as many undergraduate students as graduate students. 25% of the graduate
students live on campus and 10% of the undergraduate students live on campus.

a. If a student is chosen at random from the student population, what is the probability that
the student is an undergraduate student living on campus?
b. If a student living on campus is chosen at random, what is the probability that the student
is a graduate student?

Solution:
We use the tree diagram to solve the problem. Since there are twice as many undergraduate stu-
dents as there are graduate students, the proportion of undergraduate students in the population
is 2/3, and the proportion of graduate students is 1/3. These as well as the other data are shown as
the labels on the branches of the tree in Figure 1.8. In the figure G denotes graduate student, U
denotes undergraduate student, ON denotes living on campus, and OFF denotes living off campus.

a. From the figure we see that the probability that a randomly selected student is an undergraduate student living on campus is 0.067. We can also solve the problem directly as follows. We are required to find the probability of choosing an undergraduate student who lives on campus, which is P[U → ON], the probability of first going on the U branch and then to the ON branch from there. That is,

P[U → ON] = (2/3)(0.10) = 0.067

[Figure 1.8: Tree diagram for Example 1.12. The root splits into U (probability 2/3) and G (probability 1/3); U splits into ON (0.10, joint probability 0.067) and OFF (0.90, joint probability 0.060), while G splits into ON (0.25, joint probability 0.083) and OFF (0.75, joint probability 0.250).]

b. From the tree, the probability that a student lives on campus is P[U → ON] + P[G → ON] = 0.067 + 0.083 = 0.15. Thus, the probability that a randomly selected student living on campus is a graduate student is P[G → ON]/{P[U → ON] + P[G → ON]} = 0.083/0.15 = 0.55. Note that we can also use the Bayes' theorem to solve the problem as follows:

P[G|ON] = P[ON|G]P[G] / {P[ON|U]P[U] + P[ON|G]P[G]}
        = (0.25)(1/3) / {(0.25)(1/3) + (0.10)(2/3)}
        = 5/9 = 0.55

EXAMPLE 1.13
A multiple-choice exam consists of 4 choices per question. On 75% of the questions, Pat thinks
she knows the answer; and on the other 25% of the questions, she just guesses at random. Unfor-
tunately even when she thinks she knows the answer, Pat is right only 80% of the time.

a. What is the probability that her answer to an arbitrary question is correct?


b. Given that her answer to a question is correct, what is the probability that it was a lucky
guess? (This means that it is among the questions whose answers she guessed at
random.)

Solution:
We can use the tree diagram as follows. There are two branches at the root labeled K for the event "Pat thinks she knows," and K̄ for the event "Pat does not know." Under event K, she is correct (C) with probability 0.80 and not correct (C̄) with probability 0.20. Under event K̄, she is correct with probability 0.25 because she is equally likely to choose any of the 4 answers; therefore, she is not correct with probability 0.75. The tree diagram is shown in Figure 1.9.

[Figure 1.9: Tree diagram for Example 1.13. The root splits into K (0.75) and K̄ (0.25); under K the answer is correct with probability 0.80 and wrong with probability 0.20, while under K̄ it is correct with probability 0.25 and wrong with probability 0.75.]

a. The probability that Pat's answer to an arbitrary question is correct is given by

P[Correct Answer] = P[K → C] + P[K̄ → C] = (0.75)(0.8) + (0.25)(0.25)
                  = 0.6625

This can also be obtained by the direct method as follows:

P[Correct Answer] = P[Correct Answer|K]P[K] + P[Correct Answer|K̄]P[K̄]
                  = (0.80)(0.75) + (0.25)(0.25) = 0.6625

b. Given that she gets a question correct, the probability that it was a lucky guess is given by

P[Lucky Guess|Correct Answer] = P[K̄ → C]/P[Correct Answer] = (0.25)(0.25)/0.6625
                              = 0.0943

We can also use the direct method as follows:

P[Lucky Guess|Correct Answer] = P[Correct Answer|K̄]P[K̄] / P[Correct Answer]
                              = (0.25)(0.25)/0.6625
                              = 0.0943

1.8 INDEPENDENT EVENTS


Two events A and B are defined to be independent if the knowledge that one has
occurred does not change or affect the probability that the other will occur. In
particular, if events A and B are independent, the conditional probability of
event A, given event B, P[A|B], is equal to the probability of event A. That is,
events A and B are independent if
P[A|B] = P[A]   (1.18)

Since by definition P[A ∩ B] = P[A|B]P[B], an alternative definition of independence of events is that events A and B are independent if

P[A ∩ B] = P[A]P[B]   (1.19)

The definition of independence can be extended to multiple events. The n events A1, A2, ..., An are said to be independent if the following conditions are true:

P[Ai ∩ Aj] = P[Ai]P[Aj]
P[Ai ∩ Aj ∩ Ak] = P[Ai]P[Aj]P[Ak]
...
P[A1 ∩ A2 ∩ ... ∩ An] = P[A1]P[A2]...P[An]

This is true for all 1 ≤ i < j < k < ... ≤ n. That is, these events are pairwise independent, independent in triplets, and so on.

EXAMPLE 1.14
A red die and a blue die are rolled together. What is the probability that we obtain 4 on the red die
and 2 on the blue die?

Solution:
Let R denote the event "4 on the red die" and let B denote the event "2 on the blue die." We are, therefore, required to find P[R ∩ B]. Since the outcome of one die does not affect the outcome of the other die, the events R and B are independent. Thus, since P[R] = 1/6 and P[B] = 1/6, we have that P[R ∩ B] = P[R]P[B] = 1/36.

EXAMPLE 1.15
Two coins are tossed. Let A denote the event “at most one head on the two tosses” and let B denote
the event “one head and one tail in both tosses.” Are A and B independent events?

Solution:
The sample space of the experiment is Ω = {HH, HT, TH, TT}. Now, the two events are defined as follows: A = {HT, TH, TT} and B = {HT, TH}. Also, A ∩ B = {HT, TH}. Thus,

P[A] = 3/4
P[B] = 2/4 = 1/2
P[A ∩ B] = 2/4 = 1/2
P[A]P[B] = (3/4)(1/2) = 3/8

Since P[A ∩ B] ≠ P[A]P[B], we conclude that events A and B are not independent. Note that B is a subset of A, which confirms that they cannot be independent.
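
Checking independence over an equally likely sample space amounts to comparing P[A ∩ B] with P[A]P[B]. The sketch below (illustrative) applies the test to the events of Example 1.15.

```python
from fractions import Fraction

omega = ["HH", "HT", "TH", "TT"]  # two tosses of a fair coin
A = {"HT", "TH", "TT"}            # at most one head
B = {"HT", "TH"}                  # one head and one tail

def p(event):
    return Fraction(len(event), len(omega))

print(p(A & B))                 # 1/2
print(p(A) * p(B))              # 3/8
print(p(A & B) == p(A) * p(B))  # False: A and B are not independent
```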

Proposition 1.2
If A and B are independent events, then so are events A and B̄, events Ā and B, and events Ā and B̄.

Proof
Event A can be written as follows: A = (A ∩ B) ∪ (A ∩ B̄). Since the events (A ∩ B) and (A ∩ B̄) are mutually exclusive, we may write

P[A] = P[A ∩ B] + P[A ∩ B̄]
     = P[A]P[B] + P[A ∩ B̄]

where the second line follows from the independence of A and B. Thus, P[A ∩ B̄] = P[A] − P[A]P[B] = P[A](1 − P[B]) = P[A]P[B̄], which shows that A and B̄ are independent. The remaining two results follow by applying the same argument with the roles of A and B interchanged.