
Statistics for Management and Economics, Sixth Edition

Formulas

Numerical Descriptive Techniques

Population mean

$$\mu = \frac{\sum_{i=1}^{N} x_i}{N}$$

Sample mean

$$\bar{x} = \frac{\sum_{i=1}^{n} x_i}{n}$$

Range

Largest observation - Smallest observation

Population variance

$$\sigma^2 = \frac{\sum_{i=1}^{N} (x_i - \mu)^2}{N}$$

Sample variance

$$s^2 = \frac{\sum_{i=1}^{n} (x_i - \bar{x})^2}{n-1}$$

Population standard deviation

$$\sigma = \sqrt{\sigma^2}$$

Sample standard deviation

$$s = \sqrt{s^2}$$

Population covariance
$$\text{COV}(X, Y) = \frac{\sum_{i=1}^{N} (x_i - \mu_x)(y_i - \mu_y)}{N}$$

Sample covariance

$$\text{cov}(x, y) = \frac{\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})}{n-1}$$

Population coefficient of correlation

$$\rho = \frac{\text{COV}(X, Y)}{\sigma_x \sigma_y}$$

Sample coefficient of correlation

$$r = \frac{\text{cov}(x, y)}{s_x s_y}$$

Least Squares: Slope coefficient

$$b_1 = \frac{\text{cov}(x, y)}{s_x^2}$$

Least Squares: y-Intercept

$$b_0 = \bar{y} - b_1 \bar{x}$$
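
These descriptive formulas translate directly into code. Below is a minimal Python sketch of the sample versions (the data values and variable names are illustrative, not from the text):

```python
# Sample versions of the descriptive formulas above (illustrative data).
xs = [2.0, 4.0, 6.0, 8.0]
ys = [1.0, 3.0, 5.0, 9.0]
n = len(xs)

x_bar = sum(xs) / n                                  # sample mean
y_bar = sum(ys) / n
s2_x = sum((x - x_bar) ** 2 for x in xs) / (n - 1)   # sample variance
s2_y = sum((y - y_bar) ** 2 for y in ys) / (n - 1)
cov_xy = sum((x - x_bar) * (y - y_bar)
             for x, y in zip(xs, ys)) / (n - 1)      # sample covariance
r = cov_xy / (s2_x ** 0.5 * s2_y ** 0.5)             # sample correlation
b1 = cov_xy / s2_x                                   # least squares slope
b0 = y_bar - b1 * x_bar                              # least squares intercept

print(x_bar, s2_x, cov_xy, r, b1, b0)
```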

Probability

Conditional probability

P(A|B) = P(A and B)/P(B)

Complement rule

$$P(A^C) = 1 - P(A)$$

Multiplication rule

P(A and B) = P(A|B)P(B)

Addition rule

P(A or B) = P(A) + P(B) - P(A and B)
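
As a quick numerical check of the four rules, here is a short Python sketch; the values of P(A), P(B), and P(A and B) are made-up illustrative probabilities:

```python
# Illustrative probabilities; any consistent values work.
p_a, p_b, p_a_and_b = 0.30, 0.50, 0.10

p_a_given_b = p_a_and_b / p_b            # conditional probability
p_a_comp = 1 - p_a                       # complement rule
p_a_and_b_check = p_a_given_b * p_b      # multiplication rule recovers P(A and B)
p_a_or_b = p_a + p_b - p_a_and_b         # addition rule

print(p_a_given_b, p_a_comp, p_a_and_b_check, p_a_or_b)  # 0.2 0.7 0.1 0.7
```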


Random Variables and Discrete Probability Distributions

Expected value (mean)

$$E(X) = \mu = \sum_{\text{all } x} x\,p(x)$$

Variance

$$V(X) = \sigma^2 = \sum_{\text{all } x} (x - \mu)^2 p(x)$$

Standard deviation

$$\sigma = \sqrt{\sigma^2}$$

Covariance

$$\text{COV}(X, Y) = \sum (x - \mu_x)(y - \mu_y)\,p(x, y)$$

Coefficient of Correlation

$$\rho = \frac{\text{COV}(X, Y)}{\sigma_x \sigma_y}$$

Laws of expected value

1. E(c) = c

2. E(X + c) = E(X) + c

3. E(cX) = cE(X)

Laws of variance

1. V(c) = 0

2. V(X + c) = V(X)

3. V(cX) = c²V(X)

Laws of expected value and variance of the sum of two variables

1. E(X + Y) = E(X) + E(Y)

2. V(X + Y) = V(X) + V(Y) + 2COV(X, Y)


Laws of expected value and variance for the sum of more than two independent variables

1. $$E\left(\sum_{i=1}^{k} X_i\right) = \sum_{i=1}^{k} E(X_i)$$

2. $$V\left(\sum_{i=1}^{k} X_i\right) = \sum_{i=1}^{k} V(X_i)$$

Mean and variance of a portfolio of two stocks

$$E(R_p) = w_1 E(R_1) + w_2 E(R_2)$$

$$V(R_p) = w_1^2 V(R_1) + w_2^2 V(R_2) + 2w_1 w_2 \text{COV}(R_1, R_2) = w_1^2 \sigma_1^2 + w_2^2 \sigma_2^2 + 2w_1 w_2 \rho \sigma_1 \sigma_2$$

Mean and variance of a portfolio of k stocks

$$E(R_p) = \sum_{i=1}^{k} w_i E(R_i)$$

$$V(R_p) = \sum_{i=1}^{k} w_i^2 \sigma_i^2 + 2\sum_{i=1}^{k} \sum_{j=i+1}^{k} w_i w_j \text{COV}(R_i, R_j)$$
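
A minimal Python sketch of the two-stock case (the weights, expected returns, standard deviations, and correlation are illustrative values, not from the text):

```python
from math import sqrt

# Illustrative two-stock portfolio inputs.
w1, w2 = 0.60, 0.40          # portfolio weights (sum to 1)
mu1, mu2 = 0.08, 0.12        # expected returns E(R1), E(R2)
sd1, sd2 = 0.10, 0.20        # standard deviations of the returns
rho = 0.25                   # coefficient of correlation

exp_rp = w1 * mu1 + w2 * mu2                  # E(Rp)
var_rp = (w1**2 * sd1**2 + w2**2 * sd2**2
          + 2 * w1 * w2 * rho * sd1 * sd2)    # V(Rp)
print(exp_rp, var_rp, sqrt(var_rp))
```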

Binomial probability

$$P(X = x) = \frac{n!}{x!(n-x)!}\, p^x (1-p)^{n-x}$$

$$\mu = np \qquad \sigma^2 = np(1-p) \qquad \sigma = \sqrt{np(1-p)}$$

Poisson probability

$$P(X = x) = \frac{e^{-\mu} \mu^x}{x!}$$

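Both probability functions are easy to compute directly. A minimal Python sketch using only the standard library (`math.comb` requires Python 3.8+); the parameter values are illustrative:

```python
from math import comb, exp, factorial, sqrt

def binomial_pmf(x, n, p):
    """P(X = x) for a binomial(n, p) random variable."""
    return comb(n, x) * p**x * (1 - p)**(n - x)

def poisson_pmf(x, mu):
    """P(X = x) for a Poisson random variable with mean mu."""
    return exp(-mu) * mu**x / factorial(x)

n, p = 10, 0.3                        # illustrative binomial parameters
print(binomial_pmf(3, n, p))          # about 0.2668
print(n * p, n * p * (1 - p), sqrt(n * p * (1 - p)))  # mean, variance, sd
print(poisson_pmf(2, mu=1.5))         # about 0.2510
```
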
Sampling Distributions

Expected value of the sample mean

$$E(\bar{X}) = \mu_{\bar{x}} = \mu$$

Variance of the sample mean

$$V(\bar{X}) = \sigma_{\bar{x}}^2 = \frac{\sigma^2}{n}$$

Standard error of the sample mean

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}}$$

Standardizing the sample mean

$$Z = \frac{\bar{X} - \mu}{\sigma/\sqrt{n}}$$

Expected value of the sample proportion

$$E(\hat{P}) = \mu_{\hat{p}} = p$$

Variance of the sample proportion

$$V(\hat{P}) = \sigma_{\hat{p}}^2 = \frac{p(1-p)}{n}$$

Standard error of the sample proportion

$$\sigma_{\hat{p}} = \sqrt{\frac{p(1-p)}{n}}$$

Standardizing the sample proportion

$$Z = \frac{\hat{P} - p}{\sqrt{p(1-p)/n}}$$

Expected value of the difference between two means

$$E(\bar{X}_1 - \bar{X}_2) = \mu_{\bar{x}_1 - \bar{x}_2} = \mu_1 - \mu_2$$

Variance of the difference between two means

$$V(\bar{X}_1 - \bar{X}_2) = \sigma_{\bar{x}_1 - \bar{x}_2}^2 = \frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}$$

Standard error of the difference between two means

$$\sigma_{\bar{x}_1 - \bar{x}_2} = \sqrt{\frac{\sigma_1^2}{n_1} + \frac{\sigma_2^2}{n_2}}$$

Standardizing the difference between two sample means

$$Z = \frac{(\bar{X}_1 - \bar{X}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{\sigma_1^2}{n_1} + \dfrac{\sigma_2^2}{n_2}}}$$

Introduction to Estimation

Confidence interval estimator of $\mu$

$$\bar{x} \pm z_{\alpha/2} \frac{\sigma}{\sqrt{n}}$$

Sample size to estimate $\mu$

$$n = \left(\frac{z_{\alpha/2}\,\sigma}{W}\right)^2$$
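
A minimal sketch of both estimation formulas; the summary statistics are illustrative, $z_{\alpha/2} = 1.96$ is hard-coded for a 95% level, and W is the desired error bound from the formula above:

```python
from math import sqrt, ceil

# Illustrative summary statistics; 1.96 is z_{a/2} for 95% confidence.
x_bar, sigma, n = 100.0, 15.0, 36
z = 1.96

half = z * sigma / sqrt(n)            # z_{a/2} * sigma / sqrt(n)
print(x_bar - half, x_bar + half)     # 95% confidence interval for mu

W = 2.0                               # desired error bound
print(ceil((z * sigma / W) ** 2))     # sample size to estimate mu (round up)
```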

Introduction to Hypothesis Testing

Test statistic for $\mu$

$$z = \frac{\bar{x} - \mu}{\sigma/\sqrt{n}}$$

Inference about One Population

Test statistic for $\mu$

$$t = \frac{\bar{x} - \mu}{s/\sqrt{n}}$$

Interval estimator of $\mu$

$$\bar{x} \pm t_{\alpha/2} \frac{s}{\sqrt{n}}$$

Test statistic for $\sigma^2$

$$\chi^2 = \frac{(n-1)s^2}{\sigma^2}$$

Interval estimator of $\sigma^2$

$$\text{LCL} = \frac{(n-1)s^2}{\chi^2_{\alpha/2}} \qquad \text{UCL} = \frac{(n-1)s^2}{\chi^2_{1-\alpha/2}}$$

Test statistic for p

$$z = \frac{\hat{p} - p}{\sqrt{p(1-p)/n}}$$

Interval estimator of p

$$\hat{p} \pm z_{\alpha/2} \sqrt{\hat{p}(1-\hat{p})/n}$$

Sample size to estimate p

$$n = \left(\frac{z_{\alpha/2}\sqrt{\hat{p}(1-\hat{p})}}{W}\right)^2$$
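A sketch that computes the three one-population test statistics above from illustrative data (only the statistics are printed; critical values would come from t, chi-squared, and z tables):

```python
from math import sqrt

data = [12.1, 11.8, 12.5, 12.0, 11.6, 12.3, 12.2, 11.9]  # illustrative sample
n = len(data)
x_bar = sum(data) / n
s2 = sum((x - x_bar) ** 2 for x in data) / (n - 1)
s = sqrt(s2)

mu0 = 12.0                                   # hypothesized mean
t = (x_bar - mu0) / (s / sqrt(n))            # test statistic for mu
chi2 = (n - 1) * s2 / 0.10                   # test statistic for sigma^2 = 0.10

successes, trials, p0 = 55, 100, 0.5         # illustrative proportion data
p_hat = successes / trials
z = (p_hat - p0) / sqrt(p0 * (1 - p0) / trials)  # test statistic for p
print(t, chi2, z)
```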

Inference about Two Populations

Equal-variances t-test of $\mu_1 - \mu_2$

$$t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{s_p^2\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}} \qquad \nu = n_1 + n_2 - 2$$

Equal-variances interval estimator of $\mu_1 - \mu_2$

$$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \sqrt{s_p^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)} \qquad \nu = n_1 + n_2 - 2$$

Unequal-variances t-test of $\mu_1 - \mu_2$

$$t = \frac{(\bar{x}_1 - \bar{x}_2) - (\mu_1 - \mu_2)}{\sqrt{\dfrac{s_1^2}{n_1} + \dfrac{s_2^2}{n_2}}} \qquad \nu = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}$$

Unequal-variances interval estimator of $\mu_1 - \mu_2$

$$(\bar{x}_1 - \bar{x}_2) \pm t_{\alpha/2} \sqrt{\frac{s_1^2}{n_1} + \frac{s_2^2}{n_2}} \qquad \nu = \frac{(s_1^2/n_1 + s_2^2/n_2)^2}{\dfrac{(s_1^2/n_1)^2}{n_1 - 1} + \dfrac{(s_2^2/n_2)^2}{n_2 - 1}}$$

t-Test of $\mu_D$

$$t = \frac{\bar{x}_D - \mu_D}{s_D/\sqrt{n_D}} \qquad \nu = n_D - 1$$

t-Estimator of $\mu_D$

$$\bar{x}_D \pm t_{\alpha/2} \frac{s_D}{\sqrt{n_D}} \qquad \nu = n_D - 1$$

F-test of $\sigma_1^2 / \sigma_2^2$

$$F = \frac{s_1^2}{s_2^2} \qquad \nu_1 = n_1 - 1 \text{ and } \nu_2 = n_2 - 1$$

F-Estimator of $\sigma_1^2 / \sigma_2^2$

$$\text{LCL} = \left(\frac{s_1^2}{s_2^2}\right) \frac{1}{F_{\alpha/2,\,\nu_1,\,\nu_2}} \qquad \text{UCL} = \left(\frac{s_1^2}{s_2^2}\right) F_{\alpha/2,\,\nu_2,\,\nu_1}$$

z-Test and estimator of $p_1 - p_2$

Case 1: $$z = \frac{\hat{p}_1 - \hat{p}_2}{\sqrt{\hat{p}(1-\hat{p})\left(\dfrac{1}{n_1} + \dfrac{1}{n_2}\right)}}$$

Case 2: $$z = \frac{(\hat{p}_1 - \hat{p}_2) - (p_1 - p_2)}{\sqrt{\dfrac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \dfrac{\hat{p}_2(1-\hat{p}_2)}{n_2}}}$$

z-Interval estimator of $p_1 - p_2$

$$(\hat{p}_1 - \hat{p}_2) \pm z_{\alpha/2} \sqrt{\frac{\hat{p}_1(1-\hat{p}_1)}{n_1} + \frac{\hat{p}_2(1-\hat{p}_2)}{n_2}}$$
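
The unequal-variances case is the fiddliest to compute by hand, so here is a minimal Python sketch of its t statistic and degrees of freedom (the two samples are illustrative):

```python
from math import sqrt

def welch_t(sample1, sample2):
    """Unequal-variances t statistic and its degrees of freedom."""
    n1, n2 = len(sample1), len(sample2)
    m1 = sum(sample1) / n1
    m2 = sum(sample2) / n2
    v1 = sum((x - m1) ** 2 for x in sample1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in sample2) / (n2 - 1)
    se2 = v1 / n1 + v2 / n2
    t = (m1 - m2) / sqrt(se2)                       # tests mu1 - mu2 = 0
    nu = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1)
                     + (v2 / n2) ** 2 / (n2 - 1))   # df; round down in practice
    return t, nu

print(welch_t([5.1, 4.8, 5.5, 5.0], [4.2, 4.6, 4.1, 4.4, 4.3]))
```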

Analysis of Variance

One-Way Analysis of Variance

$$\text{SST} = \sum_{j=1}^{k} n_j (\bar{x}_j - \bar{\bar{x}})^2$$

$$\text{SSE} = \sum_{j=1}^{k} \sum_{i=1}^{n_j} (x_{ij} - \bar{x}_j)^2$$

$$\text{MST} = \frac{\text{SST}}{k-1} \qquad \text{MSE} = \frac{\text{SSE}}{n-k} \qquad F = \frac{\text{MST}}{\text{MSE}}$$

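A minimal Python sketch of the one-way calculations above (the group data are illustrative):

```python
def one_way_anova(groups):
    """Return SST, SSE, MST, MSE, and F for a list of samples."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / n          # grand mean
    means = [sum(g) / len(g) for g in groups]        # treatment means
    sst = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    sse = sum((x - m) ** 2 for g, m in zip(groups, means) for x in g)
    mst = sst / (k - 1)
    mse = sse / (n - k)
    return sst, sse, mst, mse, mst / mse

groups = [[20, 22, 19], [25, 27, 24], [30, 29, 31]]  # illustrative samples
print(one_way_anova(groups))
```
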
Two-Way Analysis of Variance (randomized block design of experiment)

$$\text{SS(Total)} = \sum_{j=1}^{k} \sum_{i=1}^{b} (x_{ij} - \bar{\bar{x}})^2$$

$$\text{SST} = \sum_{j=1}^{k} b(\bar{x}[T]_j - \bar{\bar{x}})^2$$

$$\text{SSB} = \sum_{i=1}^{b} k(\bar{x}[B]_i - \bar{\bar{x}})^2$$

$$\text{SSE} = \sum_{j=1}^{k} \sum_{i=1}^{b} (x_{ij} - \bar{x}[T]_j - \bar{x}[B]_i + \bar{\bar{x}})^2$$

$$\text{MST} = \frac{\text{SST}}{k-1} \qquad \text{MSB} = \frac{\text{SSB}}{b-1} \qquad \text{MSE} = \frac{\text{SSE}}{n-k-b+1}$$

$$F = \frac{\text{MST}}{\text{MSE}} \ \text{(treatments)} \qquad F = \frac{\text{MSB}}{\text{MSE}} \ \text{(blocks)}$$

Two-factor experiment

$$\text{SS(Total)} = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{r} (x_{ijk} - \bar{\bar{x}})^2$$

$$\text{SS(A)} = rb \sum_{i=1}^{a} (\bar{x}[A]_i - \bar{\bar{x}})^2$$

$$\text{SS(B)} = ra \sum_{j=1}^{b} (\bar{x}[B]_j - \bar{\bar{x}})^2$$

$$\text{SS(AB)} = r \sum_{i=1}^{a} \sum_{j=1}^{b} (\bar{x}[AB]_{ij} - \bar{x}[A]_i - \bar{x}[B]_j + \bar{\bar{x}})^2$$

$$\text{SSE} = \sum_{i=1}^{a} \sum_{j=1}^{b} \sum_{k=1}^{r} (x_{ijk} - \bar{x}[AB]_{ij})^2$$

$$F = \frac{\text{MS(A)}}{\text{MSE}} \qquad F = \frac{\text{MS(B)}}{\text{MSE}} \qquad F = \frac{\text{MS(AB)}}{\text{MSE}}$$

Least Significant Difference Comparison Method

 1 1 
LSD = t a / 2 MSE  +
 ni n j 

Tukey’s multiple comparison method

$$\omega = q_\alpha(k, \nu) \sqrt{\frac{\text{MSE}}{n_g}}$$

Chi-Squared Tests

Test statistic for all procedures

$$\chi^2 = \sum_{i=1}^{k} \frac{(f_i - e_i)^2}{e_i}$$
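
A minimal sketch of the statistic for observed counts $f_i$ against expected counts $e_i$ (the counts and the uniform null are illustrative):

```python
def chi_squared(observed, expected):
    """Chi-squared test statistic: sum of (f_i - e_i)^2 / e_i."""
    return sum((f - e) ** 2 / e for f, e in zip(observed, expected))

observed = [18, 22, 30, 30]        # illustrative cell counts
expected = [25, 25, 25, 25]        # e.g., a uniform null hypothesis
print(chi_squared(observed, expected))  # compare to a chi-squared table
```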

Nonparametric Statistical Techniques

Wilcoxon rank sum test statistic

$$T = T_1$$

$$E(T) = \frac{n_1(n_1 + n_2 + 1)}{2}$$

$$\sigma_T = \sqrt{\frac{n_1 n_2 (n_1 + n_2 + 1)}{12}}$$

$$z = \frac{T - E(T)}{\sigma_T}$$

Sign test statistic

x = number of positive differences

$$z = \frac{x - 0.5n}{0.5\sqrt{n}}$$

Wilcoxon signed rank sum test statistic

$$T = T^+$$

$$E(T) = \frac{n(n+1)}{4}$$

$$\sigma_T = \sqrt{\frac{n(n+1)(2n+1)}{24}}$$

$$z = \frac{T - E(T)}{\sigma_T}$$

Kruskal-Wallis Test

 12 k T 2


j
H = − 3( n + 1)
 n (n + 1) j = 1 n j 

Friedman Test

 12 k

Fr =  ∑ T j2  − 3b (k + 1)
 b( k )( k + 1) j =1 
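
A minimal sketch of the Wilcoxon rank sum statistic and its normal approximation, assuming no ties in the pooled data (the samples are illustrative):

```python
from math import sqrt

def rank_sum_z(sample1, sample2):
    """Wilcoxon rank sum T = T1 and its z approximation (assumes no ties)."""
    pooled = sorted(sample1 + sample2)
    rank = {x: i + 1 for i, x in enumerate(pooled)}   # rank 1 = smallest
    t1 = sum(rank[x] for x in sample1)                # T = T1
    n1, n2 = len(sample1), len(sample2)
    e_t = n1 * (n1 + n2 + 1) / 2
    sigma_t = sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    return t1, (t1 - e_t) / sigma_t

# Illustrative samples (n1, n2 >= 10 for the normal approximation).
a = [7, 9, 12, 15, 18, 20, 21, 22, 25, 28]
b = [5, 6, 8, 10, 11, 13, 14, 16, 17, 19]
print(rank_sum_z(a, b))
```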

Simple Linear Regression

Sample slope

$$b_1 = \frac{\text{cov}(x, y)}{s_x^2}$$

Sample y-intercept

$$b_0 = \bar{y} - b_1 \bar{x}$$

Sum of squares for error

$$\text{SSE} = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2$$

Standard error of estimate

$$s_e = \sqrt{\frac{\text{SSE}}{n-2}}$$

Test statistic for the slope

$$t = \frac{b_1 - \beta_1}{s_{b_1}}$$

Standard error of $b_1$

$$s_{b_1} = \frac{s_e}{\sqrt{(n-1)s_x^2}}$$

Coefficient of determination

$$R^2 = \frac{[\text{cov}(x, y)]^2}{s_x^2 s_y^2} = 1 - \frac{\text{SSE}}{\sum (y_i - \bar{y})^2}$$

Prediction interval

$$\hat{y} \pm t_{\alpha/2,\,n-2}\, s_e \sqrt{1 + \frac{1}{n} + \frac{(x_g - \bar{x})^2}{(n-1)s_x^2}}$$

Confidence interval estimator of the expected value of y

$$\hat{y} \pm t_{\alpha/2,\,n-2}\, s_e \sqrt{\frac{1}{n} + \frac{(x_g - \bar{x})^2}{(n-1)s_x^2}}$$

Sample coefficient of correlation

$$r = \frac{\text{cov}(x, y)}{s_x s_y}$$

Test statistic for testing $\rho = 0$

$$t = r\sqrt{\frac{n-2}{1-r^2}}$$

Sample Spearman rank correlation coefficient

$$r_S = \frac{\text{cov}(a, b)}{s_a s_b}$$

Test statistic for testing $\rho_S = 0$ when n > 30

$$z = \frac{r_S - 0}{1/\sqrt{n-1}} = r_S \sqrt{n-1}$$

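A minimal Python sketch that runs the whole simple-regression block above on illustrative data (statistics only; critical values would come from a t table):

```python
from math import sqrt

xs = [1, 2, 3, 4, 5, 6]
ys = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]       # illustrative data
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
s2_x = sum((x - x_bar) ** 2 for x in xs) / (n - 1)
s2_y = sum((y - y_bar) ** 2 for y in ys) / (n - 1)
cov_xy = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) / (n - 1)

b1 = cov_xy / s2_x                          # sample slope
b0 = y_bar - b1 * x_bar                     # sample y-intercept
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
s_e = sqrt(sse / (n - 2))                   # standard error of estimate
s_b1 = s_e / sqrt((n - 1) * s2_x)           # standard error of b1
t = b1 / s_b1                               # test statistic for beta1 = 0
r2 = cov_xy ** 2 / (s2_x * s2_y)            # coefficient of determination
print(b1, b0, s_e, t, r2)
```
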
Multiple Regression

Standard Error of Estimate

$$s_e = \sqrt{\frac{\text{SSE}}{n-k-1}}$$

Test statistic for $\beta_i$

$$t = \frac{b_i - \beta_i}{s_{b_i}}$$

Coefficient of Determination

$$R^2 = 1 - \frac{\text{SSE}}{\sum (y_i - \bar{y})^2}$$

Adjusted Coefficient of Determination

$$\text{Adjusted } R^2 = 1 - \frac{\text{SSE}/(n-k-1)}{\sum (y_i - \bar{y})^2/(n-1)}$$

Mean Square for Error

$$\text{MSE} = \frac{\text{SSE}}{n-k-1}$$

Mean Square for Regression

$$\text{MSR} = \frac{\text{SSR}}{k}$$

F-statistic

F = MSR/MSE

Durbin-Watson statistic

$$d = \frac{\sum_{i=2}^{n} (e_i - e_{i-1})^2}{\sum_{i=1}^{n} e_i^2}$$
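
A minimal sketch of the statistic over a residual series (the residuals are illustrative):

```python
def durbin_watson(e):
    """d = sum_{i=2}^{n} (e_i - e_{i-1})^2 / sum_{i=1}^{n} e_i^2."""
    num = sum((e[i] - e[i - 1]) ** 2 for i in range(1, len(e)))
    return num / sum(x ** 2 for x in e)

residuals = [0.5, -0.3, 0.8, -0.6, 0.2, -0.4, 0.1]  # illustrative residuals
print(durbin_watson(residuals))  # near 2 suggests no first-order autocorrelation
```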

Time Series Analysis and Forecasting

Exponential smoothing

$$S_t = w y_t + (1-w) S_{t-1}$$
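
A minimal sketch of the recursion, seeded with $S_1 = y_1$ as is conventional; the series and the smoothing constant w are illustrative:

```python
def exponential_smoothing(y, w):
    """S_t = w*y_t + (1 - w)*S_{t-1}, seeded with S_1 = y_1."""
    s = [y[0]]
    for t in range(1, len(y)):
        s.append(w * y[t] + (1 - w) * s[-1])
    return s

series = [38, 43, 42, 45, 46, 48, 50]   # illustrative time series
print(exponential_smoothing(series, w=0.3))
```
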
Statistical Process Control

Centerline and control limits for the $\bar{x}$ chart using S

Centerline = $\bar{\bar{x}}$

Lower control limit = $\bar{\bar{x}} - 3\dfrac{S}{\sqrt{n}}$

Upper control limit = $\bar{\bar{x}} + 3\dfrac{S}{\sqrt{n}}$

Centerline and control limits for the p chart

Centerline = $\bar{p}$

Lower control limit = $\bar{p} - 3\sqrt{\dfrac{\bar{p}(1-\bar{p})}{n}}$

Upper control limit = $\bar{p} + 3\sqrt{\dfrac{\bar{p}(1-\bar{p})}{n}}$
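
A minimal sketch of the p chart limits (the $\bar{x}$ chart is computed analogously); the sample proportions and sample size are illustrative, and the lower limit is floored at zero as is usual in practice:

```python
from math import sqrt

p_bars = [0.04, 0.06, 0.05, 0.07, 0.05, 0.03]   # illustrative sample proportions
n = 200                                          # observations per sample

p_bar = sum(p_bars) / len(p_bars)                # centerline
margin = 3 * sqrt(p_bar * (1 - p_bar) / n)
lcl = max(0.0, p_bar - margin)                   # lower control limit
ucl = p_bar + margin                             # upper control limit
print(lcl, p_bar, ucl)
```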

Decision Analysis

Expected Value of Perfect Information

EVPI = EPPI - EMV*

Expected Value of Sample Information

EVSI = EMV' - EMV*
