Materials SB
CHAPTER 1
❖ Descriptive statistics and inferential statistics
❖ Critical thinking
+ Conclusions from small samples
+ Conclusions from non-random samples
+ Conclusions from rare events
+ Poor survey methods
+ Post-hoc fallacy
CHAPTER 2
❖ Level of measurement
+ Nominal measurement
+ Ordinal measurement
+ Interval measurement
+ Ratio measurement
❖ Sampling methods
+ Random sampling methods: simple random sample, systematic sample, stratified sample, cluster sample
+ Non-random sampling methods: judgment sample, convenience sample, focus group
CHAPTER 3
❖ Stem and leaf
❖ Dot plot
❖ Sturges’ Rule: k = 1 + 3.3 log₁₀(n)
❖ Bin width
❖ Geometric mean: G = ⁿ√(x₁ x₂ … xₙ)
❖ Sample correlation coefficient: r = Σ(xᵢ − x̄)(yᵢ − ȳ) / [ √Σ(xᵢ − x̄)² · √Σ(yᵢ − ȳ)² ]  (sums over i = 1, …, n)
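A minimal numeric sketch of the Chapter 3 formulas above (Sturges’ Rule, geometric mean, sample correlation). The arrays x and y, and the bin-width rule, are illustrative assumptions, not values or rules taken from the sheet.

```python
# Sketch of the Chapter 3 formulas on made-up data (hypothetical values).
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
y = np.array([1.5, 3.1, 4.2, 6.8, 9.0])
n = len(x)

k = 1 + 3.3 * np.log10(n)                       # Sturges' Rule: suggested number of bins
bin_width = (x.max() - x.min()) / np.ceil(k)    # one common bin-width choice (assumption, not from the sheet)

G = np.prod(x) ** (1 / n)                       # geometric mean: nth root of the product

# Sample correlation coefficient r
r = np.sum((x - x.mean()) * (y - y.mean())) / (
    np.sqrt(np.sum((x - x.mean()) ** 2)) * np.sqrt(np.sum((y - y.mean()) ** 2))
)
print(k, bin_width, G, r)   # r should match np.corrcoef(x, y)[0, 1]
```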
CHAPTER 5: Probability
❖ Odds for A: P(A) / (1 − P(A));  Odds against A: (1 − P(A)) / P(A)
❖ General Law of Addition: P ( A ∪ B ) =P ( A )+ P ( B ) − P ( A ∩ B )
❖ Conditional probability: P(A | B) = P(A ∩ B) / P(B)
❖ Poisson approximation for binomial: λ = nπ;  P(X = x) = λ^x e^(−λ) / x!  (when n ≥ 20 and π ≤ 0.05)
❖ Binomial approximation to the hypergeometric: when n/N < 0.05, use n as the sample size and π = s/N
❖ Geometric: P(X = x) = π(1 − π)^(x−1);  P(X ≤ x) = 1 − (1 − π)^x
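A short sketch of the probability formulas above, under assumed values for the probabilities and parameters; scipy.stats is used only to cross-check the Poisson approximation against the exact binomial.

```python
# Odds, conditional probability, Poisson approximation, geometric distribution.
# All numbers below are made up for illustration.
from scipy import stats

P_A = 0.25
odds_for_A = P_A / (1 - P_A)           # odds for A
odds_against_A = (1 - P_A) / P_A       # odds against A

P_B, P_A_and_B = 0.40, 0.10
P_A_given_B = P_A_and_B / P_B          # conditional probability P(A | B)

# Poisson approximation to the binomial (n >= 20, pi <= 0.05)
n, pi, x = 100, 0.02, 3
lam = n * pi
exact = stats.binom.pmf(x, n, pi)
approx = stats.poisson.pmf(x, lam)

# Geometric distribution with success probability pi
pi_g, x_g = 0.2, 4
p_eq = pi_g * (1 - pi_g) ** (x_g - 1)  # P(X = x)
p_le = 1 - (1 - pi_g) ** x_g           # P(X <= x); matches stats.geom.cdf(x_g, pi_g)
print(odds_for_A, odds_against_A, P_A_given_B, exact, approx, p_eq, p_le)
```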
❖ Standard error of the sample mean: σx̄ = σ / √n
❖ Confidence interval for μ, known σ: x̄ ± z α/2 · σ / √n
❖ Confidence interval for μ, unknown σ: x̄ ± t α/2 · s / √n, with d.f. = n − 1
❖ Standard error of the sample proportion: σp = √( π(1 − π) / n )
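A hedged sketch of the standard-error and confidence-interval formulas above, on a made-up sample; the "known σ" value and the proportion π are assumed purely for illustration.

```python
# z and t confidence intervals for a mean, plus the standard error of a proportion.
import numpy as np
from scipy import stats

sample = np.array([48.2, 51.0, 49.7, 50.3, 52.1, 47.9, 50.8, 49.5])
n = len(sample)
xbar = sample.mean()

# Known sigma: z interval
sigma = 2.0                                # assumed known population sigma
z = stats.norm.ppf(0.975)                  # z_{alpha/2} for 95% confidence
ci_z = (xbar - z * sigma / np.sqrt(n), xbar + z * sigma / np.sqrt(n))

# Unknown sigma: t interval with d.f. = n - 1
s = sample.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)
ci_t = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

# Standard error of a sample proportion (pi assumed for the formula)
pi, m = 0.30, 200
se_p = np.sqrt(pi * (1 - pi) / m)
print(ci_z, ci_t, se_p)
```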
❖ Interpretation of the p-value:
+ p > 0.05: No evidence against H0
+ p < 0.05: Moderate evidence against H0
+ p < 0.01: Strong evidence against H0
+ p < 0.001: Very strong evidence against H0
❖ Chi-square test statistic for one variance: χ²calc = (n − 1)s² / σ²  (s²: sample variance; σ²: population variance)
❖ Paired t test, mean of n differences: d̄ = ( Σ dᵢ ) / n
❖ St. dev. of n differences: sd = √( Σ(dᵢ − d̄)² / (n − 1) )
❖ Test statistic for paired samples: tcalc = (d̄ − μd) / (sd / √n)
❖ Degrees of freedom: d.f. = n − 1
❖ The ith paired difference is: Dᵢ = X₁ᵢ − X₂ᵢ
❖ Confidence interval for μD: D̄ ± t α/2 · sd / √n
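A sketch of the paired-sample formulas above on two hypothetical before/after samples; scipy.stats.ttest_rel is shown only as a cross-check of the hand computation.

```python
# Paired t test: mean difference, its standard deviation, test statistic, and CI.
import numpy as np
from scipy import stats

x1 = np.array([210, 198, 225, 240, 215, 230])   # e.g. before (made up)
x2 = np.array([200, 195, 215, 232, 210, 220])   # e.g. after (made up)
d = x1 - x2                                     # ith paired difference
n = len(d)

dbar = d.mean()                                 # mean of the n differences
sd = d.std(ddof=1)                              # st. dev. of the n differences
t_calc = (dbar - 0) / (sd / np.sqrt(n))         # test statistic, mu_d = 0 under H0

t_crit = stats.t.ppf(0.975, df=n - 1)
ci = (dbar - t_crit * sd / np.sqrt(n), dbar + t_crit * sd / np.sqrt(n))

# Cross-check with SciPy's paired t test
t_scipy, p_value = stats.ttest_rel(x1, x2)
print(t_calc, t_scipy, p_value, ci)
```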
❖ Test statistic for equality of proportions: zcalc = [ (p₁ − p₂) − (π₁ − π₂) ] / √( p(1 − p)(1/n₁ + 1/n₂) ),
where p = (x₁ + x₂) / (n₁ + n₂);  p₁ = x₁ / n₁;  p₂ = x₂ / n₂
❖ Confidence interval for the difference of two proportions, π₁ − π₂:
(p₁ − p₂) ± z α/2 · √( p₁(1 − p₁)/n₁ + p₂(1 − p₂)/n₂ )
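A sketch of the two-proportion test statistic and confidence interval, using made-up counts x₁, x₂ and sample sizes n₁, n₂.

```python
# Two-proportion z test (pooled) and confidence interval (unpooled).
import numpy as np
from scipy import stats

x1, n1 = 48, 200      # successes / sample size, group 1 (hypothetical)
x2, n2 = 30, 180      # successes / sample size, group 2 (hypothetical)

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                       # pooled proportion p

# Test statistic under H0: pi1 - pi2 = 0
z_calc = (p1 - p2) / np.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
p_value = 2 * (1 - stats.norm.cdf(abs(z_calc)))      # two-tailed p-value

# Confidence interval for pi1 - pi2
z = stats.norm.ppf(0.975)
moe = z * np.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
ci = ((p1 - p2) - moe, (p1 - p2) + moe)
print(z_calc, p_value, ci)
```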
❖ Test statistic for two variances: Fcalc = s₁² / s₂², with df₁ = n₁ − 1, df₂ = n₂ − 1
❖ F-test critical values: FR = F(df₁, df₂);  FL = 1 / F(df₂, df₁)
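A sketch of the two-variance F test with assumed sample variances and sizes; the right- and left-tail critical values follow the FR and FL expressions above.

```python
# F test for equality of two variances.
from scipy import stats

s1_sq, n1 = 14.2, 12      # sample variance and size, group 1 (made up)
s2_sq, n2 = 9.6, 15       # sample variance and size, group 2 (made up)

F_calc = s1_sq / s2_sq
df1, df2 = n1 - 1, n2 - 1

# Two-tailed critical values at alpha = 0.05
F_R = stats.f.ppf(0.975, df1, df2)          # right-tail critical value
F_L = 1 / stats.f.ppf(0.975, df2, df1)      # left-tail critical value
print(F_calc, (F_L, F_R))
```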
❖ Tukey’s test: a two-tailed test for equality of pairs of group means, with all pairs compared simultaneously
H0: μj = μk
H1: μj ≠ μk
Tcalc = |ȳj − ȳk| / √( MSE · (1/nj + 1/nk) )
Reject H0 if Tcalc > Tc,n−c, where Tc,n−c is the critical value. (pg. 449)
❖ Hartley’s Test:
H0: σ₁² = σ₂² = … = σc²
H1: Not all the variances are equal
Hcalc = s²max / s²min; reject H0 if Hcalc > Hcritical (or Hc, n/c−1). (pg. 451)
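A sketch of the Tukey and Hartley statistics for three hypothetical groups; MSE is computed directly here, although in practice it would be read from the one-way ANOVA table.

```python
# Tukey statistic for one pair of group means, and Hartley's variance ratio.
import numpy as np

groups = [np.array([12.0, 14.5, 13.2, 15.1]),
          np.array([16.4, 15.8, 17.2, 16.0]),
          np.array([11.9, 12.4, 13.0, 12.2])]
c = len(groups)
n_total = sum(len(g) for g in groups)

# Pooled within-group variation = MSE from a one-way ANOVA
sse = sum(((g - g.mean()) ** 2).sum() for g in groups)
mse = sse / (n_total - c)

# Tukey statistic for groups j and k (here j = 0, k = 1)
j, k = 0, 1
gj, gk = groups[j], groups[k]
T_calc = abs(gj.mean() - gk.mean()) / np.sqrt(mse * (1 / len(gj) + 1 / len(gk)))
# Compare T_calc with the Tukey critical value T_{c, n-c} from the table (pg. 449).

# Hartley statistic: ratio of the largest to the smallest group variance
variances = [g.var(ddof=1) for g in groups]
H_calc = max(variances) / min(variances)
print(T_calc, H_calc)
```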
CHAPTER 12: Simple Regression
Commonly Used Formulas in Simple Regression
❖ Sample correlation coefficient: r = Σ(xᵢ − x̄)(yᵢ − ȳ) / [ √Σ(xᵢ − x̄)² · √Σ(yᵢ − ȳ)² ]
❖ Test statistic for zero correlation: tcalc = r √(n − 2) / √(1 − r²), with d.f. = n − 2 (compare with t α/2)
❖ Slope of fitted regression: b₁ = Σ(xᵢ − x̄)(yᵢ − ȳ) / Σ(xᵢ − x̄)²
❖ Standard error of the slope: sb1 = s / √Σ(xᵢ − x̄)², with d.f. = n − 2
❖ T test for zero slope: tcalc = (b₁ − 0) / sb1, with d.f. = n − 2
❖ Confidence interval for conditional mean of Y: ŷᵢ ± t α/2 · s · √( 1/n + (xᵢ − x̄)² / Σ(xᵢ − x̄)² )
❖ Prediction interval for Y: ŷᵢ ± t α/2 · s · √( 1 + 1/n + (xᵢ − x̄)² / Σ(xᵢ − x̄)² )
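A sketch of the Chapter 12 formulas above on made-up (x, y) data. The intercept b0 = ȳ − b1·x̄ is not listed above but is assumed here so that fitted values ŷ can be computed; the standard error of the estimate s uses the usual √(SSE/(n − 2)).

```python
# Simple regression: slope, standard error of slope, t test for zero slope,
# and confidence / prediction intervals at one x value.
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])
n = len(x)

Sxx = np.sum((x - x.mean()) ** 2)
b1 = np.sum((x - x.mean()) * (y - y.mean())) / Sxx      # slope of fitted regression
b0 = y.mean() - b1 * x.mean()                           # fitted intercept (assumed helper)
y_hat = b0 + b1 * x

s = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))         # standard error of the estimate
s_b1 = s / np.sqrt(Sxx)                                 # standard error of the slope
t_calc = (b1 - 0) / s_b1                                # t test for zero slope, d.f. = n - 2

t_crit = stats.t.ppf(0.975, df=n - 2)
xi = 3.5                                                # an x value of interest (made up)
yi_hat = b0 + b1 * xi
half_ci = t_crit * s * np.sqrt(1 / n + (xi - x.mean()) ** 2 / Sxx)       # conditional mean of Y
half_pi = t_crit * s * np.sqrt(1 + 1 / n + (xi - x.mean()) ** 2 / Sxx)   # individual Y
print(b1, s_b1, t_calc,
      (yi_hat - half_ci, yi_hat + half_ci),
      (yi_hat - half_pi, yi_hat + half_pi))
```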
Excel Output:
Regression Statistics
+ R Square: R² = SSR / SST
+ Standard Error: se = √( SSE / (n − 2) )
ANOVA table (df, SS, MS, F):
+ Regression: df = k, SS = SSR, MS = MSR = SSR / k, F = MSR / MSE
+ Residual: df = n − k − 1, SS = SSE, MS = MSE = SSE / (n − k − 1)
+ Total: df = n − 1, SS = SST
Coefficients table (Coefficient, Standard Error, t Stat):
+ Intercept: coefficient (b0), standard error (sb0), t Stat = b0 / sb0
+ Square feet (slope): coefficient (b1), standard error (sb1), t Stat = (b1 − β1) / sb1
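A short sketch showing how the quantities in the Excel output relate to one another, using made-up sums of squares; for simple regression k = 1.

```python
# Relationships among the Excel regression-output quantities.
import numpy as np

n, k = 30, 1
SSR, SSE = 420.0, 180.0        # made-up regression and residual sums of squares
SST = SSR + SSE

R2 = SSR / SST                 # R Square
se = np.sqrt(SSE / (n - 2))    # Standard Error (simple regression)

MSR = SSR / k
MSE = SSE / (n - k - 1)
F = MSR / MSE                  # ANOVA F statistic
print(R2, se, MSR, MSE, F)
```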