
MCMC: Congratulations! You passed!


Graded Quiz • 45 min
Due date: Jan 18, 1:59 AM CST
To pass: 80% or higher
Latest submission grade: 80%
Attempts: 4 every 8 hours
1. For Questions 1 through 3, consider the following model for data that take on values between 0 and 1:    1 / 1 point

   x_i | α, β ∼ iid Beta(α, β),  i = 1, …, n,
   α ∼ Gamma(a, b),
   β ∼ Gamma(r, s),

where α and β are independent a priori. Which of the following gives the full conditional density for α up to proportionality?
p(α | β, x) ∝ [Γ(α+β)^n / Γ(α)^n] [∏_{i=1}^n x_i]^(α−1) α^(a−1) e^(−bα) I(0<α<1)

p(α | β, x) ∝ [∏_{i=1}^n x_i]^(α−1) α^(a−1) e^(−bα) I(α>0)

p(α | β, x) ∝ [Γ(α+β)^n / (Γ(α)^n Γ(β)^n)] [∏_{i=1}^n x_i]^(α−1) [∏_{i=1}^n (1−x_i)]^(β−1) α^(a−1) e^(−bα) β^(r−1) e^(−sβ) I(0<α<1) I(0<β<1)

p(α | β, x) ∝ [Γ(α+β)^n / Γ(α)^n] [∏_{i=1}^n x_i]^(α−1) α^(a−1) e^(−bα) I(α>0)

Correct
When we treat the data and β as known constants, the full joint distribution of all quantities x, α, and β is proportional to this expression when viewed as a function of α.
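
To spell out that reasoning: the full joint distribution is the Beta likelihood times the two Gamma priors,

   p(x, α, β) ∝ [Γ(α+β)^n / (Γ(α)^n Γ(β)^n)] [∏_{i=1}^n x_i]^(α−1) [∏_{i=1}^n (1−x_i)]^(β−1) α^(a−1) e^(−bα) I(α>0) β^(r−1) e^(−sβ) I(β>0).

Holding x and β fixed and dropping every factor that is constant in α (namely Γ(β)^n, [∏_{i=1}^n (1−x_i)]^(β−1), and the entire β prior) leaves exactly the expression in the correct option.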


2. Suppose we want posterior samples for α from the model in Question 1. What is our best option?    1 / 1 point

The full conditional for α is proportional to a common distribution which we can sample directly, so we can draw from that.

The full conditional for α is not a proper distribution (it doesn't integrate to 1), so we cannot sample from it.

The joint posterior for α and β is a common probability distribution which we can sample directly. Thus we can draw Monte Carlo samples for both parameters and keep the samples for α.

The full conditional for α is not proportional to any common probability distribution, and the marginal posterior for β is not any easier, so we will have to resort to a Metropolis-Hastings sampler.

Correct


Another option is to approximate the posterior distribution for α by considering a set of discrete values, such as 0.1, 0.2, …, 0.9, etc. You could use a discrete uniform prior, or discrete prior probabilities proportional to the beta prior evaluated at these specific values. Either way, the full conditional distribution for α looks like the discrete version of Bayes' theorem, which is easy to compute.
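
For concreteness, here is a minimal Python sketch of that discrete approximation. The hyperparameters a and b, the fixed value of β, and the data x are made up for illustration; none of them come from the quiz.

    import numpy as np
    from scipy.special import gammaln

    # Hypothetical values, for illustration only
    a, b = 2.0, 1.0                       # Gamma(a, b) prior on alpha
    beta = 3.0                            # current (fixed) value of beta
    x = np.array([0.2, 0.5, 0.7, 0.9])    # made-up data on (0, 1)
    n = len(x)

    # Discrete grid of candidate alpha values
    grid = np.arange(0.1, 5.0, 0.1)

    # Log full conditional for alpha on the grid, up to a constant:
    # n log Γ(α+β) − n log Γ(α) + (α−1) Σ log x_i + (a−1) log α − bα
    log_g = (n * gammaln(grid + beta) - n * gammaln(grid)
             + (grid - 1.0) * np.log(x).sum()
             + (a - 1.0) * np.log(grid) - b * grid)

    # Discrete Bayes' theorem: exponentiate (stably) and normalize
    probs = np.exp(log_g - log_g.max())
    probs /= probs.sum()

    # Draw posterior samples for alpha from the discrete approximation
    samples = np.random.choice(grid, size=1000, p=probs)
    print(samples.mean())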
3. If we elect to use a Metropolis-Hastings algorithm to draw posterior samples for α, the Metropolis-Hastings candidate acceptance ratio is computed using the full conditional for α as    1 / 1 point

   [Γ(α)^n Γ(α*+β)^n [∏_{i=1}^n x_i]^(α*−1) (α*)^(a−1) e^(−bα*) q(α | α*) I(α*>0)] / [Γ(α*)^n Γ(α+β)^n [∏_{i=1}^n x_i]^(α−1) α^(a−1) e^(−bα) q(α* | α) I(α>0)]

where α* is a candidate value drawn from proposal distribution q(α* | α). Suppose that instead of the full conditional for α, we use the full joint posterior distribution of α and β and simply plug in the current (or known) value of β. What is the Metropolis-Hastings ratio in this case?
[Γ(α*+β)^n [∏_{i=1}^n x_i]^(α*−1) [∏_{i=1}^n (1−x_i)]^(β−1) (α*)^(a−1) e^(−bα*) β^(r−1) e^(−sβ) q(α | α*) I(α*>0) I(β>0)] / [Γ(α*)^n Γ(β)^n q(α* | α)]

[Γ(α)^n Γ(α*+β)^n [∏_{i=1}^n x_i]^(α*−1) (α*)^(a−1) e^(−bα*) q(α | α*) I(α*>0)] / [Γ(α*)^n Γ(α+β)^n [∏_{i=1}^n x_i]^(α−1) α^(a−1) e^(−bα) q(α* | α) I(α>0)]

[(α*)^(a−1) e^(−bα*) q(α | α*) I(α*>0)] / [α^(a−1) e^(−bα) q(α* | α) I(α>0)]

[Γ(α)^n Γ(α*+β)^n [∏_{i=1}^n x_i]^(α*−1) q(α | α*) I(α*>0)] / [Γ(α*)^n Γ(α+β)^n [∏_{i=1}^n x_i]^(α−1) q(α* | α) I(α>0)]

Correct

All of the terms involving only β are identical in the numerator and denominator, and thus cancel out. The
acceptance ratio is the same whether we use the full joint posterior or the full conditional in a Gibbs sampler.
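
To see the cancellation in code, here is a minimal Python sketch of one Metropolis-Hastings step for α on the log scale. It assumes a symmetric random-walk normal proposal, so the q terms also drop out of the ratio; the function names and inputs are illustrative, not taken from the lesson.

    import numpy as np
    from scipy.special import gammaln

    def log_g(alpha, beta, x, a, b):
        # Log full conditional for alpha, up to an additive constant.
        # Factors involving only beta are omitted; they would cancel in
        # the acceptance ratio anyway, which is why the full joint
        # posterior yields the same ratio.
        if alpha <= 0:
            return -np.inf                 # enforces the indicator I(alpha > 0)
        n = len(x)
        return (n * gammaln(alpha + beta) - n * gammaln(alpha)
                + (alpha - 1.0) * np.log(x).sum()
                + (a - 1.0) * np.log(alpha) - b * alpha)

    def mh_step(alpha, prop_sd, beta, x, a, b, rng):
        cand = rng.normal(alpha, prop_sd)  # symmetric proposal: q cancels
        log_ratio = log_g(cand, beta, x, a, b) - log_g(alpha, beta, x, a, b)
        return cand if np.log(rng.uniform()) < log_ratio else alpha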

4. For Questions 4 and 5, re-run the Metropolis-Hastings algorithm from Lesson 4 to draw posterior samples from the model for mean company personnel growth for six new companies: (-0.2, -1.5, -5.3, 0.3, -0.8, -2.2). Use the same prior as in the lesson.    1 / 1 point

Below are four possible values for the standard deviation of the normal proposal distribution in the algorithm. Which one
yields the best sampling results?

0.5

1.5

3.0

4.0

Correct

The candidate acceptance rate for this proposal distribution is about 0.3, which yields good results.
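
A Python sketch of the experiment follows. The quiz refers to the code from Lesson 4, which is not reproduced here, so the model below (y_i | μ ∼ N(μ, 1) with a t(0, 1, 1) prior on μ) and everything built on it are assumptions for illustration.

    import numpy as np

    # Assumed model: y_i | mu ~ N(mu, 1),  mu ~ t(0, 1, 1)
    y = np.array([-0.2, -1.5, -5.3, 0.3, -0.8, -2.2])
    n, ybar = len(y), y.mean()

    def log_g(mu):
        # Log posterior up to a constant: normal log-likelihood plus
        # the log t(0, 1, 1) prior density, -log(1 + mu^2)
        return n * (ybar * mu - mu**2 / 2.0) - np.log(1.0 + mu**2)

    def metropolis(prop_sd, n_iter=100_000, mu0=0.0, seed=0):
        rng = np.random.default_rng(seed)
        mu, samples, accepts = mu0, np.empty(n_iter), 0
        for i in range(n_iter):
            cand = rng.normal(mu, prop_sd)       # random-walk proposal
            if np.log(rng.uniform()) < log_g(cand) - log_g(mu):
                mu, accepts = cand, accepts + 1  # accept the candidate
            samples[i] = mu
        return samples, accepts / n_iter

    # Compare acceptance rates across the four proposal standard deviations
    for sd in (0.5, 1.5, 3.0, 4.0):
        _, rate = metropolis(sd)
        print(f"proposal sd {sd}: acceptance rate {rate:.2f}")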

5. Report the posterior mean point estimate for μ, the mean growth, using these six data points. Round your answer to two decimal places.    0 / 1 point

.015

Incorrect

After running the code provided in Lesson 4, approximate the posterior mean of μ by taking the average of the
MCMC samples.
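
With the hypothetical metropolis() sampler sketched under Question 4, that average would be computed along these lines:

    samples, _ = metropolis(prop_sd=1.5)
    burn = 1_000                            # discard warm-up draws
    print(round(samples[burn:].mean(), 2))  # posterior mean estimate for mu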
