The multiple-try method and local optimization in Metropolis sampling

JS Liu, F Liang, WH Wong - Journal of the American Statistical Association, 2000 - Taylor & Francis
Abstract
This article describes a new Metropolis-like transition rule, the multiple-try Metropolis, for Markov chain Monte Carlo (MCMC) simulations. By combining this transition rule with adaptive direction sampling, we propose a novel method for incorporating local optimization steps into an MCMC sampler in a continuous state space. Numerical studies show that the new method performs significantly better than the traditional Metropolis-Hastings (M-H) sampler. With minor tailoring, the multiple-try method can also be used to achieve the effect of a griddy Gibbs sampler without resorting to griddy approximations, and the effect of a hit-and-run algorithm without having to work out the required conditional distribution along a random direction.
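
The sketch below illustrates one multiple-try Metropolis transition under simplifying assumptions: a symmetric Gaussian random-walk proposal T(x, ·) and the simplest weight choice λ(x, y) ≡ 1, so that w(y, x) = π(y)T(y, x). The function and parameter names (mtm_step, log_target, k, step_size) are illustrative and are not taken from the paper.

```python
# A minimal sketch of one multiple-try Metropolis (MTM) transition, assuming a
# symmetric Gaussian random-walk proposal T(x, .) and the weight choice
# lambda(x, y) = 1, so that w(y, x) = pi(y) T(y, x).  Names here
# (mtm_step, log_target, k, step_size) are illustrative, not from the paper.
import numpy as np

def mtm_step(x, log_target, k=5, step_size=1.0, rng=None):
    """Perform one MTM transition from the current state x (1-D array)."""
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]

    def log_kernel(a, b):
        # Log of the symmetric Gaussian proposal density, up to a constant.
        return -0.5 * np.sum((a - b) ** 2) / step_size ** 2

    # Step 1: draw k trial proposals y_1, ..., y_k from T(x, .).
    ys = x + step_size * rng.standard_normal((k, d))
    log_w_y = np.array([log_target(y) + log_kernel(y, x) for y in ys])

    # Step 2: select y = y_J with probability proportional to w(y_j, x).
    probs = np.exp(log_w_y - log_w_y.max())
    probs /= probs.sum()
    y = ys[rng.choice(k, p=probs)]

    # Step 3: draw k-1 reference points from T(y, .) and include the current x.
    xs_ref = np.vstack([y + step_size * rng.standard_normal((k - 1, d)), x])
    log_w_x = np.array([log_target(xr) + log_kernel(xr, y) for xr in xs_ref])

    # Step 4: accept y with probability
    #   min{1, sum_j w(y_j, x) / sum_j w(x*_j, y)}.
    log_ratio = np.logaddexp.reduce(log_w_y) - np.logaddexp.reduce(log_w_x)
    return y if np.log(rng.uniform()) < min(0.0, log_ratio) else x

# Illustrative usage: sample a standard bivariate normal target.
if __name__ == "__main__":
    log_target = lambda z: -0.5 * np.sum(z ** 2)
    x = np.zeros(2)
    draws = []
    for _ in range(2000):
        x = mtm_step(x, log_target, k=5, step_size=2.0)
        draws.append(x)
    print(np.mean(draws, axis=0), np.var(np.array(draws), axis=0))
```

Because the weights sum over all trial points, a proposal selected near a local mode can still be accepted with reasonable probability, which is what allows the larger, optimization-like moves described in the abstract; the paper discusses more general choices of the weight function than the λ ≡ 1 used in this sketch.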