Abstract
As the number of applications for Markov Chain Monte Carlo (MCMC) grows, the power of these methods as well as their shortcomings become more apparent. While MCMC yields an almost automatic way to sample a space according to some distribution, its implementations often fall short of this task as they may lead to chains which converge too slowly or get trapped within one mode of a multi-modal space. Moreover, it may be difficult to determine if a chain is only sampling a certain area of the space or if it has indeed reached stationarity.
In this paper, we show how a simple modification of the proposal mechanism results in faster convergence of the chain and helps to circumvent the problems described above. This mechanism, which is based on an idea from the field of “small-world” networks, amounts to adding occasional “wild” proposals to any local proposal scheme. We demonstrate, through both theory and extensive simulations, that these new proposal distributions can greatly outperform the traditional local proposals when it comes to exploring complex heterogeneous spaces and multi-modal distributions. Our method can easily be applied to most, if not all, problems involving MCMC and, unlike many other remedies that improve the performance of MCMC, it preserves the simplicity of the underlying algorithm.
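To make the idea concrete, the sketch below shows one simple way such a mixed proposal could be wired into a random-walk Metropolis sampler. This is only an illustrative reading of the abstract, not the authors' exact scheme: the function name, the Gaussian form of the “wild” component, and the parameter values (p_wild, sigma_local, sigma_wild) are assumptions chosen for clarity, as is the bimodal toy target.

```python
import numpy as np

def small_world_metropolis(log_target, x0, n_steps,
                           sigma_local=0.1, sigma_wild=10.0, p_wild=0.05,
                           rng=None):
    """Random-walk Metropolis with occasional 'wild' proposals (illustrative sketch).

    With probability p_wild the candidate is drawn from a much wider Gaussian
    centred at the current state; otherwise the usual local Gaussian proposal
    is used. Both components are symmetric in (current, candidate), so the
    mixture proposal is symmetric and the plain Metropolis acceptance ratio
    pi(y)/pi(x) applies unchanged.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    logp = log_target(x)
    samples = np.empty((n_steps,) + x.shape)
    for i in range(n_steps):
        # Choose the proposal scale: mostly local, occasionally wild.
        scale = sigma_wild if rng.random() < p_wild else sigma_local
        y = x + scale * rng.standard_normal(x.shape)
        logq = log_target(y)
        # Standard Metropolis accept/reject step on the log scale.
        if np.log(rng.random()) < logq - logp:
            x, logp = y, logq
        samples[i] = x
    return samples

# Hypothetical bimodal target with well-separated modes: a purely local
# proposal tends to stay in one mode, while the occasional wild move
# can jump between them.
def log_target(x):
    return np.logaddexp(-0.5 * np.sum((x + 5.0) ** 2),
                        -0.5 * np.sum((x - 5.0) ** 2))

chain = small_world_metropolis(log_target, x0=np.zeros(1), n_steps=20000)
```

Setting p_wild to zero recovers the ordinary local random-walk sampler, which makes the effect of the occasional wild move easy to compare on the same target.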
Cite this article
Guan, Y., Fleißner, R., Joyce, P. et al. Markov Chain Monte Carlo in small worlds. Stat Comput 16, 193–202 (2006). https://doi.org/10.1007/s11222-006-6966-6