
2 - Adaptive Markov chain Monte Carlo: theory and methods

from I - Monte Carlo

Published online by Cambridge University Press: 07 September 2011

Yves Atchadé (University of Michigan)
Gersende Fort (LTCI, CNRS – Telecom ParisTech)
Eric Moulines (LTCI, CNRS – Telecom ParisTech)
Pierre Priouret (Université P. & M. Curie, Paris)
David Barber (University College London)
A. Taylan Cemgil (Boğaziçi Üniversitesi, Istanbul)
Silvia Chiappa (University of Cambridge)

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2011


