How to cite this WIREs title:
WIREs Comp Stat

Accelerating MCMC algorithms



Markov chain Monte Carlo (MCMC) algorithms are used to simulate from complex statistical distributions by way of a local exploration of these distributions. This local feature avoids heavy demands on understanding the nature of the target, but it also potentially induces a lengthy exploration of this target, with a requirement on the number of simulations that grows with the dimension of the problem and with the complexity of the data behind it. Several techniques are available for accelerating the convergence of these Monte Carlo algorithms, either at the exploration level (as in tempering, Hamiltonian Monte Carlo, and partly deterministic methods) or at the exploitation level (with Rao–Blackwellization and scalable methods).

This article is categorized under:
Statistical and Graphical Methods of Data Analysis > Markov Chain Monte Carlo (MCMC)
Algorithms and Computational Methods > Algorithms
Statistical and Graphical Methods of Data Analysis > Monte Carlo Methods
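The local exploration mentioned above can be made concrete with a minimal random-walk Metropolis–Hastings sampler. This is a generic sketch, not code from the article; the standard normal target, the starting point, and the proposal scale are all illustrative assumptions:

```python
import numpy as np

def log_target(x):
    # Illustrative target: unnormalized standard normal log-density.
    return -0.5 * x ** 2

def rw_metropolis(log_target, x0, n_iter=20_000, scale=2.4, seed=0):
    """Random-walk Metropolis: local exploration via Gaussian proposals."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x, logp = x0, log_target(x0)
    for i in range(n_iter):
        prop = x + scale * rng.standard_normal()
        logp_prop = log_target(prop)
        # Accept with probability min(1, pi(prop) / pi(x)).
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain[i] = x
    return chain

chain = rw_metropolis(log_target, x0=5.0)
```

The proposal scale 2.4 follows the usual rule of thumb for a unidimensional Gaussian target; a much smaller scale would slow the exploration of the target, which is exactly the cost of locality the abstract points to.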
Markov chains produced by an adaptive algorithm where the proposal distribution is a Gaussian distribution with mean and variance computed from the past simulations of the chain. The three rows correspond to different initial distributions. The fit of the histogram of the resulting MCMC sample is poor, even for the most spread‐out initial distribution (bottom). Source: Robert and Casella ()
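A minimal sketch of the adaptive scheme this caption describes, in which the Gaussian proposal's variance is estimated from the past simulations of the chain. The target, the initial moment values, and the Welford-style running updates are illustrative assumptions, not the exact algorithm behind the figure:

```python
import numpy as np

def log_target(x):
    # Illustrative target: unnormalized standard normal log-density.
    return -0.5 * x ** 2

def adaptive_metropolis(log_target, x0, n_iter=20_000, eps=1e-6, seed=0):
    """Gaussian proposal whose variance tracks the past of the chain."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_iter)
    x, logp = x0, log_target(x0)
    mean, var = x0, 1.0          # running moments of the chain so far
    for i in range(n_iter):
        # 2.38^2 / d is the classical scaling in dimension d = 1;
        # eps keeps the proposal variance bounded away from zero.
        prop = x + np.sqrt(2.38 ** 2 * var + eps) * rng.standard_normal()
        logp_prop = log_target(prop)
        if np.log(rng.uniform()) < logp_prop - logp:
            x, logp = prop, logp_prop
        chain[i] = x
        # Welford-style running update of the chain's mean and variance.
        delta = x - mean
        mean += delta / (i + 2)
        var += (delta * (x - mean) - var) / (i + 2)
    return chain

chain = adaptive_metropolis(log_target, x0=5.0)
```

Because the proposal depends on the whole past, the resulting process is no longer a Markov chain in the usual sense, which is one reason a poorly started adaptive sampler can fit the target badly, as the figure illustrates.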
Unnormalized tempered target densities of a bimodal Gaussian mixture using inverse temperature levels β = {1, .1, .05, .005}, respectively. At the hot state (bottom right) it is evident that the mode centred on 40 begins to dominate the weight as β decreases toward 0, even though at the cold state (β = 1) it accounted for only a fraction (.2) of the total mass
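The flattening effect of tempering can be checked numerically on a mixture of the same shape as in the figure (weight .2 on the mode centred on 40); the component widths below are assumptions for illustration:

```python
import numpy as np

def log_mixture(x, w=0.2, mu=(0.0, 40.0), sigma=(4.0, 4.0)):
    # Bimodal Gaussian mixture with weight w = .2 on the mode centred on 40;
    # the component widths are illustrative assumptions.
    comp0 = np.log(1 - w) - 0.5 * ((x - mu[0]) / sigma[0]) ** 2 - np.log(sigma[0])
    comp1 = np.log(w) - 0.5 * ((x - mu[1]) / sigma[1]) ** 2 - np.log(sigma[1])
    return np.logaddexp(comp0, comp1)

x = np.linspace(-15.0, 55.0, 2000)
flatness = []
for beta in (1.0, 0.1, 0.05, 0.005):
    log_tempered = beta * log_mixture(x)   # unnormalized log pi(x)^beta
    flatness.append(log_tempered.max() - log_tempered.min())
# The range of the log-density scales linearly in beta: the hotter the
# level (smaller beta), the flatter the target, and the easier it is for
# a local sampler to travel between the two modes.
```

This is the rationale for running tempered chains in parallel and swapping their states: the hot levels move freely between modes and feed that mobility back to the cold level.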
Percentage of data points used in each iteration of the confidence sampler with a single 2nd‐order Taylor approximation at θMAP. The plots describe 10,000 iterations of the confidence sampler for the posterior distribution of the mean and variance of a unidimensional normal distribution with a flat prior: (left) 10,000 observations are generated from , (right) 10,000 observations are generated from . Source: Bardenet et al. ()
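The Taylor-proxy idea behind this caption can be sketched as follows: a second-order expansion of the log-likelihood sum at θMAP serves as a cheap surrogate, and only a subsample of the residual terms is evaluated exactly. This is a toy Gaussian setting where the expansion happens to be exact; the actual confidence sampler of Bardenet et al. additionally uses concentration inequalities to choose the subsample size adaptively, which is omitted here:

```python
import numpy as np

# Toy model: y_i ~ N(theta, 1), so each log-likelihood term is
# l_i(theta) = -0.5 * (y_i - theta)^2 (up to an additive constant).
rng = np.random.default_rng(0)
y = rng.normal(1.0, 1.0, size=10_000)
theta_map = y.mean()   # MAP estimate under a flat prior

def taylor_proxy(theta):
    # Second-order Taylor expansion of sum_i l_i(theta) around theta_map;
    # for this Gaussian toy model the expansion is exact.
    d = theta - theta_map
    l0 = -0.5 * np.sum((y - theta_map) ** 2)
    g0 = np.sum(y - theta_map)    # gradient at theta_map (~ 0 here)
    h0 = -y.size                  # Hessian of the log-likelihood sum
    return l0 + g0 * d + 0.5 * h0 * d ** 2

def subsampled_loglik(theta, batch=500):
    # Proxy plus a rescaled subsample of the residual terms: an unbiased
    # estimate of the full log-likelihood touching only `batch` points.
    idx = rng.choice(y.size, size=batch, replace=False)
    d = theta - theta_map
    taylor_terms = (-0.5 * (y[idx] - theta_map) ** 2
                    + (y[idx] - theta_map) * d - 0.5 * d ** 2)
    resid = -0.5 * (y[idx] - theta) ** 2 - taylor_terms
    return taylor_proxy(theta) + y.size / batch * resid.sum()
```

When the proxy is accurate, the residuals are small and very few data points are needed per iteration, which is what the near-zero percentages in the figure reflect.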
Elapsed time when drawing 10,000 MCMC samples with different amounts of data under the single‐machine and consensus Monte Carlo algorithms for a hierarchical Poisson regression. The horizontal axis represents the amount of data. The single‐machine algorithm stops after 30 because of the explosion in its computation budget. Source: Scott et al. ()
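The combination step of consensus Monte Carlo (Scott et al.) can be sketched in its scalar form: each machine samples from a subposterior on its shard of the data, and the draws are averaged with inverse-variance weights. The Gaussian toy setting below is an assumption for illustration:

```python
import numpy as np

def consensus_combine(subposterior_draws):
    """Combine draws from S subposteriors by inverse-variance-weighted
    averaging, the scalar version of the consensus Monte Carlo rule."""
    weights = np.array([1.0 / np.var(d) for d in subposterior_draws])
    weights /= weights.sum()
    stacked = np.stack(subposterior_draws)   # shape (S, n_draws)
    return weights @ stacked                 # one combined draw per column

# Toy check: S Gaussian subposteriors N(0, S) whose product is N(0, 1),
# so the combined draws should look like draws from N(0, 1).
rng = np.random.default_rng(1)
S = 4
draws = [rng.normal(0.0, np.sqrt(S), size=5_000) for _ in range(S)]
combined = consensus_combine(draws)
```

Because the machines never communicate during sampling, the elapsed time grows with the shard size rather than with the full data size, which is the scaling advantage visible in the figure.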
Comparison of samples from random‐walk Metropolis‐Hastings, Gibbs sampling, and the NUTS algorithm for a highly correlated 250‐dimensional multivariate Gaussian target. Similar computation budgets are used for all methods to produce the 1,000 samples on display. Source: Hoffman and Gelman ()
A comparison of an ensemble MCMC approach with a regular adaptive MCMC algorithm (lower line) and a static importance sampling approach, in terms of mean square error (MSE), for a fixed total number of likelihood evaluations, where N denotes the size of the ensemble. Source: Martino ()

