Authors:
Quiroz, Matias
Villani, Mattias
Kohn, Robert
Year of Publication: 
Series/Report no.: 
Sveriges Riksbank Working Paper Series 297
Abstract:
The computing time for Markov Chain Monte Carlo (MCMC) algorithms can be prohibitively large for datasets with many observations, especially when the data density for each observation is costly to evaluate. We propose a framework in which the likelihood function is estimated from a random subset of the data, resulting in substantially fewer density evaluations. The data subsets are selected by an efficient Probability Proportional-to-Size (PPS) sampling scheme, in which the inclusion probability of an observation is proportional to an approximation of its contribution to the log-likelihood function. Three broad classes of approximations are presented. The proposed algorithm is shown to sample from a distribution that is within O(m^{-1/2}) of the true posterior, where m is the subsample size. Moreover, the constant in the O(m^{-1/2}) error bound of the likelihood is shown to be small, and the approximation error is demonstrated to be negligible even for small m in our applications. We also propose a simple way to adaptively choose the subsample size m during the MCMC run to optimize sampling efficiency for a fixed computational budget. The method is applied to a bivariate probit model estimated on a dataset with half a million observations, and to a Weibull regression model with random effects for discrete-time survival data.
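To make the core idea concrete, the sketch below illustrates a PPS estimator of the full-data log-likelihood in the spirit of the abstract. It is a minimal illustration only, assuming a with-replacement (Hansen-Hurwitz) form of PPS sampling and a crude quadratic size measure standing in for the paper's approximations of the log-likelihood contributions; the function name pps_loglik_estimate and all parameter names are hypothetical, not the authors' implementation.

import numpy as np

def pps_loglik_estimate(y, theta, loglik_fn, size_measure, m, rng):
    """Hansen-Hurwitz estimator of the full-data log-likelihood
    sum_i l_i(theta), computed from a PPS subsample of size m drawn
    with replacement. Only m density evaluations are required.

    size_measure: cheap positive approximations w_i of |l_i(theta)|;
    inclusion probabilities are proportional to these, so observations
    contributing more to the log-likelihood are sampled more often.
    """
    n = len(y)
    w = np.asarray(size_measure, dtype=float)
    p = w / w.sum()                   # PPS inclusion probabilities
    idx = rng.choice(n, size=m, p=p)  # subsample indices, with replacement
    l = loglik_fn(y[idx], theta)      # exact log-density contributions
    return np.mean(l / p[idx])        # unbiased for sum_i l_i(theta)

# Toy check: N(mu, 2^2) model with n = 500,000 observations.
rng = np.random.default_rng(0)
y = rng.normal(1.0, 2.0, size=500_000)
loglik = lambda obs, mu: -0.5 * np.log(8 * np.pi) - (obs - mu) ** 2 / 8.0
w = (y - 1.0) ** 2 + 1.0              # crude stand-in for |l_i(theta)|
est = pps_loglik_estimate(y, 1.0, loglik, w, m=1_000, rng=rng)
print(est, loglik(y, 1.0).sum())      # estimate vs. exact full-data value

Within a pseudo-marginal MCMC run, an estimate of this kind would replace the exact log-likelihood at each iteration, trading 500,000 density evaluations for 1,000 per proposal; the paper's error analysis concerns how this substitution perturbs the stationary distribution.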
Keywords:
Bayesian inference
Markov Chain Monte Carlo
Pseudo-marginal MCMC
Big Data
Probability Proportional-to-Size sampling
Numerical integration
Document Type: 
Working Paper

