Using Autodiff to Estimate Posterior Moments, Marginals and Samples: Methods

by Bayesian Inference, April 15th, 2024

Too Long; Didn't Read

Importance weighting allows us to reweight samples drawn from a proposal in order to compute expectations of a different distribution.
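As a rough illustration of this idea (a minimal sketch on a hypothetical toy problem, not the paper's method): we draw samples from a proposal distribution q, weight each sample by the density ratio p/q, and use the self-normalised weighted average to estimate an expectation under the target p. All names and the choice of Gaussians here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: target p = N(1, 1), proposal q = N(0, 2).
K = 100_000
z = rng.normal(0.0, 2.0, size=K)  # samples drawn from the proposal q

# Log-densities of target and proposal at the sampled points.
log_p = -0.5 * (z - 1.0) ** 2 - 0.5 * np.log(2 * np.pi)
log_q = -0.5 * (z / 2.0) ** 2 - np.log(2.0) - 0.5 * np.log(2 * np.pi)
w = np.exp(log_p - log_q)  # importance weights p(z) / q(z)

# Self-normalised importance-sampling estimate of E_p[z] (true value: 1).
est = np.sum(w * z) / np.sum(w)
```

With enough samples and a proposal that covers the target, `est` converges to the target expectation even though no sample was drawn from p itself.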

This paper is available on arXiv under a CC 4.0 license.

Authors:

(1) Sam Bowyer, Equal contribution, Department of Mathematics, [email protected];

(2) Thomas Heap, Equal contribution, Department of Computer Science, University of Bristol, [email protected];

(3) Laurence Aitchison, Department of Computer Science, University of Bristol, [email protected].

Methods

The contribution of this paper is not the unbiased marginal likelihood estimator itself, which has previously been used to learn general probabilistic models. Rather, our main contribution is a novel approach to computing key quantities of interest in Bayesian computation, obtained by applying the source term trick to the massively parallel marginal likelihood estimator. In the following sections, we outline in turn how to compute posterior expectations, marginals and samples.
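To give a flavour of the source term trick, here is a hedged sketch on a toy conjugate-Gaussian model (prior z ~ N(0, 1), likelihood x | z ~ N(z, 1)), not the paper's massively parallel implementation. The idea: augment the log marginal-likelihood estimate with a source term J multiplying the moment of interest; its derivative at J = 0 recovers the self-normalised importance-sampling estimate of that posterior moment. The function name `log_marginal` and the finite-difference derivative (standing in for autodiff) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model: prior z ~ N(0, 1), likelihood x | z ~ N(z, 1), observed x = 0.5.
K = 50_000
z = rng.normal(size=K)  # proposal = prior, so weights = likelihood
x = 0.5
log_w = -0.5 * (x - z) ** 2 - 0.5 * np.log(2 * np.pi)

def log_marginal(J, m):
    # Source-term-augmented estimator: log (1/K) sum_k w_k exp(J * m(z_k)),
    # computed stably by subtracting the max before exponentiating.
    a = log_w + J * m
    mx = a.max()
    return mx + np.log(np.mean(np.exp(a - mx)))

# Moment of interest: m(z) = z, i.e. the posterior mean E[z | x].
m = z
eps = 1e-5
grad = (log_marginal(eps, m) - log_marginal(-eps, m)) / (2 * eps)

# The derivative at J = 0 matches the self-normalised IS estimate of E[z | x]
# (the true posterior mean for this conjugate model is x / 2 = 0.25).
w_n = np.exp(log_w - log_w.max())
snis = np.sum(w_n * z) / np.sum(w_n)
```

In the paper this derivative is taken with autodiff rather than finite differences, which is what lets the same trick scale to the massively parallel estimator.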






Figure 1: Results obtained on the MovieLens model. Columns a–c show the evidence lower bound, predictive log-likelihood and variance of the estimator of zm on the true MovieLens100K data. Column d shows the mean squared error of the estimator of zm when the data is sampled from the model, so that the true value of zm is known. Error bars in the top row show the standard deviation across different dataset splits.