Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design - 2021

Details

Title: Deep Adaptive Design: Amortizing Sequential Bayesian Experimental Design
Author(s): Foster, Adam and Ivanova, Desi R. and Malik, Ilyas and Rainforth, Tom
Link(s): https://proceedings.mlr.press/v139/foster21a.html

Rough Notes

Bayesian Experimental Design (BED) is concerned with choosing designs \(\xi\) to generate observations \(y\) that maximize information about some latent quantity \(\theta\) of interest. This requires a prior \(p(\theta)\) and a likelihood \(y_t \sim p(y|\theta,\xi_t)\), where \(\xi_t\) is the design chosen at time \(t\). The myopic BED approach selects the \(\xi_t\) that maximizes the expected reduction in posterior entropy \(\mathbb{E}_{p(y_t|\xi_t,h_{t-1})}[H[p(\theta|h_{t-1})] - H[p(\theta|h_t)]]\) (also called the expected information gain, EIG), where \(h_t\) is the history of designs and observations up to time \(t\). This quantity, however, is doubly intractable: neither the posterior nor the outer expectation has a closed form, so every design choice requires an expensive nested estimation.
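To make the nested structure concrete, here is a nested Monte Carlo estimate of the myopic EIG on a toy linear-Gaussian model. The model (\(\theta \sim N(0,1)\), \(y|\theta,\xi \sim N(\theta\xi, 1)\)) and all names below are my own illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative assumption, not from the paper):
#   theta ~ N(0, 1),   y | theta, xi ~ N(theta * xi, 1).

def log_lik(y, theta, xi, sigma=1.0):
    """Gaussian log-likelihood log p(y | theta, xi)."""
    return -0.5 * np.log(2 * np.pi * sigma**2) - 0.5 * ((y - theta * xi) / sigma) ** 2

def eig_nmc(xi, n_outer=2000, n_inner=2000):
    """Nested Monte Carlo EIG estimate for a single design xi."""
    theta = rng.normal(size=n_outer)           # outer samples from the prior
    y = theta * xi + rng.normal(size=n_outer)  # simulated observations
    inner = rng.normal(size=n_inner)           # fresh prior samples for the marginal
    # Inner loop: estimate log p(y | xi) by averaging the likelihood over prior samples.
    ll_inner = log_lik(y[:, None], inner[None, :], xi)
    log_marg = np.log(np.mean(np.exp(ll_inner), axis=1))
    # EIG estimate: E[ log p(y | theta, xi) - log p(y | xi) ].
    return np.mean(log_lik(y, theta, xi) - log_marg)
```

The inner average must be recomputed for every outer sample and every candidate design, which is exactly the per-step cost that an amortized policy avoids at deployment time. In this toy model a larger \(|\xi|\) amplifies the signal, so the estimated EIG should grow with \(|\xi|\).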

An alternative approach is to learn a design policy network that maps the history \(h_{t-1}\) directly to the next design \(\xi_t\) in a single forward pass, amortizing the cost of design optimization: the expensive computation happens once at training time, and running the experiment requires no further optimization.
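A minimal sketch of such a policy's forward pass, using a permutation-invariant sum-pooling of encoded \((\xi, y)\) pairs as in the paper's architecture; the weights and dimensions here are arbitrary, untrained placeholders of my own choosing:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy, untrained weights (placeholders, not the paper's trained network).
W_enc = rng.normal(size=(2, 16))   # encoder for one (design, outcome) pair
W_emit = rng.normal(size=(16, 1))  # emission head producing the next design

def policy(history):
    """Map a history [(xi_1, y_1), ...] to the next design in one forward pass."""
    if not history:
        pooled = np.zeros(16)                        # empty history at t = 1
    else:
        pairs = np.array(history)                    # shape (t, 2)
        pooled = np.tanh(pairs @ W_enc).sum(axis=0)  # permutation-invariant pooling
    return float(np.tanh(pooled) @ W_emit)
```

Sum-pooling makes the output insensitive to the order of the history pairs, so permuting the past experiments leaves the next design unchanged.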

The training objective for the policy network is the total expected information gain \(\mathbb{E}[\sum_{t=1}^T EIG_t]\), which is still intractable. The authors instead optimize a lower bound, which they call sequential Prior Contrastive Estimation (sPCE): \(\mathcal{L}_T(\pi, L) = \mathbb{E}\left[\log \frac{p(h_T|\theta_0,\pi)}{\frac{1}{L+1}\sum_{l=0}^{L} p(h_T|\theta_l,\pi)}\right]\), where \(\theta_0\) is the sample that generated the history and \(\theta_{1:L}\) are contrastive samples from the prior. The bound is capped at \(\log(L+1)\) and tightens as \(L \to \infty\).
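A sketch of estimating the sPCE bound on the same toy linear-Gaussian model (\(\theta \sim N(0,1)\), \(y|\theta \sim N(\theta\xi, 1)\), my own assumption), with a fixed design standing in for the learned policy to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_lik(y, theta, xi):
    """Gaussian log-likelihood log p(y | theta, xi) with unit noise."""
    return -0.5 * np.log(2 * np.pi) - 0.5 * (y - theta * xi) ** 2

def spce(xi, L=50, n=4000, T=1):
    """Monte Carlo estimate of the sPCE bound with L contrastive samples."""
    theta0 = rng.normal(size=n)                          # samples that generate data
    h = theta0[:, None] * xi + rng.normal(size=(n, T))   # histories of T outcomes
    # theta_0 plus L contrastive samples from the prior, per history.
    thetas = np.concatenate([theta0[:, None], rng.normal(size=(n, L))], axis=1)
    # Total log-likelihood of each history under each theta: shape (n, L+1).
    ll = log_lik(h[:, None, :], thetas[:, :, None], xi).sum(axis=2)
    # Stabilized log of the (L+1)-term average in the denominator.
    m = ll.max(axis=1, keepdims=True)
    log_denom = np.log(np.mean(np.exp(ll - m), axis=1)) + m[:, 0]
    return np.mean(ll[:, 0] - log_denom)
```

Because \(\theta_0\) itself appears in the denominator's average, each log-ratio is at most \(\log(L+1)\), so the estimate respects the bound's cap by construction.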

One drawback is that DAD needs to backpropagate through the designs, so it cannot be applied directly when the design space is discrete, e.g. in an active learning setting where the candidate points to label form a discrete pool.
