Expectation Propagation (EP)
Expectation Propagation (EP) is an algorithm for approximating an un-normalized probability density \(p\) by a simpler parametric distribution \(\mathcal{Q}\). It is often used in Bayesian statistics for approximate inference.
It minimizes the KL divergence \(\text{KL}(p||\mathcal{Q})\), in contrast to Variational Inference (VI), which minimizes \(\text{KL}(\mathcal{Q}||p)\).
\(\mathcal{Q}\) is restricted to distributions that are closed under products, namely the exponential family: \(\mathcal{Q}\) is of the form \(\mathcal{Q}(\mathbf{z})=\exp(\bm{\eta}^T\mathbf{u}(\mathbf{z})-g(\bm{\eta}))\), where \(\bm{\eta}\) are the natural parameters, \(\mathbf{u}(\mathbf{z})\) are the sufficient statistics, and \(g(\bm{\eta})=\log \int \exp(\bm{\eta}^T\mathbf{u}(\mathbf{z})) d\mathbf{z}\) is the log-partition function.
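As a concrete member of this family, the sketch below writes a one-dimensional Gaussian \(\mathcal{N}(\mu,\sigma^2)\) in its natural parameterization, with \(\mathbf{u}(z)=(z,z^2)\); the function names are illustrative, and the closed form for \(g(\bm{\eta})\) follows from completing the square in the integral above.

```python
import numpy as np

# Minimal sketch (illustrative names): a 1-D Gaussian N(mu, sigma^2)
# written as Q(z) = exp(eta^T u(z) - g(eta)) with u(z) = (z, z^2).

def gaussian_natural_params(mu, sigma2):
    """Map (mean, variance) to natural parameters eta = (mu/sigma^2, -1/(2 sigma^2))."""
    return np.array([mu / sigma2, -1.0 / (2.0 * sigma2)])

def log_partition(eta):
    """g(eta) = log int exp(eta^T u(z)) dz, in closed form for the Gaussian family."""
    eta1, eta2 = eta
    return -eta1**2 / (4.0 * eta2) + 0.5 * np.log(np.pi / -eta2)

def q_density(z, eta):
    """Q(z) = exp(eta^T u(z) - g(eta))."""
    u = np.array([z, z**2])
    return np.exp(eta @ u - log_partition(eta))

# Sanity check: the natural-parameter form recovers the usual Gaussian pdf.
eta = gaussian_natural_params(mu=1.0, sigma2=2.0)
z = 0.3
pdf = np.exp(-(z - 1.0)**2 / (2 * 2.0)) / np.sqrt(2 * np.pi * 2.0)
assert np.isclose(q_density(z, eta), pdf)
```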
For such \(\mathcal{Q}\) we have that \[ \text{KL}(p||\mathcal{Q})=\int p(\mathbf{z})\log \frac{p(\mathbf{z})}{\mathcal{Q}(\mathbf{z})}d\mathbf{z}\\ =g(\bm{\eta})-\bm{\eta}^T \mathbb{E}_p[\mathbf{u}(\mathbf{z})] + \text{constant}, \] where the constant, \(\int p(\mathbf{z})\log p(\mathbf{z})\,d\mathbf{z}\), does not depend on \(\bm{\eta}\).
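This identity can be checked numerically. The sketch below builds on the Gaussian helpers from the previous snippet, takes an assumed two-component Gaussian mixture for \(p\) (an illustrative choice, not from the text), and compares a direct quadrature estimate of \(\text{KL}(p||\mathcal{Q})\) against \(g(\bm{\eta})-\bm{\eta}^T \mathbb{E}_p[\mathbf{u}(\mathbf{z})] + \int p \log p\).

```python
from scipy.stats import norm
from scipy.integrate import quad

# Assumes gaussian_natural_params, log_partition, q_density from the
# previous sketch. p is an illustrative two-component Gaussian mixture.

def p_density(z):
    return 0.5 * norm.pdf(z, -1.0, 0.8) + 0.5 * norm.pdf(z, 2.0, 1.2)

mu, sigma2 = 0.5, 3.0                      # some Q in the family
eta = gaussian_natural_params(mu, sigma2)

# Left-hand side: direct numerical KL(p||Q).
kl = quad(lambda z: p_density(z) * (np.log(p_density(z))
          - np.log(q_density(z, eta))), -15, 15)[0]

# Right-hand side: g(eta) - eta^T E_p[u(z)] + int p log p.
E_u = np.array([quad(lambda z: p_density(z) * z, -15, 15)[0],
                quad(lambda z: p_density(z) * z**2, -15, 15)[0]])
const = quad(lambda z: p_density(z) * np.log(p_density(z)), -15, 15)[0]
rhs = log_partition(eta) - eta @ E_u + const

assert np.isclose(kl, rhs)  # the identity holds up to quadrature error
```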
Setting the gradient of the above KL divergence with respect to the natural parameters \(\bm{\eta}\) to zero gives \(\frac{\partial g(\bm{\eta})}{\partial \bm{\eta}} = \mathbb{E}_p[\mathbf{u}(\mathbf{z})]\). Since \(\frac{\partial g(\bm{\eta})}{\partial \bm{\eta}} = \mathbb{E}_{\mathcal{Q}}[\mathbf{u}(\mathbf{z})]\), the optimal \(\mathcal{Q}\) matches the expected sufficient statistics of \(p\), i.e. minimizing \(\text{KL}(p||\mathcal{Q})\) over the family reduces to moment matching.
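For the Gaussian family this stationarity condition says the optimal \(\mathcal{Q}\) simply has the same mean and variance as \(p\). A sketch, reusing the illustrative mixture \(p\) and the helpers from the snippets above:

```python
# Moment matching: E_Q[u(z)] = E_p[u(z)] with u(z) = (z, z^2) pins down
# the mean and variance of the KL(p||Q)-optimal Gaussian.
m1 = quad(lambda z: p_density(z) * z, -15, 15)[0]      # E_p[z]
m2 = quad(lambda z: p_density(z) * z**2, -15, 15)[0]   # E_p[z^2]
mu_star, sigma2_star = m1, m2 - m1**2                  # matched moments
eta_star = gaussian_natural_params(mu_star, sigma2_star)

def kl_p_q(eta):
    """Numerical KL(p||Q) for a Gaussian Q with natural parameters eta."""
    return quad(lambda z: p_density(z) * (np.log(p_density(z))
                - np.log(q_density(z, eta))), -15, 15)[0]

# The moment-matched Gaussian beats an arbitrary other member of the family.
assert kl_p_q(eta_star) <= kl_p_q(gaussian_natural_params(0.0, 1.0))
```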
EP generalizes Loopy Belief Propagation to graphical models whose variables need not be discrete, e.g. models with continuous variables.