Flow Network Based Generative Models for Non-Iterative Diverse Candidate Generation - 2021

Details

Title : Flow Network Based Generative Models for Non-Iterative Diverse Candidate Generation Author(s): Bengio, Emmanuel and Jain, Moksh and Korablyov, Maksym and Precup, Doina and Bengio, Yoshua Link(s) :

Rough Notes

This work presents Generative Flow Networks (GFlowNets), a class of generative models that generate compositional objects (such as graphs, sets, etc.), sampling them in proportion to a reward function defined on these objects. This is in contrast to Reinforcement Learning (RL), where the policy is selected to maximize the reward function.

One application is black-box optimization, e.g. in drug discovery, where sampling diverse candidates is crucial and a batch of many potential candidates is proposed in each round.

In many cases the reward function is noisy and expensive to evaluate; the typical solution is to use existing data of candidates and their reward values to fit a proxy function \(f\) via supervised learning.
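As a minimal sketch of the proxy idea, one could fit a cheap surrogate on observed (candidate, reward) pairs; the linear model and featurization here are illustrative assumptions, not the paper's actual proxy architecture:

```python
import numpy as np

def fit_linear_proxy(X, y):
    """Least-squares proxy f(x) = w.x + b fit on observed reward values.
    X: (n, d) candidate features, y: (n,) noisy/expensive reward observations."""
    A = np.hstack([X, np.ones((len(X), 1))])   # append bias column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda x: np.asarray(x) @ coef[:-1] + coef[-1]
```

The learned \(f\) then stands in for the expensive oracle when scoring new candidates.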

Compared to batch Bayesian Optimization, the computational cost of GFlowNets is linear in the batch size.

GFlowNets work in the episodic setting of RL: the reward is constructed to be 0 for intermediate states and nonzero only at a terminal state. Each (discrete) action modifies the state to construct the compositional object, and state transitions are deterministic. This is characterized as a flow network - a weighted Directed Acyclic Graph (DAG) in which, for each node, the incoming flows (weights) equal the outgoing flows. The action taking state \(s_1\) to the next state \(s_2\) is sampled in proportion to the outgoing flows of \(s_1\).
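A toy sketch of that sampling rule, on a hand-made DAG (the states and flow values below are made-up numbers for illustration, not trained flows):

```python
import random

# Hypothetical flow network: each state maps children to edge flows F(s -> s').
# States with no outgoing edges are terminal.
flows = {
    "s0": {"s1": 3.0, "s2": 1.0},   # from s0, pick s1 with prob 3/4, s2 with 1/4
    "s1": {"s3": 2.0, "s4": 1.0},
    "s2": {"s4": 1.0},
    "s3": {},
    "s4": {},
}

def sample_trajectory(state="s0"):
    """Walk the DAG, choosing each action with probability proportional
    to the outgoing flow of the current state, until a terminal state."""
    traj = [state]
    while flows[state]:
        children, weights = zip(*flows[state].items())
        state = random.choices(children, weights=weights)[0]
        traj.append(state)
    return traj
```

Because every trajectory ends at a terminal state, repeatedly calling `sample_trajectory()` yields complete objects, with frequencies governed by the flows.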

The approach is also off-policy, i.e. the training trajectories can come from a different policy than the one being learned.

"The proposed algorithm is inspired by Bellman updates and converges when the incoming and outgoing flow into and out of each state match".
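A rough sketch of that flow-matching condition as a per-state loss, assuming (as in the paper) the mismatch is measured in log space with a small constant to dampen the influence of tiny flows; the exact parameterization here is simplified:

```python
import math

def flow_matching_loss(in_flows, out_flows, reward=0.0, eps=1.0):
    """Squared log-ratio between total inflow and total outflow at one state.
    in_flows/out_flows: lists of nonnegative edge flows; for a terminal
    state the reward enters on the outflow side. Loss is 0 when flows match."""
    inflow = sum(in_flows)
    outflow = sum(out_flows) + reward
    return (math.log(eps + inflow) - math.log(eps + outflow)) ** 2
```

Minimizing this over all states drives the network toward a consistent flow, at which point sampling in proportion to outgoing flows yields terminal states with probability proportional to their reward.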
