2021 | George Papamakarios*, Eric Nalisnick*, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan | Normalizing Flows for Probabilistic Modeling and Inference (JMLR)
Normalizing flows provide a general mechanism for defining expressive probability distributions by transforming a simple base distribution through a series of invertible, differentiable transformations. This paper reviews normalizing flows, emphasizing their role in probabilistic modeling and inference. It discusses the fundamental principles of flow design, including expressive power, computational trade-offs, and the relationship of flows to general probability transformations. The review also covers applications in generative modeling, approximate inference, and supervised learning.
The paper begins by defining normalizing flows as a method for constructing flexible probability distributions over continuous random variables. It explains how flows work by transforming a base distribution through a series of invertible transformations, allowing complex, multi-modal distributions to be built from simple ones. The key property of flows is that the transformations are invertible and differentiable, which makes the density of the transformed variable available in closed form via the change-of-variables formula and the Jacobian determinant of the transformation.
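Concretely, if u has base density p_u(u) and x = T(u) for an invertible, differentiable T (the notation used in the paper), the change-of-variables formula gives

    p_x(x) = p_u(u) |det J_T(u)|^{-1},   where u = T^{-1}(x),

and for a composition T = T_K ∘ ... ∘ T_1 the log absolute Jacobian determinants of the individual transformations simply add, which is what keeps deep flows tractable.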
The paper then explores the expressive power of flow-based models, showing that under mild conditions flows are universal: a sufficiently flexible flow can represent any target distribution. It discusses the use of flows for modeling and inference, treating the forward and reverse KL divergences as the two standard objectives for fitting a flow to a target distribution: the forward KL, which reduces to maximum likelihood estimation when only samples from the target are available, and the reverse KL, which requires the target density up to a normalizing constant and underlies variational inference. The paper also covers alternative divergences, such as f-divergences and integral probability metrics, and their applications in training flows.
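As a toy illustration of the forward-KL route (a minimal sketch, not code from the paper): with only samples from the target available, minimizing the forward KL reduces to maximizing the flow's log-likelihood, shown here for a one-dimensional affine flow with a standard-normal base. The synthetic data, learning rate, and parameterization are illustrative assumptions.

```python
# Minimal sketch (not code from the paper): fitting a 1-D affine flow
# x = a*u + b with standard-normal base u ~ N(0, 1) by maximum likelihood,
# which is equivalent to minimizing the forward KL divergence to the target.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=1000)  # samples from a hypothetical target

log_a, b = 0.0, 0.0   # parameterize a = exp(log_a) so the flow stays invertible
lr = 0.05
for _ in range(500):
    a = np.exp(log_a)
    u = (data - b) / a              # inverse transformation T^{-1}(x)
    # log p_x(x) = log N(u; 0, 1) - log a   (change of variables)
    # closed-form gradients of the mean log-likelihood:
    grad_log_a = np.mean(u**2) - 1.0
    grad_b = np.mean(u) / a
    log_a += lr * grad_log_a        # gradient ascent on the log-likelihood
    b += lr * grad_b

print(f"fitted scale {np.exp(log_a):.2f} (true 2.0), shift {b:.2f} (true 3.0)")
```

The reverse-KL route would instead sample u from the base distribution, push it through the flow, and evaluate the unnormalized target density at the result, as in variational inference.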
The paper provides a historical overview of normalizing flows, tracing their development from early whitening transformations to modern deep architectures. It then focuses on constructing flows as finite compositions of simple transformations, with autoregressive flows discussed in depth as the most popular class. In an autoregressive flow, each output dimension is produced by a "transformer", an invertible one-dimensional function whose parameters are computed by a "conditioner" that depends only on the preceding input dimensions; this structure makes the Jacobian triangular and its determinant cheap to compute (see the sketch below). The paper surveys several families of transformers, including affine, combination-based, integration-based, and spline-based ones, each with its own trade-off between expressiveness and ease of inversion.
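A minimal sketch of this structure, assuming an affine transformer and a toy linear conditioner (in the paper the conditioner is typically a masked neural network such as MADE; every name and parameter below is an illustrative assumption):

```python
# Illustrative sketch (not the paper's code) of an affine autoregressive flow:
# each output dimension is produced by an affine "transformer" whose parameters
# come from a "conditioner" that sees only the preceding input dimensions.
import numpy as np

def conditioner(x_prev, params):
    """Toy linear conditioner mapping x_{<i} to (shift, log_scale).
    In practice this would be a masked neural network (e.g. MADE)."""
    w_shift, w_scale = params
    return float(x_prev @ w_shift), float(x_prev @ w_scale)

def forward(x, params_per_dim):
    """Transform x -> z. Conceptually parallel across dimensions,
    since each conditioner reads only the *input* x_{<i}."""
    D = len(x)
    z = np.empty(D)
    log_det = 0.0
    for i in range(D):
        shift, log_scale = conditioner(x[:i], params_per_dim[i])
        z[i] = x[i] * np.exp(log_scale) + shift   # affine transformer
        log_det += log_scale                      # triangular Jacobian: log|det| sums
    return z, log_det

def inverse(z, params_per_dim):
    """Invert z -> x. Inherently sequential: x_{<i} must be recovered
    before the conditioner for dimension i can be evaluated."""
    D = len(z)
    x = np.empty(D)
    for i in range(D):
        shift, log_scale = conditioner(x[:i], params_per_dim[i])
        x[i] = (z[i] - shift) * np.exp(-log_scale)
    return x

rng = np.random.default_rng(0)
D = 4
params = [(rng.normal(size=i), rng.normal(size=i) * 0.1) for i in range(D)]
x = rng.normal(size=D)
z, log_det = forward(x, params)
assert np.allclose(inverse(z, params), x)  # invertibility check
```

The asymmetry the paper highlights is visible here: the direction in which the conditioner reads its own input (the forward pass above) can be evaluated for all dimensions in parallel, while the other direction is inherently sequential.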
The review concludes by emphasizing the importance of understanding the principles of flow design, including the balance between expressive power and computational efficiency, and the role of flows in the broader context of probabilistic modeling and inference.