Pyro: Deep Universal Probabilistic Programming


Submitted 06/18; Published XX/XX | Eli Bingham, Jonathan P. Chen, Martin Jankowiak, Fritz Obermeyer, Neeraj Pradhan, Theofanis Karaletsos, Rohit Singh, Paul Szerlip, Paul Horsfall, Noah D. Goodman
Pyro is a probabilistic programming language (PPL) built on Python, designed for developing advanced probabilistic models in AI research. It leverages PyTorch, a GPU-accelerated deep learning framework, to implement probability distributions and stochastic variational inference algorithms, enabling efficient handling of large datasets and high-dimensional models. Pyro uses Poutine, a library of composable building blocks, to modify the behavior of probabilistic programs and support complex, model-specific algorithmic behavior.

Pyro is designed around four key principles: expressiveness, scalability, flexibility, and minimalism. It allows concise description of models with data-dependent internal control flow or latent variables, and it scales to large datasets and non-conjugate models using GPU-accelerated tensor math and reverse-mode automatic differentiation. It also enables flexible inference algorithms by separating concerns between model, inference, and runtime implementations. Finally, Pyro aims to be minimal by sharing syntax and semantics with existing languages and integrating with other tools.
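To make the modeling layer and Poutine concrete, the following is a minimal, hedged sketch of a Pyro program and two composable handlers (poutine.condition and poutine.trace). The toy model, site names, and numerical values are illustrative assumptions, not drawn from the paper.

    import torch
    import pyro
    import pyro.distributions as dist
    from pyro import poutine

    def scale_model(guess):
        # Latent variable: the unknown weight, centered on our prior guess.
        weight = pyro.sample("weight", dist.Normal(guess, 1.0))
        # Observed variable: a noisy measurement of that weight.
        return pyro.sample("measurement", dist.Normal(weight, 0.75))

    # Poutine handlers compose like ordinary higher-order functions:
    # condition the program on observed data, then record an execution
    # trace containing every sample site and its log-probability.
    conditioned = poutine.condition(scale_model, data={"measurement": torch.tensor(9.5)})
    trace = poutine.trace(conditioned).get_trace(torch.tensor(8.5))
    print(trace.log_prob_sum())

Because the model is an ordinary Python function, data-dependent control flow and nested handlers require no special syntax; inference algorithms are built by stacking such handlers.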
Pyro implements several probabilistic inference algorithms, including the No-U-Turn Sampler (NUTS) and gradient-based stochastic variational inference (SVI), which uses stochastic gradient descent to optimize Monte Carlo estimates of a divergence measure between the approximate and true posterior distributions. Pyro's flexibility allows arbitrary Pyro programs to serve as approximate posteriors or proposal distributions, and it supports custom inference algorithms. Pyro is open source, with code available under an MIT license and hosted on GitHub, and it is developed by the authors together with a community of open-source contributors. Its design decisions are not universally applicable, and other systems make different tradeoffs to achieve different goals.

To demonstrate Pyro's scalability and flexibility, the authors implemented and evaluated several state-of-the-art models, including the variational autoencoder (VAE) and the Deep Markov Model (DMM). The results show that Pyro's abstractions do not reduce scalability and that complex models can be replicated with expressive approximate posteriors built from autoregressive flows. Pyro's modular design allows easy extension and customization of models and inference procedures.
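Returning to the inference interfaces described above, the snippet below is a hedged sketch of how SVI and NUTS are typically invoked: a guide (approximate posterior) written as a Pyro program is optimized by stochastic gradient ascent on a Monte Carlo ELBO estimate, and the same model can be handed to MCMC. The toy model, guide parameterization, learning rate, and iteration counts are illustrative assumptions, not taken from the paper.

    import torch
    from torch.distributions import constraints
    import pyro
    import pyro.distributions as dist
    from pyro.infer import SVI, Trace_ELBO, MCMC, NUTS
    from pyro.optim import Adam

    def model(measurement):
        weight = pyro.sample("weight", dist.Normal(8.5, 1.0))
        pyro.sample("measurement", dist.Normal(weight, 0.75), obs=measurement)

    def guide(measurement):
        # The approximate posterior is itself a Pyro program whose
        # parameters are fit by stochastic gradient descent.
        loc = pyro.param("loc", torch.tensor(8.5))
        scale = pyro.param("scale", torch.tensor(1.0), constraint=constraints.positive)
        pyro.sample("weight", dist.Normal(loc, scale))

    data = torch.tensor(9.5)
    svi = SVI(model, guide, Adam({"lr": 0.01}), loss=Trace_ELBO())
    for step in range(1000):
        svi.step(data)  # one gradient step on a Monte Carlo estimate of the ELBO

    # The same model can instead be passed to the No-U-Turn Sampler:
    mcmc = MCMC(NUTS(model), num_samples=500, warmup_steps=200)
    mcmc.run(data)

Because guides are ordinary Pyro programs, richer approximate posteriors (for example, ones built from autoregressive flows, as in the paper's DMM experiments) can be substituted without changing the model or the SVI loop.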