10 Apr 2015 | Laurent Dinh, David Krueger, Yoshua Bengio
NICE (Non-linear Independent Components Estimation) is a deep learning framework for modeling complex high-dimensional densities. It learns a non-linear, deterministic, invertible transformation that maps the data to a latent space in which the transformed variables follow a factorized distribution, i.e. the latent variables are independent. The transformation is parametrized so that its Jacobian determinant and its inverse are easy to compute, which makes both training and sampling efficient. The training criterion is the exact log-likelihood of the data, which is tractable in this framework.

The architecture is built from coupling layers, whose Jacobian is triangular: the determinant is therefore trivial to compute, and the transformation is trivially invertible. A final diagonal scaling layer lets the model assign varying importance to different latent dimensions. Because the prior distribution is factorial, maximizing the likelihood pushes the learned transformation toward independent latent factors, so meaningful structure in the data is captured by individual dimensions.

NICE is compared to other generative models such as variational auto-encoders and generative adversarial networks. Experiments on image datasets show that it achieves competitive log-likelihoods and can generate realistic samples, and it performs well on inpainting even though it is not trained for that task. Since sampling only requires drawing from the factorized prior and inverting the transformation, NICE offers efficient, unbiased ancestral sampling, and it provides a flexible framework for learning highly non-linear bijective transformations.
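A minimal NumPy sketch may help make these mechanics concrete. It implements a single additive coupling layer (the paper stacks several, alternating which half of the input is held fixed), a diagonal scaling layer, and the resulting exact log-likelihood under a standard Gaussian prior (the paper also considers a logistic prior). The MLP weights here are hypothetical stand-ins for trained parameters, and all function names are illustrative rather than the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

D = 4          # data dimension, split into two halves of size d
d = D // 2

# Hypothetical fixed parameters standing in for a trained coupling network.
W1, b1 = rng.normal(size=(d, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, d)), np.zeros(d)
log_s = 0.1 * rng.normal(size=D)   # diagonal scaling layer, diag(exp(log_s))

def mlp(x1):
    """Coupling function m(.): may be arbitrarily complex; a tiny MLP here."""
    return np.tanh(x1 @ W1 + b1) @ W2 + b2

def coupling_forward(x):
    """Additive coupling: y1 = x1, y2 = x2 + m(x1). Jacobian is triangular
    with unit diagonal, so its determinant is exactly 1."""
    x1, x2 = x[..., :d], x[..., d:]
    return np.concatenate([x1, x2 + mlp(x1)], axis=-1)

def coupling_inverse(y):
    """The inverse is equally cheap: x2 = y2 - m(y1)."""
    y1, y2 = y[..., :d], y[..., d:]
    return np.concatenate([y1, y2 - mlp(y1)], axis=-1)

def f(x):
    """Full map h = f(x): one coupling layer, then the diagonal scaling."""
    return coupling_forward(x) * np.exp(log_s)

def f_inverse(h):
    return coupling_inverse(h * np.exp(-log_s))

def log_likelihood(x):
    """Exact log p_X(x) = log p_H(f(x)) + log|det df/dx|. The additive
    coupling contributes 0; only the scaling layer adds sum(log_s)."""
    h = f(x)
    log_prior = -0.5 * np.sum(h**2 + np.log(2 * np.pi), axis=-1)
    return log_prior + np.sum(log_s)

x = rng.normal(size=(3, D))
assert np.allclose(f_inverse(f(x)), x)   # bijectivity check
print(log_likelihood(x))                 # exact, tractable log-likelihood

# Unbiased ancestral sampling: draw h from the factorized prior, invert f.
samples = f_inverse(rng.normal(size=(5, D)))
```

The key design choice shows up in `log_likelihood`: the additive couplings contribute nothing to the log-determinant, so the entire volume change is carried by the diagonal scaling layer, and the exact density of any point is obtained in a single forward pass.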