Self-Normalizing Neural Networks


2017 | Günter Klambauer, Thomas Unterthiner, Andreas Mayr, Sepp Hochreiter
The paper introduces Self-Normalizing Neural Networks (SNNs) to enable high-level abstract representations in deep learning. Unlike standard feed-forward neural networks (FNNs), which often must stay shallow to avoid vanishing and exploding gradients, SNNs automatically drive neuron activations toward zero mean and unit variance through their activation function, the "scaled exponential linear unit" (SELU). Using the Banach fixed-point theorem, the authors prove that activations close to zero mean and unit variance converge to these values, even in the presence of noise and perturbations. This property allows SNNs to train very deep networks, employ strong regularization schemes, and learn robustly. The paper compares SNNs to various FNN methods on 121 UCI datasets, drug discovery benchmarks, and astronomy tasks, demonstrating superior performance in most cases. The best-performing SNN architectures are typically very deep, in contrast to the other FNNs. The implementation of SNNs is available at: github.com/bioinf-jku/SNNs.
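To make the self-normalizing property concrete, here is a minimal NumPy sketch (not the authors' implementation, which lives at github.com/bioinf-jku/SNNs). The constants λ and α are the published values from the paper; the demo stacks SELU layers with LeCun-normal initialization (std = 1/sqrt(fan_in)), as the paper prescribes, and checks the activation statistics empirically.

```python
import numpy as np

# SELU constants from the paper, derived so that zero mean / unit variance
# is a stable fixed point of the layer-to-layer moment mapping.
ALPHA = 1.6732632423543772
LAMBDA = 1.0507009873554805

def selu(x):
    """Scaled exponential linear unit:
    lambda * x            for x > 0
    lambda * alpha * (exp(x) - 1)  for x <= 0
    """
    return LAMBDA * np.where(x > 0, x, ALPHA * np.expm1(x))

# Push random inputs through a deep stack of SELU layers with
# LeCun-normal weights and watch mean/variance stay near (0, 1).
rng = np.random.default_rng(0)
n_units, depth = 512, 32
x = rng.normal(0.0, 1.0, size=(1024, n_units))
for _ in range(depth):
    w = rng.normal(0.0, 1.0 / np.sqrt(n_units), size=(n_units, n_units))
    x = selu(x @ w)

print(f"after {depth} layers: mean={x.mean():+.3f}, var={x.var():.3f}")
# Typically prints values close to mean 0 and variance 1, with no
# batch normalization involved.
```

Running this sketch shows why very deep SNNs remain trainable: even after dozens of layers, the activation statistics do not drift, so gradients neither vanish nor explode.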