Implicit Neural Representations with Periodic Activation Functions


17 Jun 2020 | Vincent Sitzmann*, Julien N. P. Martel*, Alexander W. Bergman, David B. Lindell, Gordon Wetzstein
Implicitly defined, continuous, differentiable signal representations parameterized by neural networks have emerged as a powerful paradigm, offering many possible benefits over conventional representations. However, current network architectures for such implicit neural representations are incapable of modeling signals with fine detail and fail to represent a signal's spatial and temporal derivatives, despite the fact that these are essential to many physical signals defined implicitly as the solution to partial differential equations. We propose to leverage periodic activation functions for implicit neural representations and demonstrate that these networks, dubbed sinusoidal representation networks or SIRENs, are ideally suited for representing complex natural signals and their derivatives. We analyze SIREN activation statistics to propose a principled initialization scheme and demonstrate the representation of images, wavefields, video, sound, and their derivatives. Further, we show how SIRENs can be leveraged to solve challenging boundary value problems, such as particular Eikonal equations (yielding signed distance functions), the Poisson equation, and the Helmholtz and wave equations. Lastly, we combine SIRENs with hypernetworks to learn priors over the space of SIREN functions.

We propose SIREN, a simple neural network architecture for implicit neural representations that uses the sine as a periodic activation function. We demonstrate that this approach not only represents detail in the signals better than ReLU MLPs or the positional encoding strategies proposed in concurrent work, but that these properties also uniquely apply to the derivatives, which is critical for many of the applications we explore in this paper.

Our contributions include: a continuous implicit neural representation using periodic activation functions that robustly fits complicated signals, such as natural images and 3D shapes, and their derivatives; an initialization scheme for training these representations, together with validation that distributions of these representations can be learned using hypernetworks; and demonstrations of applications in image, video, and audio representation; 3D shape reconstruction; solving first-order differential equations that aim to recover a signal supervised only by its gradients; and solving second-order differential equations.

We show that SIRENs can be initialized with control over the distribution of activations, allowing us to build deep architectures. Furthermore, SIRENs converge significantly faster than baseline architectures, fitting, for instance, a single image in a few hundred iterations (a few seconds on a modern GPU) while achieving higher image fidelity. We demonstrate that SIRENs can be used to solve challenging boundary value problems, such as the Poisson, Eikonal, Helmholtz, and wave equations, and we show how SIRENs can be combined with hypernetworks to learn priors over the space of SIREN functions. All code and data will be made publicly available.
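Concretely, a SIREN is an MLP whose layers apply a sine to a scaled affine transform of their input, sin(omega_0 * (W x + b)). The sketch below is a minimal PyTorch-style illustration of such a layer and of a uniform weight initialization in the spirit of the principled scheme mentioned above; the omega_0 = 30 frequency factor, the exact bounds, and the class names are assumptions made for this example, not the authors' reference implementation.

# Minimal sketch of a sine-activation layer and a small SIREN MLP.
# Assumptions: omega_0 = 30 and the uniform initialization bounds below.
import numpy as np
import torch
from torch import nn

class SineLayer(nn.Module):
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            if is_first:
                # First layer: spread weights over [-1/n, 1/n] so the sine
                # covers the input domain (coordinates normalized to [-1, 1]).
                bound = 1.0 / in_features
            else:
                # Hidden layers: keep pre-activations roughly unit-variance
                # so activations stay well distributed at depth.
                bound = np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))

class Siren(nn.Module):
    # MLP with sine activations mapping coordinates to signal values.
    def __init__(self, in_features=2, hidden_features=256, hidden_layers=3, out_features=1):
        super().__init__()
        layers = [SineLayer(in_features, hidden_features, is_first=True)]
        for _ in range(hidden_layers):
            layers.append(SineLayer(hidden_features, hidden_features))
        layers.append(nn.Linear(hidden_features, out_features))  # linear output head
        self.net = nn.Sequential(*layers)

    def forward(self, coords):
        return self.net(coords)

Fitting a signal then amounts to regressing model(coords) against the observed samples, e.g. pixel colors at normalized pixel coordinates.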
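Because a SIREN is differentiable with respect to its input coordinates, losses can also be placed directly on its derivatives, which is what makes the boundary value problems above tractable. As one illustration of supervising through derivatives, the sketch below (assuming the Siren class from the previous sketch) uses autograd to compute the spatial gradient of the network and penalizes the Eikonal constraint |grad f(x)| = 1 used when fitting signed distance functions; the function name and the sampling of points are hypothetical.

# Illustrative Eikonal loss on the input gradient of a SIREN.
import torch

def eikonal_loss(model, coords):
    coords = coords.clone().requires_grad_(True)    # track gradients w.r.t. inputs
    f = model(coords)                               # predicted signed distance values
    grad = torch.autograd.grad(
        f, coords,
        grad_outputs=torch.ones_like(f),
        create_graph=True,                          # keep the graph so the loss is trainable
    )[0]
    # Penalize deviation of the gradient norm from 1 (the Eikonal constraint).
    return ((grad.norm(dim=-1) - 1.0) ** 2).mean()

# Hypothetical usage: points sampled uniformly in the unit cube [-1, 1]^3.
model = Siren(in_features=3, out_features=1)
coords = torch.rand(1024, 3) * 2.0 - 1.0
loss = eikonal_loss(model, coords)
loss.backward()

The same mechanism extends to supervising higher-order derivatives, for example the Laplacian in the Poisson equation or second-order terms in the Helmholtz and wave equations.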