Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains

18 Jun 2020 | Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng
This paper introduces Fourier features as a method to enable multilayer perceptrons (MLPs) to learn high-frequency functions in low-dimensional domains. The authors show that standard MLPs suffer from spectral bias, making it difficult to learn high-frequency content. By applying a Fourier feature mapping, they transform the neural tangent kernel (NTK) into a stationary kernel with tunable bandwidth, allowing MLPs to learn higher frequencies. The Fourier feature mapping transforms input coordinates using sinusoidal functions, which can be adjusted to control the range of frequencies the MLP can learn.
The authors demonstrate that this approach significantly improves performance on various tasks, including image regression, 3D shape regression, and MRI reconstruction. They also show that the scale of the Fourier feature distribution matters more than its specific shape. The paper provides theoretical analysis and experimental results showing that Fourier features improve the performance of coordinate-based MLPs in low-dimensional regression tasks. The authors conclude that Fourier features offer a simple and effective strategy for improving the performance of MLPs in computer vision and graphics applications.
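The mapping described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' reference implementation: it assumes a Gaussian random feature matrix B with entries drawn from N(0, sigma^2), 2-D input coordinates, and 256 random frequencies, where the standard deviation `scale` is the tunable bandwidth parameter the paper discusses.

```python
import numpy as np

def fourier_feature_mapping(v, B):
    """Map coordinates v to [cos(2*pi*B v), sin(2*pi*B v)].

    v : (..., d) array of input coordinates
    B : (m, d) random frequency matrix
    returns : (..., 2*m) array of Fourier features
    """
    proj = 2.0 * np.pi * v @ B.T          # (..., m) projected coordinates
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
scale = 10.0                               # bandwidth: larger -> higher frequencies learnable
B = rng.normal(0.0, scale, size=(256, 2))  # 256 random frequencies for 2-D inputs (illustrative sizes)
coords = rng.uniform(0.0, 1.0, size=(4, 2))  # e.g. normalized pixel coordinates

features = fourier_feature_mapping(coords, B)
print(features.shape)  # (4, 512)
```

The resulting 512-dimensional features would replace the raw (x, y) coordinates as the MLP's input; sweeping `scale` trades off underfitting (too small, kernel too smooth) against noisy, aliased reconstructions (too large).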