18 Jun 2020 | Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, Ren Ng
The paper "Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains" by Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng explores how Fourier feature mapping can enhance the performance of multilayer perceptrons (MLPs) in low-dimensional domains. The authors leverage neural tangent kernel (NTK) theory to show that standard MLPs struggle to learn high-frequency functions due to their spectral bias, which results in a rapid frequency falloff in the NTK. To address this, they introduce Fourier feature mapping, which transforms the NTK into a stationary kernel with tunable bandwidth. This mapping allows the MLP to learn higher frequencies more effectively. The paper demonstrates that a random Fourier feature mapping with appropriately chosen scale significantly improves the performance of MLPs on various low-dimensional regression tasks in computer vision and graphics, such as image regression, 3D shape regression, and computed tomography (CT). The authors also provide theoretical and experimental evidence that the scale of the Fourier feature distribution matters more than its specific shape, and they propose a simple strategy for selecting problem-specific Fourier features. Overall, the paper contributes to the understanding of spectral bias in deep networks and offers a practical solution to improve their performance in low-dimensional tasks.The paper "Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains" by Matthew Tancik, Pratul P. Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, Nithin Raghavan, Utkarsh Singhal, Ravi Ramamoorthi, Jonathan T. Barron, and Ren Ng explores how Fourier feature mapping can enhance the performance of multilayer perceptrons (MLPs) in low-dimensional domains. The authors leverage neural tangent kernel (NTK) theory to show that standard MLPs struggle to learn high-frequency functions due to their spectral bias, which results in a rapid frequency falloff in the NTK. To address this, they introduce Fourier feature mapping, which transforms the NTK into a stationary kernel with tunable bandwidth. This mapping allows the MLP to learn higher frequencies more effectively. The paper demonstrates that a random Fourier feature mapping with appropriately chosen scale significantly improves the performance of MLPs on various low-dimensional regression tasks in computer vision and graphics, such as image regression, 3D shape regression, and computed tomography (CT). The authors also provide theoretical and experimental evidence that the scale of the Fourier feature distribution matters more than its specific shape, and they propose a simple strategy for selecting problem-specific Fourier features. Overall, the paper contributes to the understanding of spectral bias in deep networks and offers a practical solution to improve their performance in low-dimensional tasks.