This paper proposes a Fourier reparameterization method to improve implicit neural representation (INR). INR uses multi-layer perceptrons (MLPs) to parameterize continuous, differentiable functions. However, MLPs suffer from a low-frequency bias: they learn low-frequency components more readily than high-frequency ones, which degrades performance in tasks such as signal representation, 3D shape reconstruction, and novel view synthesis. To address this, the authors connect network training bias with reparameterization techniques and prove theoretically that a suitable reparameterization can alleviate the low-frequency bias by altering the magnitude of the gradients contributed by different frequencies.
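A minimal sketch of that gradient argument, in assumed notation (the symbols W, Λ, B and the update below are illustrative, not copied from the paper):

```latex
% Sketch of why reparameterization reshapes the training gradient.
% W: a layer's weight matrix; B: a FIXED basis matrix whose rows are sampled
% Fourier atoms; \Lambda: the learnable coefficient matrix. Notation assumed.
W = \Lambda B,
\qquad
\frac{\partial \mathcal{L}}{\partial \Lambda}
   = \frac{\partial \mathcal{L}}{\partial W}\, B^{\top}.
% One gradient-descent step on \Lambda with learning rate \eta therefore
% moves the composed weights by
% \Delta W = \Delta\Lambda\, B
%          = -\eta\, \frac{\partial \mathcal{L}}{\partial W}\, B^{\top} B,
% i.e. the fixed Gram matrix B^{\top}B rescales the plain-MLP gradient,
% which is the lever used to rebalance low- vs. high-frequency learning.
```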
The proposed method reparameterizes the weights of MLPs as products of fixed Fourier bases and learnable coefficient matrices, so the network learns the coefficients that compose each weight matrix rather than the weights directly. This lets the network capture high-frequency details more faithfully and reduces artifacts across a range of INR tasks. The method is evaluated on several MLP architectures, including vanilla MLPs, MLPs with positional encoding, and MLPs with advanced activation functions; in each case, Fourier reparameterization markedly improves approximation accuracy and reduces the low-frequency bias.
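The core layer is straightforward to sketch. Below is a minimal PyTorch illustration assuming a simple cosine basis with uniformly spaced frequencies and phases sampled on [0, 1); the class name, basis construction, and hyperparameters are assumptions for illustration, not the authors' exact implementation:

```python
import math
import torch
import torch.nn as nn

class FourierReparamLinear(nn.Module):
    """Linear layer whose weight is composed as W = Lambda @ B, with B a fixed
    bank of sampled cosine atoms and Lambda the learnable coefficient matrix.
    A sketch of the idea only; the basis details are illustrative assumptions."""

    def __init__(self, in_features, out_features, n_frequencies=16, n_phases=8):
        super().__init__()
        m = n_frequencies * n_phases  # number of basis atoms (rows of B)
        # Sample each cosine atom at `in_features` points in [0, 1).
        t = torch.arange(in_features, dtype=torch.float32) / in_features
        freqs = torch.arange(1, n_frequencies + 1, dtype=torch.float32)
        phases = torch.arange(n_phases, dtype=torch.float32) * (math.pi / n_phases)
        # atoms[f, p, n] = cos(2*pi*freqs[f]*t[n] + phases[p]); one row per (f, p).
        atoms = torch.cos(2 * math.pi * freqs[:, None, None] * t[None, None, :]
                          + phases[None, :, None])
        self.register_buffer("B", atoms.reshape(m, in_features))  # fixed, no grad
        self.coeff = nn.Parameter(torch.empty(out_features, m))   # learnable Lambda
        nn.init.kaiming_uniform_(self.coeff, a=math.sqrt(5))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        weight = self.coeff @ self.B  # compose W = Lambda B on the fly
        return nn.functional.linear(x, weight, self.bias)
```

Because only the weight composition changes, such a layer can in principle replace nn.Linear in an existing MLP without touching its inputs or activations.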
The method is also tested on real-world vision applications: 2D color image approximation, shape representation via signed distance functions, and learning neural radiance fields for novel view synthesis. In all three, Fourier reparameterization achieves higher accuracy than standard MLP training. Ablation studies further show that the choice of Fourier bases and their sampling intervals significantly affects performance. Because the method changes only how the weights are composed, it alters neither the input feature space nor the nonlinear activation functions, and is therefore compatible with existing techniques such as positional encoding and periodic activation functions, as the toy example below illustrates. The code for the proposed method is available at https://github.com/LabShuHangGU/FR-INR.
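To make the compatibility claim concrete, the following toy snippet (reusing the FourierReparamLinear sketch above; the architecture and hyperparameters are illustrative assumptions) drops the layer into an ordinary coordinate MLP:

```python
# Toy illustration of the compatibility claim: the reparameterized layer slots
# into a standard coordinate MLP without changing inputs or activations.
# (Width, depth, and the ReLU activation are illustrative assumptions.)
import torch
import torch.nn as nn

mlp = nn.Sequential(
    FourierReparamLinear(2, 256),   # 2D coordinates in, e.g. for image fitting
    nn.ReLU(),
    FourierReparamLinear(256, 256),
    nn.ReLU(),
    FourierReparamLinear(256, 3),   # RGB out
)
coords = torch.rand(1024, 2)        # random query coordinates in [0, 1)^2
rgb = mlp(coords)                   # shape (1024, 3)
```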