JANUARY 2022 | VOL. 65 | NO. 1 | Ben Mildenhall, Pratul P. Srinivasan, Matthew Tancik, Jonathan T. Barron, Ravi Ramamoorthi, and Ren Ng
The paper "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" introduces a novel method for synthesizing novel views of complex scenes using a continuous 5D neural radiance field (NeRF). The method represents a scene as a deep neural network that outputs the volume density and view-dependent emitted radiance at any spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$. The network is optimized using a sparse set of input views, with the goal of reproducing high-resolution input views while maintaining memory efficiency. The authors address the challenge of representing complex geometry and appearance by using positional encoding to enable the network to handle higher-frequency functions. The method is evaluated on synthetic and real-world datasets, demonstrating superior performance in terms of photorealism and detail preservation compared to previous methods. The paper also discusses the practical trade-offs between time and space efficiency, highlighting the advantages of the proposed method in terms of storage requirements.The paper "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis" introduces a novel method for synthesizing novel views of complex scenes using a continuous 5D neural radiance field (NeRF). The method represents a scene as a deep neural network that outputs the volume density and view-dependent emitted radiance at any spatial location $(x, y, z)$ and viewing direction $(\theta, \phi)$. The network is optimized using a sparse set of input views, with the goal of reproducing high-resolution input views while maintaining memory efficiency. The authors address the challenge of representing complex geometry and appearance by using positional encoding to enable the network to handle higher-frequency functions. The method is evaluated on synthetic and real-world datasets, demonstrating superior performance in terms of photorealism and detail preservation compared to previous methods. The paper also discusses the practical trade-offs between time and space efficiency, highlighting the advantages of the proposed method in terms of storage requirements.