NERF++: ANALYZING AND IMPROVING NEURAL RADIANCE FIELDS

21 Oct 2020 | Kai Zhang, Gernot Riegler, Noah Snavely, Vladlen Koltun
NeRF achieves impressive view synthesis results for various capture settings, including 360° capture of bounded scenes and forward-facing capture of bounded and unbounded scenes. NeRF fits multilayer perceptrons (MLPs), representing view-invariant opacity and view-dependent color volumes, to a set of training images, and renders novel views using volume rendering techniques. This technical report analyzes a potential failure mode of NeRF and explains why NeRF avoids it in practice. It also presents a novel spatial parameterization scheme, inverted sphere parameterization, that extends NeRF to a new class of captures of unbounded scenes.

NeRF's success in avoiding the shape-radiance ambiguity is attributed to its specific MLP structure, which implicitly encodes a smooth BRDF prior on surface reflectance: opacity depends on position alone, while color depends on viewing direction only through a shallow branch of the network. This structure steers optimization away from degenerate solutions that fit the training images yet fail to generalize to novel test views.

NeRF++ additionally addresses a spatial parameterization problem that arises in 360° captures of objects within large-scale, unbounded environments. It models foreground and background separately, partitioning the scene into two volumes: an inner unit sphere containing the foreground, and an outer volume covering the background, reparameterized by sphere inversion so that a point at radius r > 1 is represented by the bounded 4-tuple (x/r, y/r, z/r, 1/r). This parameterization improves numerical stability and reflects the fact that more distant content should receive less resolution.

In experiments on real-world datasets such as Tanks and Temples and the Light Field dataset, NeRF++ significantly outperforms NeRF on 360° captures of objects within large-scale unbounded scenes.
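The asymmetric MLP structure credited with avoiding the shape-radiance ambiguity can be sketched as follows. This is a hypothetical minimal illustration, not the authors' code: all layer widths, weight initializations, and names are our own (the real model uses a deeper 256-wide trunk). The key point it demonstrates is that density depends on position alone, while viewing direction enters only through one shallow layer before the color head.

```python
import numpy as np

def positional_encoding(p, num_freqs):
    """Map each coordinate to [sin(2^k * p), cos(2^k * p)] features."""
    freqs = 2.0 ** np.arange(num_freqs)           # (num_freqs,)
    angles = p[..., None] * freqs                 # (..., dim, num_freqs)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*p.shape[:-1], -1)

rng = np.random.default_rng(0)

def dense(in_dim, out_dim):
    """Random weight matrix standing in for a learned layer."""
    return rng.standard_normal((in_dim, out_dim)) * 0.1

relu = lambda a: np.maximum(a, 0.0)

# Toy sizes; L_X = 10 and L_D = 4 match the encoding frequencies NeRF uses
# for positions and directions, but W = 32 is far smaller than the real net.
L_X, L_D, W = 10, 4, 32
x_dim, d_dim = 3 * 2 * L_X, 3 * 2 * L_D

W1 = dense(x_dim, W)        # position-only trunk (deep in the real model)
W_sigma = dense(W, 1)       # density head: no view dependence at all
W_feat = dense(W, W)
W2 = dense(W + d_dim, W)    # the single view-dependent layer for color
W_rgb = dense(W, 3)

def nerf_mlp(x, d):
    """Return (density, color) for positions x and view directions d."""
    h = relu(positional_encoding(x, L_X) @ W1)
    sigma = relu(h @ W_sigma)                       # depends on x only
    feat = h @ W_feat
    h2 = relu(np.concatenate([feat, positional_encoding(d, L_D)], -1) @ W2)
    rgb = 1.0 / (1.0 + np.exp(-(h2 @ W_rgb)))       # color in [0, 1]
    return sigma, rgb

sigma, rgb = nerf_mlp(np.zeros((4, 3)), np.ones((4, 3)))
print(sigma.shape, rgb.shape)  # (4, 1) (4, 3)
```

Because the viewing direction passes through only this shallow color branch, the network can express view-dependent radiance only as a smooth function of direction, which is the implicit regularizer the report identifies.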
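The inverted sphere parameterization of the background volume can be made concrete with a short sketch. This is our own minimal illustration, not the authors' implementation: a background point p outside the unit sphere, at radius r = ||p|| > 1, is re-expressed as the 4-tuple (x/r, y/r, z/r, 1/r), all of whose components lie in [-1, 1].

```python
import numpy as np

def invert_point(p):
    """Map a point outside the unit sphere (r > 1) to bounded 4D coords
    (x/r, y/r, z/r, 1/r); the unbounded background fits in a box."""
    r = np.linalg.norm(p, axis=-1, keepdims=True)
    return np.concatenate([p / r, 1.0 / r], axis=-1)

near = invert_point(np.array([2.0, 0.0, 0.0]))
far = invert_point(np.array([2000.0, 0.0, 0.0]))
print(near.tolist())  # [1.0, 0.0, 0.0, 0.5]
print(far.tolist())   # [1.0, 0.0, 0.0, 0.0005]
```

Note how two points along the same direction but at very different depths share the same first three coordinates and differ only in 1/r, which shrinks toward zero with distance; this is what allocates representation resolution inversely with depth and keeps the coordinates numerically stable.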
However, open challenges remain, including the time and memory cost of training and testing NeRF and NeRF++, and the sensitivity of photorealistic synthesis to small camera calibration errors and photometric effects.