19 Feb 2024 | CHRISTIAN REISER, STEPHAN GARBIN, PRATUL P. SRINIVASAN, DOR VERBIN, RICHARD SZELISKI, BEN MILDENHALL, JONATHAN T. BARRON, PETER HEDMAN, ANDREAS GEIGER
This paper introduces a novel method for mesh-based view synthesis that captures fine geometric detail such as leaves, branches, and grass. In place of a continuous density field, it uses a binary opacity grid, which permits sharp opacity transitions at surfaces. Anti-aliasing is handled by casting multiple rays per pixel, and the binary entropy of the opacity values is minimized to encourage binarization. A fusion-based meshing strategy then converts the binary opacity grid into a triangle mesh, which is simplified and fitted with a lightweight view-dependent appearance model.

The resulting compact meshes render in real time on mobile devices, including affordable smartphones, and achieve higher view synthesis quality than existing mesh-based approaches; the method also outperforms volume-based methods at reconstructing thin structures. Evaluated on the Mip-NeRF 360 dataset, it compares favorably with other baselines in quality, rendering speed, memory consumption, and storage, and it shows significant improvements in thin-structure reconstruction over BakedSDF, a state-of-the-art method for real-time mesh-based view synthesis. An ablation study demonstrates that each component contributes to accurately reconstructing thin structures.
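The binarization objective is simple to state concretely. Below is a minimal sketch (not the authors' code), assuming per-voxel opacities are stored in a NumPy array: the regularizer is the binary entropy of each opacity, and minimizing it pushes values toward 0 or 1. The loss weight is a hypothetical hyperparameter, not a value from the paper.

```python
import numpy as np

def binary_entropy(alpha, eps=1e-6):
    """Binary entropy of per-voxel opacities alpha in [0, 1].
    Minimizing it drives alpha toward 0 or 1, yielding hard
    surfaces instead of semi-transparent 'fog'."""
    a = np.clip(alpha, eps, 1.0 - eps)
    return -(a * np.log(a) + (1.0 - a) * np.log(1.0 - a))

def entropy_loss(alpha, weight=0.01):
    """Regularization term added to the reconstruction loss.
    `weight` is a hypothetical hyperparameter for illustration."""
    return weight * binary_entropy(alpha).mean()
```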
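The anti-aliasing idea can be sketched in the same spirit: average several jittered rays per pixel so that sub-pixel structures like grass blades contribute fractional coverage even though each surface is fully opaque. The `render_ray` callback, jitter scale, and sample count here are illustrative assumptions rather than the paper's settings.

```python
import numpy as np

def render_pixel_supersampled(render_ray, pixel_center,
                              jitter_scale=1.0, n_rays=16, rng=None):
    """Anti-aliasing sketch: cast several jittered rays through one
    pixel and average their colors. `render_ray` is a hypothetical
    callback mapping a 2D sample position to an RGB color."""
    rng = rng or np.random.default_rng(0)
    # Uniform jitter within the pixel footprint.
    offsets = (rng.random((n_rays, 2)) - 0.5) * jitter_scale
    colors = np.stack([render_ray(pixel_center + o) for o in offsets])
    return colors.mean(axis=0)  # box-filtered pixel color
```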
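The paper's meshing step fuses rendered depth maps; as a simplified stand-in that illustrates only the grid-to-mesh conversion, one can binarize the opacity grid and extract an isosurface with marching cubes, e.g. via scikit-image. This is explicitly not the paper's fusion strategy, which among other things avoids meshing unobserved interior voxels.

```python
import numpy as np
from skimage import measure

def grid_to_mesh(opacity, threshold=0.5, voxel_size=1.0):
    """Simplified stand-in for fusion-based meshing: threshold the
    opacity grid into a binary occupancy volume, then run marching
    cubes to obtain a triangle mesh."""
    occ = (opacity > threshold).astype(np.float32)
    verts, faces, normals, _ = measure.marching_cubes(
        occ, level=0.5, spacing=(voxel_size,) * 3)
    return verts, faces, normals
```

In the full pipeline this mesh would then be simplified and fitted with the lightweight view-dependent appearance model described above.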
The paper also discusses limitations and future work, including possible further gains from UV mapping and smoothness regularization.