GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views

17 Jul 2024 | Yaniv Wolf*, Amit Bracha*, and Ron Kimmel
**Authors:** Yaniv Wolf, Amit Bracha, Ron Kimmel
**Institution:** Technion - Israel Institute of Technology, Haifa, Israel
**GitHub:** https://gs2mesh.github.io

**Abstract:**
This paper addresses the challenge of extracting smooth and accurate 3D surfaces from noisy Gaussian splatting (3DGS) representations. Methods that mesh the Gaussians directly often produce noisy, unrealistic surfaces, because the 3DGS optimization is driven by photometric loss rather than geometry. The proposed method instead renders stereo-aligned image pairs from the trained 3DGS model and applies a pre-trained stereo-matching model to extract depth profiles from them. These depth profiles are then fused with the Truncated Signed Distance Function (TSDF) algorithm into a smooth, geometrically consistent mesh. The method significantly reduces reconstruction time compared to neural surface methods and achieves state-of-the-art results on benchmarks such as Tanks and Temples and DTU, as well as on in-the-wild scenes captured with smartphones.

**Key Contributions:**
1. **Novel Pipeline:** Introduces a pipeline that uses a pre-trained stereo matching model to extract depth from 3DGS renderings, improving surface reconstruction accuracy.
2. **Efficiency:** Significantly reduces reconstruction time compared to neural methods.
3. **State-of-the-Art Results:** Achieves superior results on standard benchmarks and in-the-wild scenes.

**Methods:**
1. **Scene Capture and Pose Estimation:** Uses COLMAP to estimate camera poses from the captured images.
2. **3DGS and Stereo-Aligned Novel View Rendering:** Trains a 3DGS model and renders stereo-aligned image pairs from it.
3. **Stereo Depth Estimation:** Applies a pre-trained stereo matching model to extract depth profiles from the rendered pairs.
4. **Depth Fusion into Triangulated Surface:** Aggregates the depth profiles via TSDF fusion, followed by Marching-Cubes meshing.

**Experiments and Results:**
- **DTU Dataset:** Achieves the best Chamfer distance among splatting-based methods.
- **Tanks and Temples Benchmark:** Outperforms SuGaR in F1 score and precision.
- **Mip-NeRF360 Dataset:** Shows visual quality comparable to neural reconstruction methods.
- **In-the-Wild Scenes:** Demonstrates superior geometric consistency and smoothness on smartphone captures.

**Ablation Study:**
- Comparison against deep MVS models shows that the proposed stereo-based depth extraction performs better.

**Limitations:**
- 3DGS can produce noisy Gaussians in under-covered areas.
- Stereo matching struggles with transparent surfaces.
- TSDF fusion is less effective for large scenes.

**Conclusion:**
The proposed method effectively bridges the gap between noisy 3DGS representations and smooth 3D meshes, offering improved accuracy and efficiency.
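The stereo depth estimation step relies on the standard rectified-stereo relation depth = f · b / d, where f is the focal length in pixels, b is the baseline between the two rendered views, and d is the matched disparity. A minimal sketch of this conversion (the focal length, baseline, and disparity values below are hypothetical toy numbers, not the paper's settings):

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline):
    """Convert a disparity map from a rectified stereo pair to metric depth.

    For horizontally aligned cameras, depth = focal_px * baseline / disparity.
    Non-positive disparities (unmatched pixels) are mapped to 0 (invalid).
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = focal_px * baseline / disparity[valid]
    return depth

# Hypothetical example: 4x4 disparity map, 500 px focal length, 0.5 m baseline.
disp = np.full((4, 4), 25.0)
disp[0, 0] = 0.0  # an unmatched pixel
depth = disparity_to_depth(disp, focal_px=500.0, baseline=0.5)
print(depth[1, 1])  # 500 * 0.5 / 25 = 10.0 meters
print(depth[0, 0])  # 0.0 (invalid)
```

Note the inverse relationship: a larger baseline or focal length gives finer depth resolution for the same disparity, which is why the choice of stereo baseline when rendering the novel views matters.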
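The fusion step integrates each view's depth map into a voxel grid holding a truncated signed distance function, after which Marching Cubes extracts the mesh. Below is a minimal, illustrative NumPy sketch of projective TSDF integration for a single depth map from a camera at the world origin looking down +z; the grid size, intrinsics, and truncation distance are made-up toy values, and a real pipeline would use a library implementation (e.g., Open3D) over many views and poses.

```python
import numpy as np

def integrate_tsdf(tsdf, weights, origin, voxel_size, depth, K, trunc):
    """Integrate one depth map into a TSDF voxel grid (in place).

    Assumes the camera sits at the world origin looking down +z, so voxel
    centers are already in camera coordinates. tsdf/weights are (X, Y, Z)
    arrays; origin is the world position of voxel (0, 0, 0); K is a 3x3
    pinhole intrinsics matrix; trunc is the truncation distance.
    """
    nx, ny, nz = tsdf.shape
    ii, jj, kk = np.meshgrid(np.arange(nx), np.arange(ny), np.arange(nz),
                             indexing="ij")
    pts = origin + voxel_size * np.stack([ii, jj, kk], axis=-1)
    z = pts[..., 2]
    zsafe = np.where(z > 0, z, np.inf)  # avoid dividing by z <= 0
    u = np.round(K[0, 0] * pts[..., 0] / zsafe + K[0, 2]).astype(int)
    v = np.round(K[1, 1] * pts[..., 1] / zsafe + K[1, 2]).astype(int)
    h, w = depth.shape
    ok = (z > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    d = np.zeros_like(z)
    d[ok] = depth[v[ok], u[ok]]                 # observed depth per voxel
    sdf = d - z                                 # signed distance along the ray
    ok &= (d > 0) & (sdf > -trunc)              # skip voxels far behind surface
    sdf = np.clip(sdf / trunc, -1.0, 1.0)       # normalized, truncated SDF
    w_new = weights + ok                        # per-voxel observation count
    tsdf[ok] = (tsdf[ok] * weights[ok] + sdf[ok]) / w_new[ok]  # running mean
    weights[:] = w_new

# Hypothetical toy scene: a flat wall 1 m in front of the camera.
K = np.array([[10.0, 0.0, 16.0],
              [0.0, 10.0, 16.0],
              [0.0, 0.0, 1.0]])
depth_map = np.ones((32, 32))     # constant depth of 1 m
tsdf = np.ones((8, 8, 8))         # initialize to "free space"
weights = np.zeros((8, 8, 8))
integrate_tsdf(tsdf, weights, np.array([-1.0, -1.0, 0.25]),
               0.25, depth_map, K, trunc=0.3)
print(tsdf[4, 4, 3])  # voxel on the wall (z = 1 m): ~0.0, a surface crossing
```

Each additional view would be transformed into the grid's frame and integrated the same way; the weighted running average is what suppresses per-view depth noise. Marching Cubes on the final grid then yields the triangle mesh at the TSDF zero crossing.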