GS2Mesh: Surface Reconstruction from Gaussian Splatting via Novel Stereo Views

17 Jul 2024 | Yaniv Wolf*, Amit Bracha*, and Ron Kimmel
GS2Mesh is a novel method for surface reconstruction from 3D Gaussian Splatting (3DGS) that leverages novel stereo views. It addresses the challenge of extracting accurate geometry from 3DGS, which is optimized for photometric loss rather than geometric consistency. Instead of extracting geometry directly from the Gaussian properties, GS2Mesh renders stereo-aligned image pairs from the trained splatting model and feeds them to a pre-trained stereo matching model to obtain depth profiles. These profiles are then fused into a single mesh using a Truncated Signed Distance Function (TSDF), yielding a smooth, geometrically consistent surface. Using a pre-trained stereo model as a geometric prior allows depth to be extracted without relying on noisy Gaussian locations, leading to more accurate and realistic reconstructions, and the pipeline adds only a small overhead to the 3DGS optimization, making it significantly faster than neural surface reconstruction methods.

The method involves three main steps: scene capture and pose estimation, stereo-aligned novel-view rendering, and stereo depth estimation. In the depth estimation step, a pre-trained stereo matching model produces per-view depth profiles, which are fused into a mesh via TSDF integration. An occlusion mask improves the reliability of the stereo model's output in regions visible to only one of the two cameras.

GS2Mesh was evaluated on several datasets, including the Tanks and Temples (TnT) and DTU benchmarks, achieving state-of-the-art results. It also demonstrated superior performance on in-the-wild scenes captured with smartphones, showing better geometric consistency and smoothness than existing methods such as SuGaR, while requiring significantly less computation time.
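The stereo-aligned novel-view rendering step can be sketched as follows: for each source camera, a virtual right camera is placed at a fixed horizontal baseline, both views are rendered from the trained 3DGS model, and the stereo model's disparity is converted to depth through the standard rectified-stereo relation depth = f·B/disparity. The baseline value and function names below are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def right_camera_pose(c2w, baseline=0.07):
    """Construct the pose of a virtual right camera by shifting the
    rendered (left) camera along its local x-axis by `baseline` meters.
    `c2w` is a 4x4 camera-to-world matrix; the baseline value here is
    an illustrative assumption."""
    right = c2w.copy()
    right[:3, 3] += baseline * c2w[:3, 0]  # local x-axis expressed in world coords
    return right

def disparity_to_depth(disparity, focal_px, baseline=0.07):
    """Standard rectified-stereo relation: depth = f * B / disparity."""
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)  # zero disparity -> infinite depth
    valid = disparity > 0
    depth[valid] = focal_px * baseline / disparity[valid]
    return depth
```

Because both views are rendered from the same 3DGS model, the pair is perfectly rectified by construction, which is what lets an off-the-shelf stereo network be applied directly.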
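The occlusion mask mentioned above discards stereo output where the scene is visible in only one of the two cameras. A common way to build such a mask, shown here as a hedged sketch rather than the paper's exact procedure, is a left-right consistency check between the two disparity maps:

```python
import numpy as np

def occlusion_mask(disp_left, disp_right, threshold=1.0):
    """Left-right consistency check: a pixel in the left disparity map is
    kept only if warping it into the right view and reading the right
    disparity lands back near the same value. `threshold` (in pixels)
    is an illustrative choice. Returns True where the pixel is reliable."""
    h, w = disp_left.shape
    xs = np.arange(w)[None, :].repeat(h, axis=0)
    ys = np.arange(h)[:, None].repeat(w, axis=1)
    # where each left pixel falls in the right image (clamped to bounds)
    x_right = np.clip(np.round(xs - disp_left).astype(int), 0, w - 1)
    disp_reproj = disp_right[ys, x_right]
    return np.abs(disp_left - disp_reproj) <= threshold
```

Pixels that fail the check are simply excluded from the depth profile before fusion, so occluded regions contribute nothing rather than contributing wrong geometry.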
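TSDF fusion merges the per-view depth maps into one implicit surface by averaging truncated signed distances in a voxel grid, from which a mesh is then extracted (e.g. via marching cubes). Production pipelines typically use a library implementation such as Open3D's scalable TSDF volume; the toy NumPy version below, with assumed parameter values, only illustrates the core integration step:

```python
import numpy as np

def fuse_tsdf(depth_maps, K, w2c_poses, grid_res=32, bound=1.0, trunc=0.1):
    """Toy TSDF fusion: average truncated signed distances over all views.
    depth_maps: list of (H, W) depth arrays; K: 3x3 intrinsics;
    w2c_poses: list of 4x4 world-to-camera matrices. The voxel grid spans
    [-bound, bound]^3 and `trunc` is the truncation distance (assumed values)."""
    lin = np.linspace(-bound, bound, grid_res)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([X, Y, Z, np.ones_like(X)], axis=-1).reshape(-1, 4)
    tsdf = np.zeros(len(pts))
    weight = np.zeros(len(pts))
    for depth, w2c in zip(depth_maps, w2c_poses):
        h, w = depth.shape
        cam = (w2c @ pts.T).T[:, :3]           # voxel centers in camera frame
        z = cam[:, 2]
        front = z > 1e-6
        u = np.zeros(len(pts), dtype=int)
        v = np.zeros(len(pts), dtype=int)
        u[front] = np.round(K[0, 0] * cam[front, 0] / z[front] + K[0, 2]).astype(int)
        v[front] = np.round(K[1, 1] * cam[front, 1] / z[front] + K[1, 2]).astype(int)
        ok = front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        sdf = np.zeros(len(pts))
        sdf[ok] = depth[v[ok], u[ok]] - z[ok]  # signed distance along the ray
        hit = ok & (sdf > -trunc)              # skip voxels far behind the surface
        sdf = np.clip(sdf, -trunc, trunc) / trunc
        # running weighted average across views
        tsdf[hit] = (tsdf[hit] * weight[hit] + sdf[hit]) / (weight[hit] + 1)
        weight[hit] += 1
    shape = (grid_res,) * 3
    return tsdf.reshape(shape), weight.reshape(shape)
```

Averaging over many views is what cancels per-view stereo noise: each depth map only votes within the truncation band, and the zero crossing of the fused field gives the final surface.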
The method is particularly effective for scenes with complex geometries and textures, and it can handle large scenes with a high degree of detail. However, the method has limitations, such as struggles with transparent surfaces and scalability issues for very large scenes. Despite these limitations, GS2Mesh provides a robust and efficient approach to surface reconstruction from Gaussian Splatting, offering improved accuracy and performance compared to existing methods.