This paper introduces a novel approach to volume rendering with neural implicit surfaces, aiming to improve geometry representation and reconstruction. The key contribution is modeling the volume density as a function of the geometry, specifically as a Laplace cumulative distribution function (CDF) applied to a signed distance function (SDF); a code sketch of this parameterization follows the list below. This approach provides several benefits:
1. **Inductive Bias**: It provides a useful inductive bias for disentangling density and radiance fields, leading to more accurate geometry approximation.
2. **Opacity Approximation Error Bound**: It facilitates a bound on the opacity approximation error, enabling more precise sampling of viewing rays and accurate coupling of density and radiance fields.
3. **Efficient Disentanglement**: It allows for efficient unsupervised disentanglement of shape and appearance in volume rendering.
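For concreteness, below is a minimal sketch of the Laplace-CDF density parameterization and the resulting per-ray opacity. The function names, the NumPy implementation, and the piecewise-constant quadrature are illustrative assumptions for exposition, not the paper's exact code or sampling algorithm.

```python
import numpy as np

def laplace_cdf(s, beta):
    """CDF of a zero-mean Laplace distribution with scale beta."""
    return np.where(s <= 0,
                    0.5 * np.exp(s / beta),
                    1.0 - 0.5 * np.exp(-s / beta))

def density_from_sdf(sdf_values, alpha, beta):
    """Density sigma(x) = alpha * Psi_beta(-d(x)), where d(x) is the
    signed distance (positive outside the surface, negative inside)."""
    return alpha * laplace_cdf(-sdf_values, beta)

def opacity_along_ray(sdf_values, deltas, alpha, beta):
    """Approximate opacity O = 1 - exp(-sum_i sigma_i * delta_i) using a
    piecewise-constant (rectangle-rule) quadrature of the density along the ray."""
    sigma = density_from_sdf(sdf_values, alpha, beta)
    return 1.0 - np.exp(-np.sum(sigma * deltas))
```

Under this parameterization, the density approaches the constant alpha inside the surface and decays smoothly to zero outside it, with beta controlling how sharp the transition across the zero level set is; this is what ties the learned density directly to the SDF geometry.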
The method is evaluated on challenging datasets such as DTU and BlendedMVS, demonstrating superior geometry reconstruction quality compared to existing methods such as NeRF and NeRF++. Additionally, the method successfully disentangles the geometry and appearance of objects, which is not possible with previous approaches that require object masks or suffer from extraneous surface parts. The paper also discusses limitations and future research directions, including the need for a proof of correctness for the sampling algorithm and the extension to more general density models.