VolSDF is a novel approach to neural volume rendering that improves geometry representation and reconstruction by modeling the volume density as a function of the signed distance function (SDF) to the scene's surface. This parameterization provides a useful inductive bias for geometry learning, allows the opacity approximation error along a viewing ray to be bounded, and enables an effective disentanglement of shape and appearance in volume rendering. Concretely, the density is defined by applying the cumulative distribution function (CDF) of the Laplace distribution to the SDF. This formulation supports a sampling algorithm that approximates the opacity along each viewing ray to within a provable error bound, yielding high-fidelity rendering and a more accurate coupling of geometry and radiance.

The method is implemented with two multi-layer perceptrons (MLPs), one approximating the SDF and one the radiance field, together with additional learnable parameters that control the density. Positional encoding is applied to the inputs to improve the learning of high-frequency details, and training minimizes a loss that combines a color reconstruction term with an SDF regularization term.

Evaluated on challenging multiview 3D surface reconstruction tasks from the DTU and BlendedMVS datasets, VolSDF outperforms existing methods such as NeRF and NeRF++ in reconstruction accuracy and rendering quality, while avoiding the need for object masks. Because shape and appearance are disentangled, the method also supports switching shape and appearance between scenes.
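As a rough illustration of the density formulation above, the following sketch maps SDF values to densities via the Laplace CDF. The function names are illustrative, and `alpha` and `beta` stand in for the learnable parameters mentioned in the text; this is a minimal NumPy sketch, not the paper's implementation.

```python
import numpy as np

def laplace_cdf(s, beta):
    """CDF of a zero-mean Laplace distribution with scale beta."""
    s = np.asarray(s, dtype=float)
    return np.where(s <= 0,
                    0.5 * np.exp(s / beta),
                    1.0 - 0.5 * np.exp(-s / beta))

def sdf_to_density(sdf, alpha=1.0, beta=0.1):
    """Density as a function of the SDF: sigma(x) = alpha * Psi_beta(-d(x)).

    Deep inside the surface (sdf << 0) the density approaches alpha;
    far outside (sdf >> 0) it decays to zero; at the surface it is alpha / 2.
    """
    return alpha * laplace_cdf(-np.asarray(sdf, dtype=float), beta)

# Density decreases monotonically as the SDF increases (inside -> outside):
print(sdf_to_density([-0.5, 0.0, 0.5], alpha=1.0, beta=0.1))
```

As `beta` shrinks, the density transition concentrates at the zero level set of the SDF, which is what makes this parameterization a strong inductive bias toward well-defined surfaces.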
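The opacity that the sampling algorithm approximates arises from numerical quadrature of the volume rendering integral. The sketch below shows the standard NeRF-style discretization of that integral, which applies to any density field, including one derived from an SDF as above; variable names and the exact discretization details are assumptions, not the paper's code.

```python
import numpy as np

def render_ray(sigmas, colors, t_vals):
    """Quadrature of the volume rendering integral along one ray.

    sigmas: (N,) densities at the N sample segments
    colors: (N, 3) radiance at the sample segments
    t_vals: (N+1,) segment boundaries along the ray
    """
    deltas = np.diff(t_vals)                  # segment lengths
    free_energy = sigmas * deltas
    # Transmittance up to sample i: T_i = exp(-sum_{j<i} sigma_j * delta_j)
    trans = np.exp(-np.concatenate([[0.0], np.cumsum(free_energy)[:-1]]))
    alphas = 1.0 - np.exp(-free_energy)       # per-segment opacity
    weights = trans * alphas                  # rendering weights
    rgb = (weights[:, None] * colors).sum(axis=0)
    opacity = weights.sum()                   # total opacity along the ray
    return rgb, opacity, weights
```

Because the weights depend on a cumulative sum of densities, placing samples poorly can badly misestimate the opacity; the bound on the opacity approximation error is what lets the method choose sample locations that keep this quadrature accurate.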
The approach has potential applications in various fields, including computer graphics, robotics, and virtual reality.
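A training objective combining color and SDF terms, as described above, might be sketched as follows. The L1 photometric term, the Eikonal-style regularizer on SDF gradients, and the weight `lam` are assumptions chosen for illustration, not the paper's exact loss.

```python
import numpy as np

def volsdf_style_loss(pred_rgb, gt_rgb, sdf_grads, lam=0.1):
    """Combined color + SDF loss sketch.

    pred_rgb, gt_rgb: (R, 3) rendered and ground-truth pixel colors
    sdf_grads: (M, 3) gradients of the SDF network at sampled points
    lam: weight of the SDF regularizer (illustrative value)
    """
    color_loss = np.abs(pred_rgb - gt_rgb).mean()        # photometric term
    grad_norms = np.linalg.norm(sdf_grads, axis=-1)
    # Eikonal-style term: a valid SDF has unit-norm gradient almost everywhere
    sdf_loss = ((grad_norms - 1.0) ** 2).mean()
    return color_loss + lam * sdf_loss
```

The SDF term is what keeps the geometry network close to a true signed distance function during training, so that the Laplace-CDF density retains its intended interpretation.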