TRIPS: Trilinear Point Splatting for Real-Time Radiance Field Rendering

26 Mar 2024 | Linus Franke, Darius Rückert, Laura Fink, and Marc Stamminger
TRIPS (Trilinear Point Splatting) is a real-time radiance field rendering method that combines the strengths of 3D Gaussian Splatting and ADOP. It rasterizes points into a screen-space image pyramid, where the projected point size determines the target pyramid layer; this allows large points to be rendered with a single trilinear write. A lightweight neural network then reconstructs a hole-free image with detail beyond the splat resolution. Because the render pipeline is fully differentiable, point sizes and positions are optimized automatically. TRIPS surpasses existing state-of-the-art methods in rendering quality while maintaining 60 frames per second on readily available hardware, and it performs well in challenging scenarios such as scenes with intricate geometry, expansive landscapes, and auto-exposed footage. The project page is located at: https://lfranke.github.io/trips.

The trilinear point renderer splats each point bilinearly onto its screen-space position and linearly across two resolution layers, selected by the projected point size. The render function takes camera intrinsics, the extrinsic pose, point positions, an environment map, world-space point sizes, neural point descriptors, and transparencies as inputs, and outputs a neural image. Because the trilinear splatting is differentiable, point sizes can be optimized to fill large holes in the scene. Overlapping points are resolved with multi-resolution alpha blending, which ensures smooth rendering. A neural network reconstructs the final image, with a spherical harmonics module and tone mapping modeling view-dependent effects and camera-specific parameters. Evaluated on several datasets, including Tanks&Temples and MipNeRF-360, the method shows superior LPIPS, PSNR, and SSIM scores.
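To make the trilinear write concrete, the sketch below computes the splat weights for a single point: the projected radius selects a fractional pyramid level, the point is weighted linearly between the two bracketing layers, and within each layer it is splatted bilinearly to the four surrounding pixels. This is an illustrative helper under assumed conventions (pyramid layer `l` halves resolution `l` times), not the authors' CUDA implementation.

```python
import math

def trilinear_splat_weights(x, y, radius_px):
    """Hypothetical sketch of TRIPS-style trilinear splat weights.

    A point projected to screen position (x, y) with projected pixel
    radius `radius_px` is written to the two pyramid layers whose pixel
    sizes bracket the radius, bilinearly within each layer, so each
    point costs a single trilinear write of 2 x 4 = 8 weighted values.
    Returns a list of (layer, px, py, weight) tuples.
    """
    # Fractional pyramid level: layer l has pixels 2**l input pixels
    # wide, so a radius r falls between layers floor(log2 r) and +1.
    level = max(0.0, math.log2(max(radius_px, 1.0)))
    lo = int(math.floor(level))
    t = level - lo  # linear interpolation weight between the two layers

    writes = []
    for layer, layer_w in ((lo, 1.0 - t), (lo + 1, t)):
        scale = 2 ** layer
        lx, ly = x / scale, y / scale          # position in this layer
        ix, iy = int(math.floor(lx)), int(math.floor(ly))
        fx, fy = lx - ix, ly - iy              # bilinear fractions
        for dx, wx in ((0, 1.0 - fx), (1, fx)):
            for dy, wy in ((0, 1.0 - fy), (1, fy)):
                writes.append((layer, ix + dx, iy + dy, layer_w * wx * wy))
    return writes
```

Because the weights are smooth in position and radius, gradients can flow back to both, which is what enables the automatic optimization of point sizes and positions mentioned above.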
TRIPS is robust to extreme input conditions and handles large point clouds efficiently: even the largest evaluated scene, with over 70M points, renders in about 15 ms per frame. The result is a robust real-time point-based radiance field rendering pipeline that renders highly detailed scenes and fills large gaps while maintaining real-time frame rates on commonly available hardware, achieving high rendering quality even in challenging scenarios with a simple neural reconstruction network. An open-source implementation is available at: https://github.com/lfranke/TRIPS.
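The multi-resolution alpha blending step resolves overlapping splats by compositing depth-sorted fragments front to back. The following is a minimal per-pixel sketch of that standard compositing rule, shown for illustration; the fragment representation and early-termination threshold are assumptions, not the authors' implementation.

```python
def blend_front_to_back(fragments):
    """Per-pixel front-to-back alpha compositing (illustrative sketch).

    `fragments` is a depth-sorted list of (color, alpha) pairs for one
    pixel, nearest fragment first; each color is an (r, g, b) tuple.
    Returns the blended color and the remaining transmittance.
    """
    out = [0.0, 0.0, 0.0]
    transmittance = 1.0  # fraction of light still unblocked
    for color, alpha in fragments:
        w = transmittance * alpha
        for c in range(3):
            out[c] += w * color[c]
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:  # stop once the pixel is effectively opaque
            break
    return tuple(out), transmittance
```

Front-to-back order allows early termination once accumulated opacity saturates, which matters for efficiency when tens of millions of points project onto the image.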