15 Mar 2024 | Qianjiang Hu, Zhimin Zhang, and Wei Hu
RangeLDM is a novel approach for generating high-quality range-view LiDAR point clouds at high speed. The method addresses the limitations of existing deep generative models by correcting the range-view data distribution through Hough voting, compressing range images into a latent space with a variational autoencoder (VAE), and enhancing expressivity with a diffusion model. A range-guided discriminator is introduced to preserve 3D structural fidelity. Experimental results on the KITTI-360 and nuScenes datasets demonstrate that RangeLDM outperforms state-of-the-art methods in both generation quality and speed, achieving superior visual quality and faster sampling rates. The method also supports conditional generation tasks such as LiDAR point cloud upsampling and inpainting, further showcasing its versatility and effectiveness.
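The range-view representation at the core of this pipeline maps each LiDAR return to a pixel via spherical coordinates (azimuth → column, inclination → row), storing the range as the pixel value. A minimal NumPy sketch of that standard projection is below; the image size and field-of-view bounds are illustrative defaults for a roughly 64-beam sensor, not values taken from the RangeLDM paper.

```python
import numpy as np

def points_to_range_image(points, height=64, width=1024,
                          fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an (H, W) range image.

    Illustrative sketch of the standard spherical projection used by
    range-view methods; FOV bounds here are assumed, not from the paper.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)             # range of each point
    yaw = np.arctan2(y, x)                         # azimuth in [-pi, pi]
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-8), -1.0, 1.0))

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Normalize angles to pixel coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * width          # column from azimuth
    v = (1.0 - (pitch - fov_down) / fov) * height  # row from inclination

    u = np.clip(np.floor(u), 0, width - 1).astype(np.int64)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int64)

    image = np.zeros((height, width), dtype=np.float64)
    # When several points fall into one pixel, keep the nearest return:
    # write in order of decreasing range so closer points overwrite farther.
    order = np.argsort(-r)
    image[v[order], u[order]] = r[order]
    return image
```

The inverse mapping (pixel plus range back to 3D coordinates) is what lets a generated range image be converted into a point cloud, which is also why distortions in the 2D image translate directly into 3D structural errors.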