15 Mar 2024 | Qianjiang Hu, Zhimin Zhang, and Wei Hu
RangeLDM is a novel approach for rapidly generating high-quality range-view LiDAR point clouds using latent diffusion models (LDMs). The method addresses the challenge of generating realistic LiDAR data with high computational efficiency. By correcting the range-view data distribution through Hough voting, RangeLDM ensures accurate projection from point clouds to range images, which is critical for generative learning. The range images are then compressed into a latent space with a variational autoencoder (VAE), and a diffusion model operates in that latent space to enhance expressivity. A range-guided discriminator is introduced to preserve 3D structural fidelity during generation. Experiments on the KITTI-360 and nuScenes datasets show that RangeLDM outperforms state-of-the-art methods in both generation quality and speed, and the method applies to both unconditional and conditional generation tasks, including LiDAR point cloud upsampling and inpainting. Quantitative and qualitative evaluations demonstrate its ability to generate realistic LiDAR point clouds with high fidelity, and its fast sampling speed makes it suitable for real-time applications in autonomous driving.
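To make the range-view representation concrete, below is a minimal sketch of the standard spherical projection from a raw point cloud to a range image. This is not RangeLDM's implementation: the paper recovers the sensor's actual projection parameters via Hough voting, whereas the vertical field of view, image resolution, and function name here are hypothetical placeholders for a typical 64-beam sensor.

```python
# Minimal sketch (not the paper's code) of projecting a LiDAR point cloud onto a
# range image via spherical coordinates. RangeLDM instead estimates the
# sensor-specific projection parameters with Hough voting; the FOV and
# resolution below are assumed placeholder values.
import numpy as np

def point_cloud_to_range_image(points, height=64, width=1024,
                               fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an Nx3 array of (x, y, z) points to an HxW range image of depths."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)
    valid = depth > 1e-6
    x, y, z, depth = x[valid], y[valid], z[valid], depth[valid]

    yaw = np.arctan2(y, x)        # azimuth in [-pi, pi]
    pitch = np.arcsin(z / depth)  # elevation

    fov_up = np.deg2rad(fov_up_deg)
    fov_down = np.deg2rad(fov_down_deg)
    fov = fov_up - fov_down

    # Map angles to pixel coordinates: azimuth -> column, elevation -> row.
    u = 0.5 * (1.0 - yaw / np.pi) * width
    v = (1.0 - (pitch - fov_down) / fov) * height

    u = np.clip(np.floor(u), 0, width - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, height - 1).astype(np.int32)

    # Keep the nearest return when several points fall into the same pixel:
    # write far points first so closer ones overwrite them.
    range_image = np.full((height, width), -1.0, dtype=np.float32)
    order = np.argsort(depth)[::-1]
    range_image[v[order], u[order]] = depth[order]
    return range_image
```

The resulting HxW depth map is the 2D representation that, in RangeLDM, is compressed by the VAE and modeled by the latent diffusion process; pixels with no return are marked with a sentinel value (-1 here).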