Frustum PointNets for 3D Object Detection from RGB-D Data

13 Apr 2018 | Charles R. Qi, Wei Liu, Chenxia Wu, Hao Su, Leonidas J. Guibas
This paper presents Frustum PointNets, a novel framework for 3D object detection from RGB-D data in both indoor and outdoor scenes. Unlike previous methods that convert 3D point clouds to images or volumetric grids, Frustum PointNets operates directly on raw point clouds, leveraging both mature 2D object detectors and advanced 3D deep learning techniques. The key challenge addressed is efficient localization of objects in large-scale scenes: a 2D object detector proposes regions in the RGB image, and each region is extruded to a 3D frustum. Within each frustum, 3D instance segmentation and amodal 3D bounding box regression are performed using variants of PointNet. The method is evaluated on the KITTI and SUN RGB-D benchmarks, outperforming state-of-the-art methods by significant margins while maintaining real-time capability. The contributions of the work include a novel framework, effective training methods, and extensive evaluations that validate design choices and illuminate the method's strengths and limitations.
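To make the frustum-proposal step concrete, here is a minimal sketch (not the authors' implementation) of how a 2D detection box can be extruded into a 3D frustum: points are projected into the image with a pinhole camera model, and only those landing inside the box are kept. The function name `extract_frustum_points` and the intrinsic matrix values in the usage example are illustrative assumptions.

```python
import numpy as np

def extract_frustum_points(points, K, box2d):
    """Select 3D points whose image projection falls inside a 2D box.

    points : (N, 3) array of points in the camera frame
             (x right, y down, z forward).
    K      : (3, 3) camera intrinsic matrix.
    box2d  : (xmin, ymin, xmax, ymax) pixel coordinates of a 2D detection.
    Returns the (M, 3) subset of points lying inside the frustum.
    """
    # Keep only points in front of the camera to avoid division by zero
    # and spurious projections from behind the image plane.
    pts = points[points[:, 2] > 0]

    # Pinhole projection: (u, v) = (K @ p)[:2] / depth.
    uvw = pts @ K.T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    xmin, ymin, xmax, ymax = box2d
    mask = (u >= xmin) & (u <= xmax) & (v >= ymin) & (v <= ymax)
    return pts[mask]

# Usage with synthetic data and KITTI-like intrinsics (illustrative values):
K = np.array([[721.5,   0.0, 609.6],
              [  0.0, 721.5, 172.9],
              [  0.0,   0.0,   1.0]])
points = np.random.rand(10000, 3) * [40.0, 4.0, 60.0]  # synthetic cloud
frustum = extract_frustum_points(points, K, (500, 150, 700, 300))
```

In the full pipeline, the resulting frustum point cloud would then be fed to the PointNet-based instance segmentation and amodal box regression networks; the sketch above covers only the region-to-frustum extrusion.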