PointConv: Deep Convolutional Networks on 3D Point Clouds

9 Nov 2020 | Wenxuan Wu, Zhongang Qi, Li Fuxin
PointConv is a novel convolution operation for 3D point clouds that enables deep convolutional networks on irregular and unordered point data. Unlike traditional 2D CNNs, which operate on regular grids, PointConv treats convolution kernels as nonlinear functions of the local 3D coordinates relative to a reference point, composed of a weight function and a density function. The weight functions are learned with multi-layer perceptrons (MLPs), while the density functions are estimated via kernel density estimation and compensate for the non-uniform sampling of point clouds. Because the operation sums over an unordered neighborhood and depends only on relative coordinates, PointConv is permutation- and translation-invariant, so it can compute convolution on any point set in 3D space. It can also be used as a deconvolution operator to propagate features from subsampled point clouds back to their original resolution.

A key contribution is a reformulation that changes the order of summation in the convolution, substantially reducing memory consumption and allowing the weight functions to be computed efficiently. This makes it practical to scale PointConv networks to the depth and width of modern CNNs.

Experiments on ModelNet40, ShapeNet, and ScanNet show that PointConv-based networks achieve state-of-the-art results on 3D point cloud semantic segmentation, and experiments on CIFAR-10 demonstrate that a PointConv network can match the performance of a 2D CNN of similar structure. The work is compared with other methods, including PointNet, PointNet++, and SPLATNet, and PointConv achieves competitive performance throughout. The paper also includes ablation studies and visualizations of the learned filters, demonstrating the effectiveness of PointConv for 3D point cloud processing.
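To make the operation described above concrete, here is a minimal NumPy sketch of a single PointConv at one center point. For readability it simplifies the learned weight function to per-channel weights rather than the full c_in × c_out matrix per offset used in the paper, and the toy two-layer MLP and all parameter names are illustrative assumptions, not the authors' code.

```python
import numpy as np

def mlp_weights(rel_coords, w1, b1, w2, b2):
    # Toy 2-layer MLP standing in for the learned weight function:
    # maps each relative offset (dx, dy, dz) to a (c_in,)-vector of
    # kernel weights (the paper's full version outputs a matrix).
    h = np.maximum(rel_coords @ w1 + b1, 0.0)   # ReLU hidden layer
    return h @ w2 + b2                          # (k, c_in)

def pointconv_naive(center, neighbors, feats, density, params):
    # center: (3,); neighbors: (k, 3); feats: (k, c_in)
    # density: (k,) inverse-density scores, e.g. from kernel density estimation
    rel = neighbors - center                    # relative coords -> translation invariance
    w = mlp_weights(rel, *params)               # (k, c_in) predicted kernel weights
    scaled = density[:, None] * feats           # reweight features by inverse density
    # Sum over neighbors: order-independent -> permutation invariance
    return (w * scaled).sum(axis=0)             # (c_in,)
```

Because the neighborhood is reduced with an unordered sum, shuffling the neighbors (together with their features and density scores) leaves the output unchanged, which is the permutation invariance the text refers to.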
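The memory-saving reformulation summarized above, which swaps the order of summation so that neighbor aggregation happens before the weight MLP's final linear layer, can be sketched as follows. Splitting the MLP into a hidden part and a folded final matrix `final_w` is an assumption made for illustration; the shapes follow the description in the summary, not the released code.

```python
import numpy as np

def pointconv_efficient(rel, feats, density, hidden_w, hidden_b, final_w):
    # rel: (k, 3) neighbor offsets; feats: (k, c_in); density: (k,)
    # hidden_w, hidden_b: last hidden layer of the weight MLP -> (k, c_mid)
    # final_w: (c_mid * c_in, c_out), the MLP's final linear layer folded
    # into a single matrix applied AFTER aggregating over neighbors.
    m = np.maximum(rel @ hidden_w + hidden_b, 0.0)  # (k, c_mid)
    scaled = density[:, None] * feats               # (k, c_in)
    # Swap the summation order: aggregate over the k neighbors first,
    # producing a small (c_mid, c_in) matrix instead of materializing
    # a (k, c_in, c_out) weight tensor at every center point.
    agg = m.T @ scaled                              # (c_mid, c_in)
    return agg.reshape(-1) @ final_w                # (c_out,)
```

The saving comes from the aggregation step: memory per center point drops from O(k * c_in * c_out) for explicit per-neighbor weight matrices to O(c_mid * c_in), while the result is mathematically identical because the final linear map commutes with the sum over neighbors.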