PointConv: Deep Convolutional Networks on 3D Point Clouds
This paper introduces PointConv, a novel convolution operation designed for 3D point clouds, which are irregular and unordered, unlike the regular dense grids of images. PointConv treats convolution kernels as nonlinear functions of the local coordinates of 3D points, composed of a weight function and a density function. The weight functions are learned with multi-layer perceptrons (MLPs), while the density functions are estimated through kernel density estimation. A key contribution is an efficient reformulation of the operation that allows the networks to scale up and improves performance. PointConv computes translation-invariant and permutation-invariant convolutions on any point set in 3D space, and it can also be used as a deconvolution to propagate features from a subsampled point cloud back to its original resolution. Experiments on ModelNet40, ShapeNet, and ScanNet show that deep convolutional networks built on PointConv achieve state-of-the-art performance on challenging semantic segmentation tasks. In addition, PointConv matches the performance of 2D image convolutional networks on CIFAR-10 when the images are converted into point clouds. The paper also discusses related work and provides ablation studies to validate the effectiveness of PointConv's components.
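To make the core idea concrete, below is a minimal PyTorch sketch of a PointConv-style layer, not the authors' implementation: an MLP predicts per-neighbor kernel weights from relative 3D coordinates, and a kernel density estimate rescales each neighborhood's contribution before aggregation. The class and member names (SimplePointConv, weight_mlp, density_mlp), the fixed Gaussian bandwidth, and the small MLP applied to the density are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of a PointConv-style layer (illustrative, not the paper's code).
import torch
import torch.nn as nn

class SimplePointConv(nn.Module):
    """Approximate a continuous convolution over local neighborhoods of 3D points.

    For each center point, the kernel weight at a neighbor is predicted by an MLP
    from the neighbor's relative coordinates, and each neighbor's contribution is
    rescaled by an inverse-density term so densely sampled regions do not dominate.
    """

    def __init__(self, in_channels: int, out_channels: int, hidden: int = 32):
        super().__init__()
        # Weight function: maps relative 3D coordinates to a per-neighbor weight vector.
        self.weight_mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, in_channels),
        )
        # Small MLP turning a raw kernel-density estimate into an inverse-density scale
        # (an assumed design choice for this sketch).
        self.density_mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )
        self.linear = nn.Linear(in_channels, out_channels)

    def forward(self, rel_xyz: torch.Tensor, feats: torch.Tensor) -> torch.Tensor:
        # rel_xyz: (B, N, K, 3) neighbor coordinates relative to each of N center points
        # feats:   (B, N, K, C) features of the K neighbors of each center point
        bandwidth = 0.1  # fixed Gaussian bandwidth, an illustrative assumption
        sq_dist = (rel_xyz ** 2).sum(dim=-1, keepdim=True)                        # (B, N, K, 1)
        density = torch.exp(-sq_dist / (2 * bandwidth ** 2)).mean(dim=2, keepdim=True)  # (B, N, 1, 1)
        inv_density = self.density_mlp(1.0 / (density + 1e-8))                    # (B, N, 1, 1)

        weights = self.weight_mlp(rel_xyz)                                         # (B, N, K, C)
        # Density-weighted sum over the K neighbors, then a linear map to out_channels.
        aggregated = (weights * feats * inv_density).sum(dim=2)                    # (B, N, C)
        return self.linear(aggregated)

# Usage: 2 batches, 8 center points, 16 neighbors each, 6-dim input features.
if __name__ == "__main__":
    layer = SimplePointConv(in_channels=6, out_channels=64)
    rel_xyz = torch.randn(2, 8, 16, 3)
    feats = torch.randn(2, 8, 16, 6)
    out = layer(rel_xyz, feats)  # shape (2, 8, 64)
    print(out.shape)
```

The sketch mirrors the summary's description of combining a learned weight function with a density term; the paper's efficient reformulation, which avoids materializing the per-neighbor weight tensors at full size, is not reproduced here.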