19 Aug 2019 | Guohao Li*, Matthias Müller*, Ali Thabet, Bernard Ghanem
DeepGCNs: Can GCNs Go as Deep as CNNs?
DeepGCNs adapt concepts from deep convolutional neural networks (CNNs), namely residual/dense connections and dilated convolutions, to overcome the vanishing gradient problem that limits the depth of graph convolutional networks (GCNs). The resulting architectures, ResGCN and DenseGCN, make training deeper GCNs stable; a 56-layer GCN significantly improves point cloud semantic segmentation, achieving a +3.7% mIoU gain over state-of-the-art methods. Extensive experiments on the S3DIS dataset validate these architectures and demonstrate the effectiveness of deep GCNs on non-Euclidean data: deep GCNs outperform shallow ones, with ResGCN-56 leading on several key tasks. The study highlights the importance of addressing vanishing gradients and expanding the receptive field in GCNs, and it contributes new methods for training deep GCNs with potential in a wide range of applications.
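The two borrowed CNN ideas can be sketched concretely. Below is a minimal NumPy illustration (an assumption-laden reconstruction, not the authors' implementation): a residual GCN layer that adds the input back after aggregation, and a dilated k-NN selector that skips every d-th sorted neighbor to enlarge the receptive field without adding neighbors.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def gcn_layer(h, a_hat, w):
    """Plain graph convolution: aggregate neighbor features via the
    normalized adjacency a_hat, then apply a linear transform."""
    return relu(a_hat @ h @ w)

def res_gcn_layer(h, a_hat, w):
    """Residual variant: the identity skip connection keeps gradients
    flowing, which is what permits very deep stacks (e.g. 56 layers)."""
    return gcn_layer(h, a_hat, w) + h

def dilated_knn(sorted_neighbors, k, d):
    """Dilated k-NN: take every d-th entry of a distance-sorted neighbor
    list, widening the receptive field at no extra aggregation cost."""
    return sorted_neighbors[::d][:k]

# Toy graph: 4 nodes with 8-dim features, self-loops included.
rng = np.random.default_rng(0)
n, dim = 4, 8
a = np.eye(n) + rng.integers(0, 2, size=(n, n))
a_hat = a / a.sum(axis=1, keepdims=True)   # row-normalized adjacency
h = rng.standard_normal((n, dim))
w = rng.standard_normal((dim, dim)) * 0.1

out = h
for _ in range(56):                        # mirrors the paper's 56-layer depth
    out = res_gcn_layer(out, a_hat, w)
print(out.shape)                           # feature shape is preserved: (4, 8)

# Dilation d=2 over 8 sorted neighbors picks indices 0, 2, 4, 6.
print(dilated_knn(list(range(8)), k=4, d=2))
```

The residual form `out = F(out) + out` is the key: without the skip connection, stacking 56 plain `gcn_layer` calls would suffer the vanishing-gradient behavior the paper sets out to fix.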