DeepGCNs: Can GCNs Go as Deep as CNNs?

19 Aug 2019 | Guohao Li*, Matthias Müller*, Ali Thabet, Bernard Ghanem
DeepGCNs introduce new methods to train very deep Graph Convolutional Networks (GCNs) by adapting concepts from Convolutional Neural Networks (CNNs), such as residual/dense connections and dilated convolutions. The paper addresses the challenge of training deep GCNs, which are typically limited to shallow models due to the vanishing gradient problem. By incorporating these CNN-inspired techniques, the authors demonstrate that deep GCNs can be trained effectively and achieve significant performance improvements. They present a 56-layer GCN that outperforms state-of-the-art models in point cloud semantic segmentation by 3.7% in mIoU.

The study shows that residual connections, dense connections, and dilated convolutions help alleviate the vanishing gradient problem and improve the stability and performance of deep GCNs. The proposed methods enable deeper GCNs to capture more complex features and handle non-Euclidean data more effectively. The results highlight the potential of deep GCNs in applications such as point cloud segmentation and suggest that these techniques can significantly advance GCN-based research.
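The two CNN-inspired ingredients above, residual skip connections and dilated neighborhoods, can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions, not the authors' implementation: the function names (`gcn_layer`, `res_gcn_layer`, `dilated_knn`) are hypothetical, and a mean-over-neighbors aggregator stands in for the paper's graph convolution operator.

```python
import numpy as np

def gcn_layer(H, A, W):
    """A plain GCN-style layer (illustrative): mean-aggregate neighbor
    features over adjacency matrix A, then apply a linear map and ReLU."""
    deg = A.sum(axis=1, keepdims=True)          # node degrees
    agg = (A @ H) / np.maximum(deg, 1.0)        # mean over neighbors
    return np.maximum(agg @ W, 0.0)             # ReLU

def res_gcn_layer(H, A, W):
    """Residual GCN layer: adding the input back gives gradients an
    identity path, which is what allows very deep stacks to train."""
    return H + gcn_layer(H, A, W)

def dilated_knn(points, k, d):
    """Dilated k-NN: compute the k*d nearest neighbors of each point and
    keep every d-th one, enlarging the receptive field without adding
    neighbors (analogous to dilated convolution in CNNs)."""
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    order = np.argsort(dist, axis=1)[:, 1:k * d + 1]  # drop self (index 0)
    return order[:, ::d]

# Tiny demo: a deep residual stack on a graph built from dilated k-NN.
rng = np.random.default_rng(0)
N, F = 32, 16
pts = rng.normal(size=(N, 3))
nbrs = dilated_knn(pts, k=4, d=2)               # (N, 4) dilated neighbors
A = np.zeros((N, N))
A[np.arange(N)[:, None], nbrs] = 1.0            # adjacency from dilated k-NN
H = rng.normal(size=(N, F))
W = rng.normal(size=(F, F)) * 0.01              # small weights for stability
for _ in range(56):                             # a 56-layer residual stack
    H = res_gcn_layer(H, A, W)
```

Without the `H +` skip connection, repeated aggregation tends to wash out node features in deep stacks; the identity path is the simple change that keeps the 56-layer model trainable.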