4 Jul 2020 | Ming Chen, Zhewei Wei, Zengfeng Huang, Bolin Ding, Yaliang Li
This paper proposes GCNII, a deep graph convolutional network that effectively addresses the over-smoothing problem in graph neural networks. GCNII extends the vanilla GCN model with two simple yet effective techniques: initial residual connection and identity mapping. The initial residual connection adds a skip connection from the initial representation (the input after the first projection layer) to every subsequent layer, while identity mapping adds a scaled identity matrix to each layer's weight matrix, so that each layer changes the representation only by a small learned correction. Together, these techniques prevent over-smoothing and let GCN models benefit from depth.
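To make the two techniques concrete, here is a minimal sketch of a single GCNII layer in PyTorch. This is an illustrative implementation, not the authors' reference code: `adj` is assumed to be the renormalized adjacency matrix D̃^(-1/2) Ã D̃^(-1/2) as a dense tensor, and `h0` the initial representation produced by a first fully connected layer.

```python
import torch
import torch.nn as nn


class GCNIILayer(nn.Module):
    """One GCNII layer: initial residual connection + identity mapping (sketch)."""

    def __init__(self, dim, alpha, beta):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(dim, dim))
        nn.init.xavier_uniform_(self.weight)
        self.alpha = alpha  # mixing weight for the initial residual connection
        self.beta = beta    # mixing weight between identity and learned weights

    def forward(self, h, h0, adj):
        # Initial residual: blend the propagated signal with the layer-0
        # representation h0, so the input is never smoothed away entirely.
        support = (1 - self.alpha) * (adj @ h) + self.alpha * h0
        # Identity mapping: apply (1 - beta) * I + beta * W, i.e. keep most
        # of the signal unchanged and learn only a small correction.
        return torch.relu((1 - self.beta) * support + self.beta * (support @ self.weight))
```

A deep model stacks many such layers while feeding the same h0 to each. In the paper, alpha is a small constant (e.g. 0.1) and beta decays with depth as beta_l = log(lambda / l + 1), so deeper layers stay closer to identity maps.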
Theoretical analysis shows that a K-layer GCNII can express a polynomial spectral filter of order K with arbitrary coefficients, a property that underlies its ability to benefit from depth. Empirical studies demonstrate that GCNII outperforms state-of-the-art methods on various semi-supervised and full-supervised tasks. Experiments show that the deep GCNII model achieves new state-of-the-art results on multiple datasets, including Cora, Citeseer, Pubmed, Chameleon, Cornell, Texas, and Wisconsin. Additionally, GCNII performs well on inductive learning tasks such as the PPI dataset, where it achieves state-of-the-art results with a 9-layer model.
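For concreteness, the filter class in this claim can be written down explicitly (a sketch of the statement; notation assumed: $\tilde{A} = A + I$ is the adjacency matrix with self-loops and $\tilde{D}$ its degree matrix). A $K$-layer GCNII can express

\[
h \;=\; \Big( \sum_{k=0}^{K} \theta_k \,\tilde{P}^{k} \Big)\, \mathbf{x},
\qquad
\tilde{P} \;=\; \tilde{D}^{-1/2}\, \tilde{A}\, \tilde{D}^{-1/2},
\]

for arbitrary coefficients $\theta_k$, whereas a deep vanilla GCN is effectively restricted to a polynomial with fixed coefficients, which is what drives its representations toward collapse as depth grows.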
The paper also provides a spectral analysis of GCN and GCNII, showing that nodes with higher degrees are more likely to suffer from over-smoothing: the analysis suggests that a node's convergence rate depends on its degree, and experiments confirm this theoretical finding. An ablation study shows that both initial residual connection and identity mapping are needed to resolve the over-smoothing problem.
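The spectral fact behind this can be sketched as follows (stated under the assumption of a connected graph; the added self-loops rule out the bipartite obstruction to convergence). Repeatedly applying the renormalized propagation matrix $\tilde{P} = \tilde{D}^{-1/2}\tilde{A}\tilde{D}^{-1/2}$ converges to a projection onto its dominant eigenvector, whose entries are determined by node degrees alone:

\[
\lim_{K \to \infty} \tilde{P}^{K}\, \mathbf{x}
\;=\; \langle \mathbf{x}, \mathbf{u} \rangle \, \mathbf{u},
\qquad
u_i \;=\; \sqrt{\frac{d_i + 1}{2m + n}},
\]

where $d_i$ is the degree of node $i$, $n$ the number of nodes, and $m$ the number of edges. In the limit only degree information survives, and the paper quantifies how fast individual nodes approach this limit as a function of their degree.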
GCNII is a simple and effective deep graph convolutional network that addresses the over-smoothing problem and achieves state-of-the-art results on various tasks. The model's ability to express arbitrary polynomial filters and its effective use of initial residual connection and identity mapping make it a promising approach for deep graph neural networks.