10 Jun 2020 | Kaveh Hassani, Amir Hosein Khasahmadi
The paper introduces a self-supervised approach for learning node and graph representations by contrasting structural views of graphs, specifically a first-order neighborhood (adjacency) view and a graph-diffusion view. Unlike in visual representation learning, increasing the number of views or contrasting multi-scale encodings does not improve performance. The method achieves state-of-the-art results among self-supervised approaches on 8 out of 8 node and graph classification benchmarks under the linear evaluation protocol, and even outperforms supervised baselines on 4 of the 8. Key findings include the effectiveness of contrasting encodings of first-order neighbors against graph diffusion, the superiority of a simple graph readout layer over hierarchical pooling methods, and the negative impact of regularization and normalization layers.
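The second structural view is commonly built with Personalized PageRank (PPR) diffusion, which turns the sparse adjacency matrix into a dense matrix of multi-hop influence scores. A minimal numpy sketch (the teleport probability `alpha` and the toy path graph are illustrative assumptions, not values from the paper):

```python
import numpy as np

def ppr_diffusion(adj, alpha=0.2):
    """PPR graph diffusion: S = alpha * (I - (1 - alpha) * A_hat)^-1,
    where A_hat is the symmetrically normalized adjacency with self-loops.
    alpha is the teleport probability (assumed value, for illustration)."""
    n = adj.shape[0]
    a_tilde = adj + np.eye(n)                      # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_tilde.sum(axis=1)))
    a_hat = d_inv_sqrt @ a_tilde @ d_inv_sqrt      # symmetric normalization
    return alpha * np.linalg.inv(np.eye(n) - (1 - alpha) * a_hat)

# toy 4-node path graph: 0-1-2-3
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
S = ppr_diffusion(adj)  # dense: every node pair gets a nonzero score
```

The resulting dense matrix `S` plays the role of a second "view" of the same graph, fed to a separate GNN encoder alongside the original adjacency.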
The approach leverages graph neural networks (GNNs) and mutual information (MI) maximization to learn rich representations without task-dependent labels, making it particularly useful for graphs with complex structure and limited labeled data.
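MI maximization of this kind is typically implemented with a Jensen-Shannon-style estimator that scores node embeddings from one view against the graph embedding of the other view (positives) and against embeddings of other graphs (negatives). A hedged numpy sketch; the inner-product discriminator, mean readout, and random toy embeddings are illustrative assumptions, not the paper's exact architecture:

```python
import numpy as np

def softplus(x):
    # numerically stable softplus: log(1 + e^x)
    return np.log1p(np.exp(-np.abs(x))) + np.maximum(x, 0.0)

def jsd_mi(pos_scores, neg_scores):
    """Jensen-Shannon MI estimator (Deep InfoMax style): the bound
    rises when positive scores are high and negative scores are low."""
    return -softplus(-pos_scores).mean() - softplus(neg_scores).mean()

# hypothetical toy setup: 5 node embeddings from view 1, scored by
# inner product against graph embeddings from view 2
rng = np.random.default_rng(0)
h_nodes = rng.normal(size=(5, 8))    # node embeddings, view 1
g_pos = h_nodes.mean(axis=0)         # same-graph embedding, view 2 (mean readout)
g_neg = rng.normal(size=8)           # embedding of a different (negative) graph
pos = h_nodes @ g_pos
neg = h_nodes @ g_neg
loss = -jsd_mi(pos, neg)             # minimize the negative MI bound
```

Training pushes node-graph pairs from the same graph (across views) to score higher than cross-graph pairs, which is what yields labels-free representations usable by a linear classifier.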