The paper addresses unsupervised graph neural architecture search (GNAS). Existing GNAS methods rely heavily on supervised labels, which are often scarce or unavailable in real-world applications. To tackle this, the authors propose Disentangled Self-supervised Graph Neural Architecture Search (DSGAS), which discovers well-performing GNN architectures without labels by disentangling latent graph factors and optimizing multiple architectures simultaneously. The key contributions of DSGAS include:
1. **Disentangled Graph Architecture Super-Network**: This super-network incorporates multiple architectures with factor-wise disentanglement, so that architectures tailored to different latent factors can be optimized simultaneously (a minimal sketch appears after this list).
2. **Self-supervised Training with Joint Architecture-Graph Disentanglement**: This module estimates architecture performance under various latent factors by jointly modeling the relationships among architectures, graphs, and factors.
3. **Contrastive Search with Architecture Augmentations**: This module uses instance-discrimination tasks to encourage the discovery of architectures with distinct capabilities for capturing different factors.
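To make these ingredients concrete, below is a minimal PyTorch sketch of a factor-disentangled super-network and an instance-discrimination loss. This is an illustration under stated assumptions, not the authors' implementation: the names (`CandidateOp`, `FactorBranch`, `DisentangledSuperNet`, `factor_contrastive_loss`), the operation pool, the DARTS-style softmax mixing, and the dense-adjacency message passing are all assumptions introduced here for clarity.

```python
# Sketch: one mixed-operation branch per latent factor, searched jointly,
# plus an InfoNCE-style contrastive loss. Illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

CANDIDATE_OPS = ["gcn", "sage", "identity"]  # hypothetical operation pool


class CandidateOp(nn.Module):
    """One candidate GNN operation over a dense adjacency matrix (simplified)."""

    def __init__(self, name: str, dim: int):
        super().__init__()
        self.name = name
        self.lin = nn.Linear(dim, dim)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        if self.name == "gcn":
            return self.lin(adj @ x / deg)      # neighbor mean, then transform
        if self.name == "sage":
            return self.lin(x + adj @ x / deg)  # self features + neighbor mean
        return x                                # identity / skip connection


class FactorBranch(nn.Module):
    """A mixed layer: softmax-weighted sum over candidate ops (DARTS-style)."""

    def __init__(self, dim: int):
        super().__init__()
        self.ops = nn.ModuleList([CandidateOp(n, dim) for n in CANDIDATE_OPS])
        self.alpha = nn.Parameter(torch.zeros(len(CANDIDATE_OPS)))  # arch params

    def forward(self, x, adj):
        w = F.softmax(self.alpha, dim=0)
        return sum(wi * op(x, adj) for wi, op in zip(w, self.ops))


class DisentangledSuperNet(nn.Module):
    """K factor-specific branches optimized simultaneously, one per latent factor."""

    def __init__(self, dim: int, num_factors: int = 4):
        super().__init__()
        self.branches = nn.ModuleList([FactorBranch(dim) for _ in range(num_factors)])

    def forward(self, x, adj):
        return [branch(x, adj) for branch in self.branches]  # one embedding per factor


def factor_contrastive_loss(z1, z2, tau: float = 0.5):
    """InfoNCE-style instance discrimination between two views of one factor."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                   # node i vs. all nodes in view 2
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)


# Toy usage: 8 nodes, 16-dim features, 4 latent factors. In the paper the second
# view would come from architecture augmentations; here we merely perturb it.
x = torch.randn(8, 16)
adj = (torch.rand(8, 8) > 0.7).float()
net = DisentangledSuperNet(dim=16, num_factors=4)
views = net(x, adj)
loss = sum(factor_contrastive_loss(z, z + 0.01 * torch.randn_like(z)) for z in views)
loss.backward()  # gradients reach both the op weights and the architecture params
```

In the actual method, each branch's architecture parameters would be searched under the self-supervised objectives summarized above rather than this toy perturbation loss.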
Experiments on 11 real-world datasets show that DSGAS outperforms state-of-the-art GNAS baselines in both unsupervised and semi-supervised settings, demonstrating that it can discover strong GNN architectures without supervised labels. The paper also includes detailed ablation studies and analyses that validate the contribution of each component of the proposed framework.