Unsupervised Graph Neural Architecture Search with Disentangled Self-supervision

8 Mar 2024 | Zeyang Zhang, Xin Wang, Ziwei Zhang, Guangyao Shen, Shiqi Shen, Wenwu Zhu
This paper introduces Disentangled Self-supervised Graph Neural Architecture Search (DSGAS), a novel unsupervised graph neural architecture search (GNAS) method. DSGAS aims to discover optimal graph neural network (GNN) architectures without relying on supervised labels, a challenging task because latent graph factors and neural architectures are entangled with each other.

DSGAS has three key components. First, a disentangled graph architecture super-network enables multiple architectures, one per latent factor, to be optimized simultaneously with factor-wise disentanglement. Second, self-supervised training with joint architecture-graph disentanglement estimates the performance of architectures under different factors by modeling the relationship among architectures, graphs, and factors. Third, a contrastive search with architecture augmentations discovers architectures with distinct, factor-specific expertise in capturing the various factors.

Extensive experiments on 11 real-world datasets show that DSGAS achieves state-of-the-art performance in both unsupervised and semi-supervised settings, significantly outperforming existing GNAS baselines. Its ability to disentangle latent factors and architectures is central to this success in automated GNN design.
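To make the super-network idea concrete, here is a minimal sketch of a disentangled super-network in which each latent factor keeps its own DARTS-style architecture weights over a shared pool of candidate operations. This is an illustration under assumptions, not the paper's actual implementation: the class name, the use of plain feature transforms as placeholder operations (a real GNAS super-network would use message-passing operations over the graph), and the relaxation details are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledSuperNet(nn.Module):
    """Sketch: K factor-specific architectures over one shared pool of
    candidate operations. Each factor k keeps its own architecture-weight
    vector alpha[k], so K architectures are optimized simultaneously."""

    def __init__(self, num_factors, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)  # shared operation pool
        self.num_factors = num_factors
        # one continuous architecture-parameter vector per latent factor
        self.alpha = nn.Parameter(torch.zeros(num_factors, len(candidate_ops)))

    def forward(self, x):
        outs = []
        for k in range(self.num_factors):
            # softmax relaxes the discrete operation choice for factor k
            w = F.softmax(self.alpha[k], dim=-1)
            mixed = sum(wi * op(x) for wi, op in zip(w, self.ops))
            outs.append(mixed)  # factor-k embedding
        return torch.stack(outs, dim=1)  # shape (N, K, dim)
```

Because `alpha` is a learnable parameter with one row per factor, gradient-based search can tune each factor's architecture independently while the candidate operations themselves stay shared.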
The paper also discusses related works, including graph neural architecture search, unsupervised neural architecture search, and graph self-supervised learning, and highlights the novelty and contributions of DSGAS in the field of unsupervised GNAS.
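The contrastive search component summarized above can be sketched as an InfoNCE-style objective that treats embeddings produced by an architecture and by its augmented counterpart as positive pairs. The function name, temperature value, and loss form below are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def arch_contrastive_loss(z, z_aug, temperature=0.5):
    """Sketch of an InfoNCE loss: row i of `z` (embeddings from one
    architecture) and row i of `z_aug` (embeddings from an augmented
    architecture) form a positive pair; all other rows are negatives.
    Names and temperature are illustrative assumptions."""
    z = F.normalize(z, dim=-1)
    z_aug = F.normalize(z_aug, dim=-1)
    logits = z @ z_aug.t() / temperature       # (N, N) similarity matrix
    targets = torch.arange(z.size(0), device=z.device)  # diagonal = positives
    return F.cross_entropy(logits, targets)
```

Minimizing this loss pushes each architecture's embeddings to agree with those of its augmented version while staying distinguishable from others, which is one plausible way to encourage factor-specific expertise without labels.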