Graph U-Nets


11 May 2019 | Hongyang Gao, Shuiwang Ji
This paper introduces Graph U-Nets (g-U-Nets), an architecture that extends the U-Net framework from image segmentation to graph data. The key contribution is a pair of operations, graph pooling (gPool) and graph unpooling (gUnpool), that enable adaptive downsampling and upsampling on graphs. The gPool layer selects a subset of nodes according to the scalar projection of their features onto a trainable projection vector, while the gUnpool layer restores the original graph structure by placing node features back at the positions recorded during pooling. Together, these operations support high-level feature encoding and decoding for graph embedding tasks.

The g-U-Nets architecture follows an encoder-decoder design: the encoder applies gPool layers to progressively shrink the graph, and the decoder applies gUnpool layers to restore it. Both encoder and decoder blocks use Graph Convolutional Network (GCN) layers to aggregate information from each node's neighborhood, and skip connections between corresponding encoder and decoder layers preserve spatial information, which improves performance.

To improve connectivity in the sampled graphs, the paper also applies graph power operations, which add edges between nodes within a certain number of hops. This mitigates the problem of isolated nodes after pooling and helps information propagate (see the code sketch below).
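The operations above can be made concrete with a short, hedged sketch. The PyTorch snippet below is a minimal illustration of the ideas as summarized here, not the authors' implementation: node scores come from a scalar projection onto a trainable vector, the top-k nodes are kept with a tanh gate on their features, unpooling writes features back to their recorded positions, and the graph power step connects nodes within a few hops. The function names, dense adjacency representation, and shapes are assumptions made for illustration.

```python
import torch


def gpool(adj, x, p, k):
    """gPool sketch: rank nodes by the scalar projection of their features
    onto a trainable vector p, keep the top-k, gate the kept features with
    tanh of their scores, and take the induced sub-adjacency."""
    scores = (x @ p) / p.norm()                            # (N,) projection scores
    values, idx = torch.topk(scores, k)                    # indices of kept nodes
    x_pooled = x[idx] * torch.tanh(values).unsqueeze(-1)   # gated node features
    adj_pooled = adj[idx][:, idx]                          # induced subgraph
    return adj_pooled, x_pooled, idx


def gunpool(adj_full, x_pooled, idx, num_nodes):
    """gUnpool sketch: write pooled features back into their recorded
    positions (zeros elsewhere) and reuse the pre-pooling adjacency."""
    x_restored = x_pooled.new_zeros(num_nodes, x_pooled.size(-1))
    x_restored[idx] = x_pooled
    return adj_full, x_restored


def augment_connectivity(adj, power=2):
    """Graph power sketch: with self-loops present, adj**power has a nonzero
    entry for every pair of nodes within `power` hops, so binarizing it adds
    those links and reduces isolated nodes after pooling."""
    reachable = torch.matrix_power(adj, power)
    return (reachable > 0).float()
```

The tanh gating is what keeps the projection vector trainable despite the hard top-k selection: gradients reach it through the gated feature values rather than through the node indices.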
Experiments on node classification (transductive) and graph classification (inductive) tasks show that g-U-Nets outperform existing models in accuracy, achieving state-of-the-art results on Cora, Citeseer, and Pubmed for transductive learning and on D&D, PROTEINS, and COLLAB for inductive learning. Ablation studies confirm the contributions of the gPool and gUnpool layers as well as the graph connectivity augmentation. To control overfitting, the models use L2 regularization and dropout, and the authors show that the extra parameters introduced by gPool and gUnpool layers are negligible and do not noticeably increase the risk of overfitting. Overall, g-U-Nets provide a strong framework for graph embedding and perform well across graph learning tasks.
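To make the shapes concrete, the toy run below chains the helpers from the sketch above in the spirit of one encoder/decoder level with a skip connection, roughly mirroring the architecture described earlier. The ring graph, sizes, and variable names are illustrative assumptions, and the snippet reuses the functions defined in the previous sketch.

```python
import torch

# Toy round trip (hypothetical sizes): 6 nodes on a ring with self-loops,
# 4 features per node, keep 3 nodes after pooling.
torch.manual_seed(0)
n, d, k = 6, 4, 3
ring = torch.eye(n)
ring[torch.arange(n), (torch.arange(n) + 1) % n] = 1.0
adj = ((ring + ring.T) > 0).float()                      # symmetric, with self-loops

x = torch.randn(n, d)
p = torch.nn.Parameter(torch.randn(d))                   # trainable projection vector

adj_aug = augment_connectivity(adj, power=2)             # connect nodes within 2 hops
adj_dn, x_dn, idx = gpool(adj_aug, x, p, k)              # encoder step: downsample
_, x_up = gunpool(adj, x_dn, idx, num_nodes=n)           # decoder step: upsample
x_out = x_up + x                                         # skip connection from encoder
print(x_out.shape)                                       # torch.Size([6, 4])
```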