Benchmarking Graph Neural Networks


28 Dec 2022 | Vijay Prakash Dwivedi, Chaitanya K. Joshi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson
This paper introduces an open-source benchmarking framework for Graph Neural Networks (GNNs) designed to facilitate the development and evaluation of GNN models. The framework includes a diverse collection of medium-scale mathematical and real-world graphs, enables fair model comparison under a fixed parameter budget, and provides an open-source, easy-to-use, and reproducible code infrastructure. As of December 2022, the GitHub repository has received over 2,000 stars and 380 forks, indicating its widespread use in the GNN community. The framework is built on the DGL and PyTorch libraries and includes 12 graph datasets, collected from real-world sources or generated from mathematical models. These datasets are suited to academic research: they are medium-scale and cover fundamental learning tasks at the graph, node, and edge levels. The framework also provides a modular coding infrastructure covering data pipelines, GNN layers and models, training and evaluation functions, network and hyperparameter configurations, and scripts for reproducibility.
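To make the fixed parameter budget concrete: a common way to enforce such a budget in PyTorch is to count a model's trainable parameters and adjust its hidden dimension until the total lands near the target. The sketch below is illustrative and not taken from the benchmark's codebase; the budget value and the toy model are assumptions.

```python
import torch.nn as nn

def count_parameters(model: nn.Module) -> int:
    # Total number of trainable parameters in the model.
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Illustrative only: size a small MLP so its parameter count stays near a
# hypothetical ~100k budget (the budget and architecture are assumptions,
# not the paper's actual models).
budget = 100_000
hidden_dim = 280  # tuned by hand so the total lands close to the budget
model = nn.Sequential(
    nn.Linear(64, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, hidden_dim),
    nn.ReLU(),
    nn.Linear(hidden_dim, 10),
)
print(count_parameters(model), "trainable parameters; target budget:", budget)
```

Comparing architectures at matched parameter counts in this way helps ensure that accuracy differences reflect layer design rather than model size.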
The paper discusses the design choices behind the benchmark, including medium-scale datasets that enable swift and reliable prototyping, standardized experimental protocols, and fixed parameter budgets for fair model comparison. The benchmark has been used to explore various aspects of GNNs, such as aggregation functions, expressive power, pooling mechanisms, normalization and regularization, and robustness and efficiency. A key contribution of the paper is the introduction of graph positional encoding (PE) in GNNs, which improves the performance of message-passing graph convolutional networks (MP-GCNs) on both synthetic and real-world datasets. The paper also presents additional studies on different GNN categories and on edge representations for link prediction. Because the benchmarking framework is modular and easy to use, it is a valuable tool for researchers to test new ideas and explore insights in GNNs. The paper concludes by highlighting the importance of benchmarks in driving progress and in identifying universal, generalizable, and scalable architectures for graph machine learning.
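For intuition on the graph positional encodings mentioned above, one widely used variant attaches the k smallest non-trivial eigenvectors of the normalized graph Laplacian to each node as extra input features. The minimal sketch below uses NumPy/SciPy rather than the benchmark's DGL-based pipeline; the function name and the toy cycle graph are illustrative assumptions, not the paper's code.

```python
import numpy as np
import scipy.sparse as sp

def laplacian_positional_encoding(adj: sp.csr_matrix, k: int) -> np.ndarray:
    """k-dimensional Laplacian eigenvector PE per node (illustrative sketch)."""
    n = adj.shape[0]
    deg = np.asarray(adj.sum(axis=1)).ravel()
    with np.errstate(divide="ignore"):
        d_inv_sqrt = 1.0 / np.sqrt(deg)
    d_inv_sqrt[~np.isfinite(d_inv_sqrt)] = 0.0
    # Symmetric normalized Laplacian: L = I - D^{-1/2} A D^{-1/2}
    lap = sp.eye(n) - sp.diags(d_inv_sqrt) @ adj @ sp.diags(d_inv_sqrt)
    # Dense eigendecomposition is acceptable for medium-scale graphs; eigh
    # returns eigenvalues in ascending order, so we drop the trivial
    # constant eigenvector and keep the next k.
    _, eigvecs = np.linalg.eigh(lap.toarray())
    return eigvecs[:, 1:k + 1]

# Tiny example: a 6-node cycle graph, each node gets a 2-dimensional PE.
edges = [(i, (i + 1) % 6) for i in range(6)]
rows = [u for u, v in edges] + [v for u, v in edges]
cols = [v for u, v in edges] + [u for u, v in edges]
A = sp.csr_matrix((np.ones(len(rows)), (rows, cols)), shape=(6, 6))
print(laplacian_positional_encoding(A, k=2).shape)  # -> (6, 2)
```

Since eigenvectors are only defined up to sign, implementations of this kind of PE typically randomize the sign of the encodings during training.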