KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning

1 Jul 2024 | Roman Bresson, Giannis Nikolentzos, George Panagopoulos, Michail Chatzianastasis, Jun Pang, Michalis Vazirgiannis
This paper explores the potential of Kolmogorov-Arnold Networks (KANs) in graph learning tasks, comparing their performance against Multi-Layer Perceptrons (MLPs). KANs, based on the Kolmogorov-Arnold representation theorem, offer a promising alternative to MLPs by using learnable activations based on B-splines and summations. The authors introduce two GNN models, KAGIN and KAGCN, which use KANs to update node representations in message passing layers. Extensive experiments on node classification, graph classification, and graph regression datasets show that while KANs perform similarly to MLPs in classification tasks, they exhibit a clear advantage in graph regression tasks. The preliminary results suggest that KANs could be more effective than MLPs in regression tasks, making them a valid alternative to traditional MLP-based models in graph learning. The paper also discusses potential advantages of KANs over MLPs, such as better handling of smooth functions and improved interpretability, and leaves further investigation of these advantages for future work.
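To make the core idea concrete, the following is a minimal sketch (not the authors' implementation) of a KAN layer dropped into a GCN-style update. It simplifies in two labeled ways: the univariate functions use a piecewise-linear "hat" basis (a degree-1 B-spline) instead of the cubic B-splines used in KANs, and the names `KANLayer` and `kagcn_update` are illustrative, not from the paper.

```python
import numpy as np

def hat_basis(x, grid):
    """Degree-1 B-spline ("hat") basis on a fixed grid.

    A simplified stand-in for the cubic B-splines used in KANs.
    x: (n,), grid: (g,) -> (n, g)
    """
    out = np.zeros((x.shape[0], grid.shape[0]))
    h = grid[1] - grid[0]
    for j, c in enumerate(grid):
        out[:, j] = np.clip(1.0 - np.abs(x - c) / h, 0.0, None)
    return out

class KANLayer:
    """One KAN layer (sketch): each (input, output) pair carries its own
    learnable univariate function phi_ij(x) = sum_k c_ijk * B_k(x),
    and each output is the sum of these functions over the inputs."""

    def __init__(self, d_in, d_out, grid_size=8, rng=None):
        rng = np.random.default_rng(rng)
        self.grid = np.linspace(-2.0, 2.0, grid_size)
        # one coefficient vector per (input, output) edge
        self.coef = rng.normal(0.0, 0.1, size=(d_in, d_out, grid_size))

    def __call__(self, X):
        # X: (n, d_in) -> (n, d_out)
        n, d_in = X.shape
        out = np.zeros((n, self.coef.shape[1]))
        for i in range(d_in):
            B = hat_basis(X[:, i], self.grid)   # (n, g)
            out += B @ self.coef[i].T           # sum of phi_i over inputs
        return out

def kagcn_update(A, X, layer):
    """KAGCN-style message passing (sketch): symmetric-normalized
    neighborhood aggregation with self-loops, followed by a KAN layer
    in place of the usual linear map + fixed activation."""
    A_hat = A + np.eye(A.shape[0])
    deg = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    return layer(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X)
```

The only change relative to a standard GCN layer is the last line of `kagcn_update`: the aggregated features pass through learnable univariate spline functions rather than a weight matrix followed by ReLU.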