KAGNNs: Kolmogorov-Arnold Networks meet Graph Learning

1 Jul 2024 | Roman Bresson, Giannis Nikolentzos, George Panagopoulos, Michail Chatzianastasis, Michalis Vazirgiannis, Jun Pang
This paper introduces Kolmogorov-Arnold Networks (KANs) as an alternative to Multi-Layer Perceptrons (MLPs) inside Graph Neural Networks (GNNs). KANs are grounded in the Kolmogorov-Arnold representation theorem, which states that any continuous multivariate function can be written as sums and compositions of univariate functions. Because the learned univariate functions can be inspected directly, KANs are more interpretable than MLPs, and they are potentially more accurate in low-dimensional settings.

The authors compare KAN-based GNNs (KAGIN and KAGCN) against their MLP-based counterparts (GIN and GCN) on node classification, graph classification, and graph regression tasks. KAN-based models perform on par with MLP-based models on the classification tasks but show a clear advantage on graph regression, where KAGIN and KAGCN outperform GIN and GCN on several datasets. However, KANs are computationally more expensive than MLPs, and their performance is sensitive to hyperparameters such as grid size and spline order.

Despite these costs, KANs offer greater expressive power than MLPs in some cases, especially on continuous features, and the paper highlights their ability to accurately fit smooth functions as well as their interpretability. The authors conclude that KAN-based GNNs are valid alternatives to traditional MLP-based models and deserve further investigation.
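For context, the Kolmogorov-Arnold representation theorem referenced above can be stated as follows (this is the standard form of the theorem; the notation is illustrative, not taken from the paper):

```latex
% Kolmogorov-Arnold representation theorem: every continuous
% f : [0,1]^n -> R decomposes into univariate functions.
f(x_1, \dots, x_n) \;=\; \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \varphi_{q,p}(x_p) \right)
```

KAN layers relax this exact two-level structure into stacked layers of learnable univariate functions, typically parameterized as B-splines, trading the theorem's exactness guarantee for trainability.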
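To make the KAGIN idea concrete, here is a minimal, self-contained PyTorch sketch of a GIN-style layer whose MLP is replaced by a small KAN. This is an illustration under stated assumptions, not the authors' implementation: the names KANLayer and KAGINConv are hypothetical, a dense adjacency matrix stands in for a proper graph library, and Gaussian radial basis functions stand in for the B-spline bases KANs typically use.

```python
# Sketch of a KAN-style GIN layer ("KAGIN") in plain PyTorch.
# Assumption: each learnable univariate function is a linear combination
# of fixed Gaussian radial bases on a grid (a stand-in for B-splines).
import torch
import torch.nn as nn


class KANLayer(nn.Module):
    """Maps R^in -> R^out; every (input, output) pair carries its own
    learnable univariate function, parameterized over an RBF grid."""

    def __init__(self, in_dim, out_dim, num_bases=8, grid_min=-2.0, grid_max=2.0):
        super().__init__()
        # Fixed, evenly spaced basis centers shared by all edge functions.
        self.register_buffer("centers", torch.linspace(grid_min, grid_max, num_bases))
        self.inv_width = num_bases / (grid_max - grid_min)
        # One coefficient vector per (output, input) pair.
        self.coeffs = nn.Parameter(torch.randn(out_dim, in_dim, num_bases) * 0.1)

    def forward(self, x):                        # x: (N, in_dim)
        # Evaluate every basis function at every input coordinate.
        bases = torch.exp(-((x.unsqueeze(-1) - self.centers) * self.inv_width) ** 2)
        # Sum phi_{out,in}(x_in) over inputs: contract input and basis dims.
        return torch.einsum("nib,oib->no", bases, self.coeffs)


class KAGINConv(nn.Module):
    """Standard GIN update h_v <- f((1 + eps) * h_v + sum_{u in N(v)} h_u),
    with the usual MLP f replaced by a two-layer KAN."""

    def __init__(self, dim, hidden=16):
        super().__init__()
        self.eps = nn.Parameter(torch.zeros(1))
        self.kan = nn.Sequential(KANLayer(dim, hidden), KANLayer(hidden, dim))

    def forward(self, x, adj):                   # adj: dense (N, N) adjacency
        agg = adj @ x                            # sum over neighbor features
        return self.kan((1 + self.eps) * x + agg)


# Toy usage: 5 nodes on a ring graph, 4-dimensional features.
adj = torch.zeros(5, 5)
for i in range(5):
    adj[i, (i + 1) % 5] = adj[i, (i - 1) % 5] = 1.0
x = torch.randn(5, 4)
print(KAGINConv(dim=4)(x, adj).shape)            # torch.Size([5, 4])
```

The grid range and number of bases here play the role of the grid-size and spline-order hyperparameters that the paper flags as sensitive, so in practice they would need tuning per dataset.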