10 Jun 2024 | Mehrdad Kiamari, Mohammad Kiamari, Bhaskar Krishnamachari
This paper introduces Graph Kolmogorov-Arnold Networks (GKAN), an innovative neural network architecture that extends the principles of Kolmogorov-Arnold Networks (KAN) to graph-structured data. GKANs use learnable univariate functions instead of fixed linear weights, enabling more flexible and efficient processing of graph-based data. Unlike traditional Graph Convolutional Networks (GCNs), which rely on fixed convolutional architectures, GKANs implement learnable spline-based functions between layers, transforming the way information is processed across the graph structure. Two architectures are proposed: one where the learnable functions are applied after aggregation and another where they are applied before aggregation. Empirical evaluations on the Cora dataset show that GKANs achieve higher accuracy in semi-supervised learning tasks compared to GCNs. For example, with 100 features, GKANs achieve 61.76% accuracy compared to 53.5% for GCNs, and with 200 features, 67.66% versus 61.24%. The paper also presents results on the impact of various parameters such as the number of hidden nodes, grid size, and spline order on the performance of GKAN. GKANs are shown to be more efficient and effective in graph-based learning tasks, offering a new approach to graph representation learning that could serve as a foundation for various graph deep learning schemes.
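To make the two proposed variants concrete, below is a minimal sketch of a GKAN-style layer, assuming a simplified learnable univariate function built from Gaussian bumps as a stand-in for the B-spline parameterization used in KAN. The class names (LearnableUnivariate, GKANLayer), the grid_size/grid_range parameters, and the toy adjacency normalization are illustrative assumptions, not the authors' implementation; the only point carried over from the abstract is the ordering of the learnable functions relative to neighborhood aggregation.

```python
# Hedged sketch, not the paper's code: learnable per-edge 1-D functions plus the
# two layer orderings described in the abstract (transform-after-aggregate vs.
# transform-before-aggregate).
import torch
import torch.nn as nn


class LearnableUnivariate(nn.Module):
    """Learnable univariate function per (input, output) pair.

    Each scalar input is expanded over a fixed grid of Gaussian bumps (an
    assumption standing in for KAN's B-spline bases), and a learnable
    coefficient tensor mixes the expansions into the output features."""

    def __init__(self, in_dim, out_dim, grid_size=8, grid_range=(-2.0, 2.0)):
        super().__init__()
        self.register_buffer("centers", torch.linspace(*grid_range, grid_size))  # (G,)
        self.width = (grid_range[1] - grid_range[0]) / grid_size
        # one coefficient per (input feature, output feature, basis function)
        self.coef = nn.Parameter(0.1 * torch.randn(in_dim, out_dim, grid_size))

    def forward(self, x):                                   # x: (N, in_dim)
        # evaluate every basis function on every scalar input -> (N, in_dim, G)
        basis = torch.exp(-((x.unsqueeze(-1) - self.centers) / self.width) ** 2)
        # sum over input features and basis functions -> (N, out_dim)
        return torch.einsum("nig,iog->no", basis, self.coef)


class GKANLayer(nn.Module):
    """One GKAN-style layer.

    aggregate_first=True  -> Architecture 1: aggregate neighbors, then apply
                             the learnable functions.
    aggregate_first=False -> Architecture 2: apply the learnable functions,
                             then aggregate neighbors."""

    def __init__(self, in_dim, out_dim, aggregate_first=True, **kan_kwargs):
        super().__init__()
        self.phi = LearnableUnivariate(in_dim, out_dim, **kan_kwargs)
        self.aggregate_first = aggregate_first

    def forward(self, x, a_hat):                            # a_hat: normalized adjacency (N, N)
        if self.aggregate_first:
            return self.phi(a_hat @ x)
        return a_hat @ self.phi(x)


if __name__ == "__main__":
    n, d_in, d_hidden = 5, 100, 16
    x = torch.randn(n, d_in)
    a = torch.eye(n) + torch.rand(n, n).round()             # toy adjacency with self-loops
    deg = a.sum(dim=1)
    a_hat = a / deg.sqrt().unsqueeze(1) / deg.sqrt().unsqueeze(0)  # symmetric normalization
    layer = GKANLayer(d_in, d_hidden, aggregate_first=True)
    print(layer(x, a_hat).shape)                             # torch.Size([5, 16])
```

In this sketch the grid_size and spline-order analogues correspond to the hyperparameters whose effect the paper studies; swapping aggregate_first toggles between the two proposed architectures while keeping the learnable-function machinery identical.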