20 Jun 2024 | Eleonora Poeta, Flavio Giobergia, Eliana Pastor, Tania Cerquitelli, Elena Baralis
This paper presents a benchmarking study comparing Kolmogorov-Arnold Networks (KANs) and Multi-Layer Perceptrons (MLPs) on tabular datasets. KANs, inspired by the Kolmogorov-Arnold representation theorem, place learnable activation functions on edges rather than fixed activations on nodes, a design intended to improve both accuracy and interpretability. The study evaluates task performance and training times across several datasets from the UCI Machine Learning Repository, including Breast Cancer, Poker, and Musk. Results show that KANs achieve superior or comparable accuracy and F1 scores, particularly on datasets with many instances, suggesting robust handling of complex data. However, KANs incur a higher computational cost than MLPs of similar size. The study concludes that KANs are a viable alternative to MLPs, with potential for broader application in real-world contexts, especially for complex datasets. Future research could explore KANs in regression tasks and on diverse data types.
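To make the architectural difference concrete, the sketch below shows a toy KAN-style layer in which every edge carries its own learnable 1-D function, in contrast to an MLP layer's fixed activation applied after a linear map. This is an illustrative assumption, not the paper's implementation: real KANs parameterize edge functions with B-splines, while this toy uses Gaussian radial basis functions for brevity.

```python
import numpy as np

class ToyKANLayer:
    """Toy KAN-style layer (illustrative sketch, not the paper's code).

    Each edge (i -> o) carries its own learnable 1-D function
        phi_oi(x) = sum_k c_oik * exp(-(x - t_k)^2 / (2 * s^2)),
    and the layer output is the sum of the edge functions over inputs,
    with no additional fixed nonlinearity (unlike an MLP layer).
    """

    def __init__(self, in_dim, out_dim, n_basis=8, rng=None):
        rng = rng or np.random.default_rng(0)
        self.centers = np.linspace(-1.0, 1.0, n_basis)  # basis knots t_k
        self.width = 2.0 / n_basis                      # basis scale s
        # One coefficient vector per edge: shape (out_dim, in_dim, n_basis).
        self.coef = rng.normal(scale=0.1, size=(out_dim, in_dim, n_basis))

    def forward(self, x):
        # x: (batch, in_dim). Evaluate all basis functions at each input value.
        b = np.exp(-((x[..., None] - self.centers) ** 2)
                   / (2 * self.width ** 2))             # (batch, in_dim, n_basis)
        # Apply each edge's function and sum contributions over input edges.
        return np.einsum('bik,oik->bo', b, self.coef)   # (batch, out_dim)

layer = ToyKANLayer(in_dim=3, out_dim=2)
y = layer.forward(np.zeros((4, 3)))
print(y.shape)  # (4, 2)
```

Because the coefficients `c_oik` are the trainable parameters, interpretability comes from inspecting each learned edge function directly, which is the property the abstract attributes to KANs.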