This paper presents a fast JAX-based implementation of grid-dependent Physics-Informed Kolmogorov-Arnold Networks (PIKANs) for solving partial differential equations (PDEs). The authors propose an adaptive training scheme for PIKANs that incorporates known MLP-based PINN techniques, introduce an adaptive state transition scheme to avoid loss function peaks between grid updates, and outline a methodology for designing PIKANs with alternative basis functions. Through comparative experiments, they demonstrate that these adaptive features significantly enhance training efficiency and solution accuracy, highlighting the potential of PIKANs as a superior alternative for scientific and engineering applications.
The paper discusses the challenges of training KANs, particularly the computational expense of learnable B-spline activation functions. More computationally efficient basis functions have been proposed as replacements, such as radial basis functions, Chebyshev polynomials, wavelets, and ReLU-based functions; however, the authors argue that removing the grid dependency of PIKANs may adversely affect training, leading to slower convergence and potentially less accurate solutions. They emphasize the importance of preserving grid dependency to enable more adaptive training.
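For context, the sketch below shows how grid-dependent B-spline basis functions can be evaluated with the Cox-de Boor recursion in JAX. It is a generic illustration of why B-spline activations are both costlier than fixed closed-form bases and refinable via the knot grid; it is not the authors' implementation, and the knot vector and spline order are arbitrary choices.

```python
# Minimal sketch (not the authors' implementation): evaluating grid-dependent
# B-spline basis functions with the Cox-de Boor recursion in JAX. The explicit
# dependence on the knot grid is what makes these activations more expensive
# than fixed closed-form bases, but also what permits grid refinement during
# training. Knot vector and spline order below are purely illustrative.
import jax.numpy as jnp

def bspline_basis(x, grid, k=3):
    """Evaluate all order-k B-spline basis functions at the points x.

    x:    (N,) evaluation points
    grid: (G,) sorted knot vector
    Returns an array of shape (N, G - k - 1).
    """
    # Degree-0 bases: indicator of each knot interval [grid[i], grid[i+1]).
    bases = ((x[:, None] >= grid[None, :-1]) &
             (x[:, None] < grid[None, 1:])).astype(x.dtype)
    # Cox-de Boor recursion raises the degree one step at a time.
    for d in range(1, k + 1):
        left = (x[:, None] - grid[None, :-(d + 1)]) / (
            grid[None, d:-1] - grid[None, :-(d + 1)])
        right = (grid[None, d + 1:] - x[:, None]) / (
            grid[None, d + 1:] - grid[None, 1:-d])
        bases = left * bases[:, :-1] + right * bases[:, 1:]
    return bases

x = jnp.linspace(-1.0, 1.0, 8)
grid = jnp.linspace(-1.2, 1.2, 12)      # uniform knot vector, illustrative only
print(bspline_basis(x, grid).shape)     # (8, 8)
```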
The authors introduce a new, publicly available computational framework for KANs, developed using the JAX and Flax Python libraries. They study the abrupt jumps that appear in the loss function immediately after grid extensions and introduce an adaptive transition scheme to address them. They adapt loss re-weighting and collocation resampling schemes to the grid-dependent framework for training PIKANs with relatively small architectures. They also propose a general direction for designing (PI)KANs with alternative basis functions, emphasizing the importance of preserving their dependency on the grid to enable more adaptive training.
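For concreteness, here is a minimal sketch of one common collocation resampling strategy (residual-based adaptive sampling) of the kind the authors adapt to their grid-dependent setting. It is a generic JAX variant, not necessarily the paper's exact scheme; `residual_fn` and the toy residual used in the usage example are hypothetical stand-ins.

```python
# Minimal sketch of residual-based collocation resampling: candidate points
# with large PDE residuals are more likely to be kept for the next training
# phase. Generic variant, not necessarily the authors' exact scheme.
import jax
import jax.numpy as jnp

def resample_collocation(key, residual_fn, params, candidates, n_points, temp=1.0):
    """Draw collocation points from a candidate pool, favouring large residuals.

    candidates: (M, d) pool of candidate points in the domain
    n_points:   number of collocation points to keep
    temp:       exponent controlling how strongly residuals bias the sampling
    """
    res = jnp.abs(residual_fn(params, candidates))   # (M,) residual magnitudes
    probs = res ** temp
    probs = probs / jnp.sum(probs)                   # sampling distribution
    idx = jax.random.choice(key, candidates.shape[0], shape=(n_points,), p=probs)
    return candidates[idx]

# Toy usage with a stand-in residual function (no trained model involved).
key = jax.random.PRNGKey(0)
cands = jax.random.uniform(key, (1000, 2), minval=-1.0, maxval=1.0)
toy_residual = lambda params, pts: jnp.sin(3.0 * pts[:, 0]) * pts[:, 1]
new_pts = resample_collocation(key, toy_residual, None, cands, n_points=256)
print(new_pts.shape)   # (256, 2)
```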
The paper presents results for four different PDEs: the diffusion equation, the Helmholtz equation, Burgers' equation, and the Allen–Cahn equation. The results show that the adaptive training techniques significantly improve the accuracy and efficiency of PIKANs compared to traditional MLP-based PINNs. The authors also demonstrate the effectiveness of their adaptive techniques in handling grid extensions and maintaining stable training performance. The paper concludes that PIKANs offer a promising alternative to traditional PINNs for solving PDEs, particularly in scientific and engineering applications.
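As an illustration of how physics-informed losses for such benchmarks are typically formed, the sketch below builds a residual loss for Burgers' equation, u_t + u u_x - nu u_xx = 0, with JAX automatic differentiation. Here `u_fn` is a hypothetical scalar network (MLP or KAN) and the viscosity value is a placeholder; this is a generic PINN loss, not the paper's implementation.

```python
# Minimal sketch of a physics-informed residual loss for Burgers' equation,
# u_t + u*u_x - nu*u_xx = 0, using JAX autodiff. `u_fn(params, x, t)` is a
# hypothetical scalar network; the default viscosity is only a placeholder.
import jax
import jax.numpy as jnp

def burgers_residual(params, u_fn, x, t, nu=0.01 / jnp.pi):
    u = lambda x_, t_: u_fn(params, x_, t_)
    u_t = jax.grad(u, argnums=1)(x, t)
    u_x = jax.grad(u, argnums=0)(x, t)
    u_xx = jax.grad(jax.grad(u, argnums=0), argnums=0)(x, t)
    return u_t + u(x, t) * u_x - nu * u_xx

def pde_loss(params, u_fn, xs, ts):
    # Mean squared residual over a batch of collocation points.
    res_fn = lambda x_, t_: burgers_residual(params, u_fn, x_, t_)
    return jnp.mean(jax.vmap(res_fn)(xs, ts) ** 2)

# Toy usage with a stand-in "network": u(x, t) = a * sin(pi x) * exp(-t).
toy_u = lambda params, x, t: params * jnp.sin(jnp.pi * x) * jnp.exp(-t)
xs = jnp.linspace(-1.0, 1.0, 16)
ts = jnp.linspace(0.0, 1.0, 16)
print(pde_loss(1.0, toy_u, xs, ts))
```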