July 30, 2020 | Sifan Wang, Xinling Yu, Paris Perdikaris
Physics-informed neural networks (PINNs) have gained attention for their ability to solve partial differential equations (PDEs) across scientific and engineering applications. However, their training via gradient descent is often unstable, especially when PDE solutions contain high-frequency or multi-scale features.

This paper investigates the training dynamics of PINNs through the lens of the Neural Tangent Kernel (NTK), which characterizes the behavior of fully-connected neural networks in the infinite-width limit. The authors derive the NTK of PINNs and show that, under appropriate conditions, it converges to a deterministic kernel that remains constant during training. This makes it possible to analyze training dynamics through the limiting NTK, revealing a discrepancy in the convergence rates of the different loss components (the boundary/initial-condition term and the PDE residual term). To address this, the authors propose a novel gradient descent algorithm that uses the eigenvalues of the NTK to adaptively calibrate the convergence rate of the total training error.

Numerical experiments validate the theory and the effectiveness of the proposed algorithm. The paper also highlights the "spectral bias" of fully-connected networks, which limits their ability to learn high-frequency functions. The NTK framework provides a new perspective for analyzing the convergence of PINNs and enables the design of more effective training algorithms, contributing to a better understanding of their limitations and offering a path toward improved trainability and accuracy.
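To make the adaptive calibration idea concrete, below is a minimal JAX sketch of NTK-balanced loss weights for a toy 1D Poisson problem. It assumes the weights are set from the traces of the NTK blocks (the sums of their eigenvalues), with one block built from the network outputs at boundary points and one from the PDE residual at collocation points; the small tanh network, the forcing term, and all function names here are illustrative choices, not the authors' exact implementation.

```python
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree

# Toy fully-connected network u_theta(x); params is a list of (W, b) pairs.
def init_params(key, layers=(1, 50, 50, 1)):
    params = []
    for d_in, d_out in zip(layers[:-1], layers[1:]):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (d_in, d_out)) / jnp.sqrt(d_in)
        params.append((W, jnp.zeros(d_out)))
    return params

def net(params, x):
    h = x
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b).squeeze()

def f(x):
    # Illustrative forcing: u_xx = f with exact solution u(x) = sin(pi x).
    return -jnp.pi ** 2 * jnp.sin(jnp.pi * x)

def residual(params, x):
    # PDE residual u_xx - f at a single collocation point x (shape (1,)).
    u_xx = jax.grad(jax.grad(lambda z: net(params, jnp.reshape(z, (1,)))))(x[0])
    return u_xx - f(x[0])

# Per-example parameter Jacobians give the rows of J_u and J_r; the NTK
# blocks are K_uu = J_u J_u^T and K_rr = J_r J_r^T.
def jac_rows(fun, params, xs):
    def single(x):
        g = jax.grad(lambda p: fun(p, x))(params)
        flat, _ = ravel_pytree(g)
        return flat
    return jax.vmap(single)(xs)

def ntk_trace_weights(params, x_b, x_r):
    J_u = jac_rows(net, params, x_b)        # boundary/data points
    J_r = jac_rows(residual, params, x_r)   # collocation points
    tr_uu = jnp.sum(J_u * J_u)              # trace(K_uu) = sum of its eigenvalues
    tr_rr = jnp.sum(J_r * J_r)              # trace(K_rr)
    tr_K = tr_uu + tr_rr
    # Larger weight on the term whose NTK block has smaller trace,
    # balancing the convergence rates of the two loss components.
    return tr_K / tr_uu, tr_K / tr_rr

def weighted_loss(params, x_b, u_b, x_r, lam_u, lam_r):
    pred_b = jax.vmap(lambda x: net(params, x))(x_b)
    res = jax.vmap(lambda x: residual(params, x))(x_r)
    return lam_u * jnp.mean((pred_b - u_b) ** 2) + lam_r * jnp.mean(res ** 2)
```

In a training loop one would periodically recompute `(lam_u, lam_r)` with `ntk_trace_weights` and take gradient steps on `weighted_loss` with those weights held fixed; recomputing only every few hundred iterations keeps the cost of the Jacobian evaluations modest.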