WHEN AND WHY PINNs FAIL TO TRAIN: A NEURAL TANGENT KERNEL PERSPECTIVE

July 30, 2020 | Sifan Wang, Xinling Yu, Paris Perdikaris
This paper investigates the training dynamics of Physics-informed Neural Networks (PINNs) through the lens of the Neural Tangent Kernel (NTK). PINNs have attracted significant attention for their ability to solve a wide range of problems involving partial differential equations, but their training behavior remains poorly understood. The authors derive the NTK of PINNs and prove that, under appropriate conditions, it converges to a deterministic kernel that remains constant during training in the infinite-width limit. This enables a detailed analysis of the training dynamics and reveals a significant discrepancy between the convergence rates of the different components of the loss. To address this issue, the authors propose a novel gradient descent algorithm that adaptively calibrates the convergence rate of the total training error using the eigenvalues of the NTK. Numerical experiments validate the theoretical findings and demonstrate that the proposed algorithm improves the trainability and predictive accuracy of PINNs. The paper also provides insights into the spectral bias and convergence behavior of fully-connected PINNs, offering a new perspective on their training challenges.
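To make the adaptive scheme concrete, below is a minimal JAX sketch of NTK-trace-based loss balancing for a PINN on a 1D Poisson problem. It assumes the weights take the trace-ratio form lam_u = Tr(K) / Tr(K_uu) and lam_r = Tr(K) / Tr(K_rr), where K_uu and K_rr are the boundary and residual blocks of the PINN's NTK; the network, the test problem, and all helper names (u_net, pde_residual, ntk_trace) are illustrative assumptions, not the authors' code.

```python
import jax
import jax.numpy as jnp

# Illustrative fully-connected network u(x; theta) for scalar inputs.
def init_params(key, widths=(1, 64, 64, 1)):
    params = []
    for n_in, n_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        W = jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in)
        params.append((W, jnp.zeros(n_out)))
    return params

def u_net(params, x):
    h = jnp.array([x])
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

# PDE residual for u''(x) = f(x), with f chosen so u*(x) = sin(pi x).
def pde_residual(params, x):
    u_xx = jax.grad(jax.grad(lambda z: u_net(params, z)))(x)
    f = -(jnp.pi ** 2) * jnp.sin(jnp.pi * x)
    return u_xx - f

def ntk_trace(fn, params, xs):
    # Trace of the NTK block restricted to points xs:
    # sum_i || d fn(theta, x_i) / d theta ||^2, i.e. the sum of the
    # diagonal entries of that block.
    def sq_grad_norm(x):
        g = jax.grad(fn)(params, x)
        return sum(jnp.sum(leaf ** 2) for leaf in jax.tree_util.tree_leaves(g))
    return jnp.sum(jax.vmap(sq_grad_norm)(xs))

def weighted_loss(params, xb, ub, xr, lam_u, lam_r):
    bc = jax.vmap(lambda x: u_net(params, x))(xb) - ub
    res = jax.vmap(lambda x: pde_residual(params, x))(xr)
    return lam_u * jnp.mean(bc ** 2) + lam_r * jnp.mean(res ** 2)

params = init_params(jax.random.PRNGKey(0))
xb, ub = jnp.array([0.0, 1.0]), jnp.zeros(2)   # Dirichlet boundary data
xr = jnp.linspace(0.0, 1.0, 64)                # interior collocation points

# Trace-ratio weights, recomputed periodically during training.
tr_uu = ntk_trace(u_net, params, xb)
tr_rr = ntk_trace(pde_residual, params, xr)
lam_u = (tr_uu + tr_rr) / tr_uu
lam_r = (tr_uu + tr_rr) / tr_rr
loss = weighted_loss(params, xb, ub, xr, lam_u, lam_r)
```

By construction, the loss component whose kernel block has the smaller trace receives the larger weight, which tends to equalize the convergence rates of the boundary and residual terms rather than letting one dominate the gradient updates.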