Transferable Neural Networks for Partial Differential Equations

21 February 2024 | Zezhong Zhang, Feng Bao, Lili Ju, Guannan Zhang
The paper "Transferable Neural Networks for Partial Differential Equations" by Zezhong Zhang, Feng Bao, Lili Ju, and Guannan Zhang introduces a novel approach to enhance the transferability of neural networks for solving partial differential equations (PDEs). Traditional transfer learning methods for PDEs require extensive information about the target PDEs, such as their formulation or solution data, which can be time-consuming to obtain and limits the applicability of the pre-trained model. The authors propose a method to construct a transferable neural feature space for shallow neural networks without using any PDE information. This involves re-parameterizing the hidden neurons to separate their location and shape parameters, and using auxiliary functions to tune the feature space. Theoretical analysis ensures that the neurons are uniformly distributed in the unit ball. The proposed feature space is then used with existing least-squares solvers to obtain the weights of the output layer. Extensive numerical experiments demonstrate significant improvements in transferability and accuracy compared to state-of-the-art methods, showing that the same feature space can be effectively applied to various PDEs with different domains and boundary conditions.