2024 | Zhanhong Ye, Xiang Huang, Leheng Chen, Hongsheng Liu, Zidong Wang, Bin Dong
PDEformer is a neural solver for partial differential equations (PDEs) designed to handle many equation forms within a single model. The paper introduces a foundation model for PDEs that represents each equation as a computational graph, integrating both symbolic and numerical information. A graph Transformer and an implicit neural representation (INR) then generate mesh-free solution predictions. After pretraining on diverse PDE data, PDEformer achieves zero-shot accuracy comparable to expert models on benchmark datasets, and it also shows promise in inverse problems such as recovering PDE coefficients.
The model is designed for 1D time-dependent PDEs on the domain (t, x) ∈ [0, 1] × [-1, 1] with periodic boundary conditions. The symbolic form of the PDE is represented as a computational graph, with nodes for unknown fields, scalar coefficients, initial conditions, and operations. Each node is assigned a feature vector, and the graph is processed by a graph Transformer to generate a latent code. An INR then uses this latent code to produce mesh-free predictions.
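To make the graph construction concrete, here is a minimal sketch of how a PDE such as the advection equation u_t + c·u_x = 0 could be encoded as a directed graph of typed nodes, with the symbolic structure carried by node types and edges and the numerical information carried by coefficient-node values. The `PDEGraph` helper, the node-type names, and the schema are illustrative assumptions, not the paper's exact specification.

```python
# Illustrative sketch of a PDE-as-computational-graph encoding.
# Node types and this schema are assumptions, not the paper's exact format.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class Node:
    id: int
    type: str                      # e.g. "UF", "dt", "dx", "coef", "mul", "add"
    value: Optional[float] = None  # numeric payload for scalar-coefficient nodes

@dataclass
class PDEGraph:
    nodes: List[Node] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)  # directed (src, dst)

    def add(self, node_type: str, value: Optional[float] = None) -> int:
        self.nodes.append(Node(len(self.nodes), node_type, value))
        return self.nodes[-1].id

    def link(self, src: int, dst: int) -> None:
        self.edges.append((src, dst))

# Encode u_t + c * u_x = 0 with c = 0.5: operations and the unknown field are
# nodes; the scalar coefficient enters as a node carrying its numeric value.
g = PDEGraph()
u = g.add("UF")                                           # unknown field u(t, x)
ut = g.add("dt"); g.link(u, ut)                           # du/dt
ux = g.add("dx"); g.link(u, ux)                           # du/dx
c = g.add("coef", 0.5)                                    # scalar coefficient
cux = g.add("mul"); g.link(c, cux); g.link(ux, cux)       # c * u_x
root = g.add("add"); g.link(ut, root); g.link(cux, root)  # u_t + c*u_x (= 0)
```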
The model is pretrained on a dataset of 500k samples, achieving a relative L² error of 0.0104 on the training set and 0.0128 on the test set. It performs well on forward problems, including the Burgers', advection, and 1D reaction-diffusion equations, outperforming baseline models. In inverse problems, PDEformer recovers PDE coefficients from noisy observations using particle swarm optimization (PSO), maintaining high accuracy even under high noise levels.
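The inverse-problem setup treats the pretrained model as a frozen forward surrogate and searches coefficient space with PSO to minimize the misfit against the noisy observations. The sketch below is a generic PSO loop under that setup; the `predict` stand-in, hyperparameters, and search bounds are placeholders, not the paper's settings.

```python
# Generic PSO loop for coefficient recovery against a frozen forward surrogate.
# Hyperparameters and bounds are illustrative defaults, not the paper's values.
import numpy as np

def pso_recover(loss, dim, n_particles=32, iters=200,
                lo=-1.0, hi=1.0, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))   # candidate coefficient vectors
    v = np.zeros_like(x)                          # particle velocities
    p_best = x.copy()                             # per-particle best positions
    p_val = np.array([loss(xi) for xi in x])
    g_best = p_best[p_val.argmin()].copy()        # swarm-wide best position
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([loss(xi) for xi in x])
        better = val < p_val
        p_best[better], p_val[better] = x[better], val[better]
        g_best = p_best[p_val.argmin()].copy()
    return g_best

# Usage: u_obs = noisy observations; predict(c) = frozen surrogate forward pass.
# coefs_hat = pso_recover(lambda c: np.mean((predict(c) - u_obs) ** 2), dim=3)
```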
In summary, the architecture couples a graph Transformer, which encodes the PDE's symbolic and numerical information, with an INR that decodes the resulting latent code into the solution. The model is efficient and adaptable, demonstrating robustness and versatility across a wide range of PDEs. The results show that PDEformer achieves high accuracy and can be fine-tuned with limited data, making it a promising foundation model for PDEs.
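As a rough picture of how such a two-stage pipeline fits together, the PyTorch sketch below uses a standard Transformer over node features as a stand-in for the graph Transformer (edge-structure biases omitted) and a plain coordinate MLP as the INR. The layer sizes, mean-pooling readout, and concatenation-based conditioning are assumptions for illustration, not the paper's architecture.

```python
# Sketch of the encoder-decoder pipeline: graph nodes -> latent code -> INR.
# A vanilla Transformer stands in for the graph Transformer; sizes are arbitrary.
import torch
import torch.nn as nn

class GraphEncoder(nn.Module):
    def __init__(self, node_dim=32, latent_dim=64, heads=4, layers=4):
        super().__init__()
        enc_layer = nn.TransformerEncoderLayer(node_dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, layers)
        self.proj = nn.Linear(node_dim, latent_dim)

    def forward(self, node_feats):        # (batch, n_nodes, node_dim)
        h = self.encoder(node_feats)      # attention over graph nodes
        return self.proj(h.mean(dim=1))   # pooled latent code (batch, latent_dim)

class INR(nn.Module):
    def __init__(self, latent_dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 + latent_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords, latent):    # coords: (batch, n_pts, 2) = (t, x)
        z = latent.unsqueeze(1).expand(-1, coords.shape[1], -1)
        return self.net(torch.cat([coords, z], dim=-1))  # (batch, n_pts, 1)

# Mesh-free evaluation: query the INR at arbitrary (t, x) points.
latent = GraphEncoder()(torch.randn(1, 10, 32))
u_pred = INR()(torch.rand(1, 256, 2), latent)
```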