Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs


5 Apr 2024 | Md Ashiqur Rahman, Robert Joseph George, Mogab Elleithy, Daniel Leibovici, Zongyi Li, Boris Bonev, Colin White, Julius Berner, Raymond A. Yeh, Jean Kossaifi, Kamyar Azizzadenesheli, Anima Anandkumar
This paper introduces CoDA-NO, a neural operator architecture for efficiently learning and solving partial differential equations (PDEs) in multiphysics settings. The architecture extends the transformer attention mechanism to function spaces by computing attention across the codomain, letting a single model handle multiple PDE systems with different numbers of variables and different geometries. CoDA-NO tokenizes a function along its codomain (channel) dimension, treating each physical variable as a token, which enables self-supervised pretraining across different PDE systems. The model is pretrained on fluid dynamics data and can then be adapted to fluid-structure interaction systems that introduce additional variables. Its key contributions are the ability to learn representations of different PDE systems with a single model, strong few-shot learning performance, and accuracy on complex downstream tasks with limited data: CoDA-NO outperforms existing methods by over 36% on few-shot learning tasks, demonstrating sample efficiency and generalization. It is evaluated on fluid dynamics and fluid-structure interaction problems and shows robustness to missing variables and to adaptation to new PDEs.
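To make the codomain-attention idea concrete, the following is a minimal sketch of channel-wise attention in which each physical variable is one token. It is an illustrative simplification, not the authors' implementation: in CoDA-NO the query/key/value maps are themselves neural-operator layers, whereas here plain linear maps over a fixed grid stand in for them, and all names and shapes are assumptions.

```python
import torch
import torch.nn as nn


class CodomainAttention(nn.Module):
    """Illustrative channel-wise ("codomain") attention.

    Each physical variable (e.g. u, v, p) is treated as a single token whose
    value is its discretized function over the spatial grid. Attention is then
    computed across variables, so the same layer accepts any number of them.
    """

    def __init__(self, grid_size: int, d_model: int = 64):
        super().__init__()
        # In the paper these maps are neural operators, making the model
        # discretization-agnostic; fixed-resolution linear maps are used here
        # only to keep the sketch short.
        self.to_q = nn.Linear(grid_size, d_model)
        self.to_k = nn.Linear(grid_size, d_model)
        self.to_v = nn.Linear(grid_size, d_model)
        self.proj = nn.Linear(d_model, grid_size)

    def forward(self, funcs: torch.Tensor) -> torch.Tensor:
        # funcs: (batch, n_variables, n_gridpoints) -- one token per variable.
        q, k, v = self.to_q(funcs), self.to_k(funcs), self.to_v(funcs)
        scores = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        mixed = scores @ v                    # mix information across variables
        return funcs + self.proj(mixed)       # residual update per variable


# Usage: the same layer applies whether the input has 3 variables (u, v, p)
# or more (e.g. added displacement fields), since tokens index variables.
layer = CodomainAttention(grid_size=1024)
fluid_state = torch.randn(8, 3, 1024)        # batch of (u, v, p) samples
out = layer(fluid_state)                     # shape (8, 3, 1024)
```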
The architecture combines several components: permutation-equivariant neural operator layers, function-space normalization, and variable-specific positional encodings. Together, these allow the model to handle non-uniform geometries and to accommodate new physical variables introduced during fine-tuning. After pretraining on fluid dynamics data, the model can be fine-tuned on different PDE systems with minimal additional data. Experiments show that CoDA-NO outperforms baselines both in few-shot settings and when data are abundant, across fluid dynamics and fluid-structure interaction datasets. The paper also reports the model's computational cost and parameter count, highlighting its efficiency and scalability. Overall, CoDA-NO is presented as a versatile and effective foundation model for solving multiphysics PDEs.
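The sketch below illustrates how variable-specific positional encodings can support fine-tuning with new variables: each variable owns a learned encoding added to its token, and a variable unseen during pretraining (such as a solid displacement field) only requires registering a fresh encoding while the shared attention weights are reused. Class and variable names here are hypothetical, chosen for illustration under the assumptions of the previous sketch.

```python
import torch
import torch.nn as nn


class VariableEncodings(nn.Module):
    """Hypothetical variable-specific positional encodings.

    Each physical variable gets its own learnable encoding that is added to
    its token before codomain attention is applied.
    """

    def __init__(self, grid_size: int):
        super().__init__()
        self.grid_size = grid_size
        self.encodings = nn.ParameterDict()

    def register(self, name: str) -> None:
        # Fresh learnable encoding for a variable not seen during pretraining.
        if name not in self.encodings:
            self.encodings[name] = nn.Parameter(torch.zeros(self.grid_size))

    def forward(self, funcs: torch.Tensor, names: list[str]) -> torch.Tensor:
        # funcs: (batch, n_variables, n_gridpoints); names indexes the variables.
        enc = torch.stack([self.encodings[n] for n in names])  # (n_variables, G)
        return funcs + enc.unsqueeze(0)


# Pretraining on fluid variables, then fine-tuning with an extra displacement field.
pe = VariableEncodings(grid_size=1024)
for name in ["u", "v", "p"]:
    pe.register(name)
fluid = pe(torch.randn(8, 3, 1024), ["u", "v", "p"])

pe.register("dx")                            # new variable at fine-tuning time
fsi = pe(torch.randn(8, 4, 1024), ["u", "v", "p", "dx"])
```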