5 Apr 2024 | Md Ashiqur Rahman, Robert Joseph George, Mogab Elleithy, Daniel Leibovici, Zongyi Li, Boris Bonev, Colin White, Julius Berner, Raymond A. Yeh, Jean Kossaifi, Kamyar Azizzadenesheli, Anima Anandkumar
The paper introduces Codomain Attention Neural Operator (CoDA-NO), a novel neural operator architecture designed to solve multiphysics partial differential equations (PDEs) with complex geometries and interactions between physical variables. CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning and pretraining of multiple PDE systems. The architecture extends positional encoding, self-attention, and normalization layers to function space, allowing it to learn representations of different PDE systems with a single model. Evaluations on fluid flow simulations and fluid-structure interaction tasks show that CoDA-NO outperforms existing methods by over 36% in few-shot learning settings, demonstrating its effectiveness in handling complex downstream tasks with limited data. The code for CoDA-NO is available at <https://github.com/ashiq24/CoDA-NO>.
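To make the "tokenize along the codomain" idea concrete, here is a minimal numpy sketch of channel-wise self-attention: each physical variable (channel) of a discretized function is treated as one token, and attention mixes information across variables rather than across spatial positions. This is an illustration only — the projection matrices `Wq`, `Wk`, `Wv` below are hypothetical finite-dimensional stand-ins for the operator-valued projections CoDA-NO applies in function space.

```python
import numpy as np

def codomain_attention(u, Wq, Wk, Wv):
    """Channel-wise ("codomain") self-attention sketch.

    u:   (C, N) array -- C physical variables (the tokens), each a
         function discretized on N grid points.
    Wq, Wk, Wv: (N, d) projection matrices (hypothetical stand-ins
         for the neural-operator projections used in the paper).
    Returns a (C, d) array of attention-mixed channel tokens.
    """
    Q, K, V = u @ Wq, u @ Wk, u @ Wv            # (C, d) features per token
    scores = Q @ K.T / np.sqrt(K.shape[1])      # (C, C) token-token scores
    # numerically stable softmax over the token axis
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V                          # each channel attends to all others

# Tiny usage example: 3 variables (e.g. u_x, u_y, pressure) on 16 grid points.
rng = np.random.default_rng(0)
u = rng.normal(size=(3, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 8)) for _ in range(3))
out = codomain_attention(u, Wq, Wk, Wv)         # shape (3, 8)
```

Because the tokens are variables rather than spatial patches, the same module can, in principle, accept PDE systems with different numbers of physical variables — which is what enables pretraining across multiple PDE systems.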