RiemannONets: Interpretable Neural Operators for Riemann Problems

16 Apr 2024 | Ahmad Peyvan, Vivek Oommen, Ameya D. Jagtap, George Em Karniadakis
The paper "RiemannONets: Interpretable Neural Operators for Riemann Problems" by Ahmad Peyvan, Vivek Oommen, Ameya D. Jagtap, and George Em Karniadakis explores the use of neural operators to solve Riemann problems in compressible flows with extreme pressure jumps. The authors focus on two neural operators: DeepONet and U-Net. DeepONet is trained in a two-stage process: first, a basis is extracted from the trunk net and orthonormalized; then this basis is used to train the branch net. This approach significantly improves the accuracy, efficiency, and robustness of DeepONet. The U-Net, a multiscale convolutional network, is conditioned on the initial pressure and temperature states to enhance its performance. The study compares the accuracy of these neural operators across low, intermediate, and very high pressure ratios, demonstrating that both methods can achieve very accurate solutions. The results show that the two-step training procedure for DeepONet outperforms both vanilla DeepONet and U-Net in accuracy and computational efficiency. The paper also analyzes the hierarchical and interpretable basis functions learned by both neural operators, providing insight into their representation capabilities.

Key contributions of the study include:

- Leveraging deep neural operators to map input pressure ratios to final solutions.
- Investigating the effectiveness of adaptive activation functions in improving accuracy.
- Comparing the performance of DeepONet and U-Net across different pressure ratios.
- Analyzing the basis functions learned by both neural operators to understand their representation capabilities.

The source code and data for the experiments are available at <https://github.com/apey236/RiemannONet/tree/main>.
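To make the two-stage idea concrete, here is a minimal NumPy sketch of its linear-algebra core: orthonormalize the trunk-net outputs via a QR decomposition, then compute the projection coefficients that the branch net would subsequently be trained to predict. All shapes, names, and data below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sizes: 100 spatial points, 16 trunk basis functions,
# 32 training samples (one per input pressure ratio). Synthetic data.
rng = np.random.default_rng(0)
n_x, p, n_s = 100, 16, 32

T = rng.normal(size=(n_x, p))    # trunk-net outputs evaluated on the grid
U = rng.normal(size=(n_x, n_s))  # solution snapshots, one column per sample

# Stage 1: orthonormalize the trunk basis, T = Q R, so the columns of Q
# form an orthonormal basis spanning the same space as the trunk outputs.
Q, R = np.linalg.qr(T)

# Stage 2: with the basis frozen, each sample reduces to a coefficient
# vector; A = Q^T U gives the least-squares targets the branch net
# would be trained to reproduce from the input pressure ratio.
A = Q.T @ U

# Reconstruction of the snapshots from the orthonormal basis.
U_hat = Q @ A
```

In this sketch the branch net is replaced by the exact projection `A`; in practice the branch network is trained to map each input state to its column of `A`, so accuracy hinges on the frozen basis `Q` being well-conditioned, which is what the orthonormalization step provides.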