AN OPERATOR LEARNING PERSPECTIVE ON PARAMETER-TO-OBSERVABLE MAPS

6 Jun 2024 | DANIEL ZHENGYU HUANG, NICHOLAS H. NELSEN, AND MARGARET TRAUTNER
This paper introduces Fourier Neural Mappings (FNMs), a framework for learning parameter-to-observable (PtO) maps in scientific machine learning. FNMs extend neural operators to accommodate finite-dimensional input and output spaces while retaining the ability to learn maps between infinite-dimensional function spaces. The paper establishes universal approximation theorems for FNMs and, for linear function-to-scalar maps, analyzes the sample complexity of end-to-end learning, which regresses the scalar observable directly on the input parameter, versus full-field learning, which first learns the underlying solution map from function-valued data and then applies the known observation functional. A theoretical analysis based on Bayesian nonparametric regression of linear functionals shows that full-field learning can be more data-efficient than end-to-end learning in certain regimes.

Numerical experiments demonstrate the effectiveness of FNMs in approximating nonlinear PtO maps for three problems: an advection–diffusion equation, flow over an airfoil, and an elliptic homogenization problem. The paper also relates FNMs to other neural operator architectures, notably the Fourier Neural Operator (FNO), and highlights the role of universal approximation guarantees in justifying these models. Together, the results show that FNMs provide a flexible and efficient framework for learning PtO maps in scientific machine learning applications.
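To make the architecture concrete, below is a minimal, hypothetical PyTorch sketch of a function-to-scalar FNM in one dimension. The class names, layer widths, and the mean-over-grid output head are illustrative assumptions, not the authors' implementation; the structure follows the general FNO recipe (lift, spectral convolution layers, project), with the final projection replaced by a learned linear functional that produces a finite-dimensional observable.

```python
# Minimal sketch of a 1D function-to-scalar Fourier Neural Mapping.
# Illustrative only: names, widths, and the integration-based output
# head are assumptions, not the paper's reference implementation.
import torch
import torch.nn as nn


class SpectralConv1d(nn.Module):
    """Fourier layer: FFT, keep the lowest `modes` frequencies,
    multiply by learned complex weights, inverse FFT."""

    def __init__(self, channels: int, modes: int):
        super().__init__()
        self.modes = modes
        scale = 1.0 / channels
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, grid)
        x_ft = torch.fft.rfft(x)
        out_ft = torch.zeros_like(x_ft)
        out_ft[:, :, : self.modes] = torch.einsum(
            "bim,iom->bom", x_ft[:, :, : self.modes], self.weights
        )
        return torch.fft.irfft(out_ft, n=x.size(-1))


class FourierNeuralMapping1d(nn.Module):
    """Function-to-scalar map: lift, Fourier layers, then a learned
    linear functional (pointwise projection + mean over the grid,
    a discrete analogue of an L^2 inner product)."""

    def __init__(self, width: int = 32, modes: int = 12, n_layers: int = 2):
        super().__init__()
        self.lift = nn.Conv1d(1, width, kernel_size=1)
        self.fourier = nn.ModuleList(
            [SpectralConv1d(width, modes) for _ in range(n_layers)]
        )
        self.skips = nn.ModuleList(
            [nn.Conv1d(width, width, kernel_size=1) for _ in range(n_layers)]
        )
        self.project = nn.Conv1d(width, 1, kernel_size=1)

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        # u: (batch, grid) samples of the input function on a uniform grid
        v = self.lift(u.unsqueeze(1))
        for conv, skip in zip(self.fourier, self.skips):
            v = torch.relu(conv(v) + skip(v))
        return self.project(v).mean(dim=-1).squeeze(-1)  # (batch,)


model = FourierNeuralMapping1d()
u = torch.randn(8, 64)   # batch of 8 input functions on a 64-point grid
print(model(u).shape)    # torch.Size([8]): one scalar observable each
```

Replacing the lifting or projection step with a linear map into or out of the Fourier layers would give vector-to-function or function-to-vector variants, which is the sense in which FNMs accommodate finite-dimensional input and output spaces.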
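The distinction between end-to-end and full-field learning can also be made concrete in a finite-dimensional linear setting. The following NumPy toy (not from the paper) shows the two estimation workflows for a linear PtO map y = ⟨g, Gu⟩ with a known functional g and unknown operator G. It is noiseless and finite-dimensional, so it illustrates the two pipelines rather than reproducing the paper's sample-complexity comparison, which concerns noisy data and infinite-dimensional inputs.

```python
# Hypothetical toy comparison (not from the paper) of end-to-end vs.
# full-field learning of a linear parameter-to-observable map.
# Setup: observable y = <g, G u>, with known functional g and unknown
# linear operator G, discretized on d grid points.
import numpy as np

rng = np.random.default_rng(0)
d, n = 32, 200                                  # state dim, sample count
G = rng.standard_normal((d, d)) / np.sqrt(d)    # ground-truth operator
g = rng.standard_normal(d)                      # known observation functional

U = rng.standard_normal((n, d))                 # input parameters u_1..u_n
F = U @ G.T                                     # full-field data: rows are G u_i
y = F @ g                                       # scalar observables <g, G u_i>

# End-to-end: regress y on u directly, estimating w with y ~ <w, u>.
# The true coefficient vector is w_true = G^T g.
w_e2e, *_ = np.linalg.lstsq(U, y, rcond=None)

# Full-field: estimate the whole operator from (u_i, G u_i) pairs,
# then compose with the known functional g.
G_hat, *_ = np.linalg.lstsq(U, F, rcond=None)   # G_hat ~ G^T (maps u -> G u)
w_ff = G_hat @ g

w_true = G.T @ g
print("end-to-end error:", np.linalg.norm(w_e2e - w_true))
print("full-field error:", np.linalg.norm(w_ff - w_true))
```

In this clean, overdetermined setting both routes recover w_true; the paper's contribution is to quantify, via Bayesian nonparametric analysis, the regimes in which the richer full-field data yields better sample complexity.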