AN OPERATOR LEARNING PERSPECTIVE ON PARAMETER-TO-OBSERVABLE MAPS

6 Jun 2024 | DANIEL ZHENGYU HUANG, NICHOLAS H. NELSEN, MARGARET TRAUTNER
The paper introduces Fourier Neural Mappings (FNMs), a framework that extends Fourier Neural Operators (FNOs) to accommodate finite-dimensional vector inputs and outputs. FNMs are designed to handle parameter-to-observable (PtO) maps, which are often defined implicitly through infinite-dimensional operators such as the solution operators of partial differential equations. The authors develop universal approximation theorems for FNMs and analyze the sample complexity of learning linear functionals under smoothness misspecification. They show that full-field learning of linear functionals that factorize into a linear quantity of interest (QoI) composed with a linear operator can be more data-efficient than end-to-end learning in certain regimes. Numerical experiments with FNMs on three nonlinear PtO problems demonstrate the benefits of the operator learning perspective. The paper also provides theoretical insight into the efficiency of the different learning approaches and compares them with standard fully connected neural networks.
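To make the "neural mappings" idea concrete, below is a minimal, untrained NumPy sketch of a vector-to-scalar map in the FNM spirit: a finite-dimensional input is lifted to a function on a grid, passed through FNO-style spectral convolution layers, and reduced to a scalar by a linear QoI. All names, shapes, and the sine-feature lift are illustrative assumptions for exposition, not the authors' actual architecture or parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_grid, n_modes, width = 64, 8, 4          # grid size, retained Fourier modes, channel width
grid = np.linspace(0.0, 1.0, n_grid, endpoint=False)

def fourier_layer(u, W, n_modes):
    """FNO-style spectral convolution: mix channels on the lowest Fourier modes."""
    u_hat = np.fft.rfft(u, axis=0)                      # (n_grid//2 + 1, width), complex
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = np.einsum("kio,ki->ko", W, u_hat[:n_modes])
    return np.fft.irfft(out_hat, n=u.shape[0], axis=0)  # back to physical space

def fnm_vector_to_scalar(v, W1, W2, lift, q):
    """Toy map: finite vector in -> function-valued latent -> scalar QoI out."""
    # Lift the finite-dimensional input to a function: f(x) = sum_j v_j sin(2*pi*(j+1)*x)
    f = np.sin(2.0 * np.pi * grid[:, None] * (np.arange(len(v)) + 1)) @ v
    u = f[:, None] * lift[None, :]                           # channel lift to width
    u = np.maximum(u + fourier_layer(u, W1, n_modes), 0.0)   # residual spectral block + ReLU
    u = fourier_layer(u, W2, n_modes)
    # Linear QoI: integrate the output function against fixed weights q
    return float(np.mean(u @ q))

# Random (untrained) parameters, just to exercise the forward pass
W1 = rng.normal(size=(n_modes, width, width)) + 1j * rng.normal(size=(n_modes, width, width))
W2 = rng.normal(size=(n_modes, width, width)) + 1j * rng.normal(size=(n_modes, width, width))
lift = rng.normal(size=width)
q = rng.normal(size=width)

y = fnm_vector_to_scalar(np.array([1.0, -0.5, 0.2]), W1, W2, lift, q)
```

Because the spectral weights act only on a fixed number of low Fourier modes, the same parameters can be applied at any grid resolution, which is the discretization-invariance property that distinguishes this family of models from a fully connected network on fixed grid values.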