15 Apr 2020 | Lu Lu, Pengzhan Jin, and George Em Karniadakis
The paper introduces DeepONets, a novel architecture for learning nonlinear operators from data. DeepONets address the challenge of approximating nonlinear continuous operators with neural networks, building on the result that neural networks are universal approximators not only of continuous functions but also of operators. The architecture consists of two sub-networks: a branch net that encodes the input function sampled at a fixed number of sensors, and a trunk net that encodes the locations at which the output function is evaluated. This design is shown to significantly reduce generalization error compared to fully connected neural networks (FNNs) when identifying both dynamic systems and partial differential equations (PDEs). The authors also provide theoretical analysis of the number of sensors required for accurate representation and derive how the approximation error depends on various factors. Computational results show high-order error convergence, including polynomial rates and even exponential convergence with respect to the training dataset size. The effectiveness of DeepONets is validated through simulations on a range of problems, demonstrating superior accuracy and generalization.
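As a concrete illustration of the branch/trunk design, here is a minimal PyTorch sketch of an (unstacked) DeepONet, where the two encodings are combined by a dot product plus a bias. The sensor count `m`, layer widths, depths, and ReLU activations below are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Minimal unstacked DeepONet: G(u)(y) ~ sum_k branch_k(u) * trunk_k(y) + b0."""

    def __init__(self, m: int, p: int = 64, width: int = 64):
        super().__init__()
        # Branch net: encodes the input function u sampled at m fixed sensor locations.
        self.branch = nn.Sequential(
            nn.Linear(m, width), nn.ReLU(),
            nn.Linear(width, p),
        )
        # Trunk net: encodes the location y where the output function is evaluated.
        self.trunk = nn.Sequential(
            nn.Linear(1, width), nn.ReLU(),
            nn.Linear(width, p), nn.ReLU(),
        )
        # Trainable bias added to the dot product of the two encodings.
        self.b0 = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
        # u_sensors: (batch, m) sensor values of u; y: (batch, 1) query locations.
        b = self.branch(u_sensors)  # (batch, p)
        t = self.trunk(y)           # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.b0  # (batch, 1)

# Smoke test on random data (hypothetical: m = 100 sensors, batch of 8 queries).
model = DeepONet(m=100)
u = torch.randn(8, 100)
y = torch.rand(8, 1)
print(model(u, y).shape)  # torch.Size([8, 1])
```

In training, each example pairs one input function (its sensor values) with one query location and the corresponding output value, so a single input function typically contributes many training triplets.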