Hidden Physics Models: Machine Learning of Nonlinear Partial Differential Equations

August 23, 2017 | Maziar Raissi and George Em Karniadakis
This paper introduces hidden physics models (HPMs), a new approach for learning partial differential equations (PDEs) from small data sets. HPMs are data-efficient learning machines that leverage the underlying physical laws, expressed by time-dependent and nonlinear PDEs, to extract patterns from high-dimensional data. The method uses Gaussian processes, a powerful tool for probabilistic inference over functions, to balance model complexity against data fit. The effectiveness of HPMs is demonstrated on canonical problems across scientific domains, including the Navier-Stokes, Schrödinger, Kuramoto-Sivashinsky, and time-dependent linear fractional equations.

From noisy measurements of a system, the framework addresses two distinct problems: inference (filtering and smoothing), and learning (system identification, or data-driven discovery of PDEs). The hyper-parameters of the covariance functions and the parameters of the differential operators are learned jointly by minimizing the negative log marginal likelihood with a quasi-Newton optimizer.

The results show that HPMs accurately identify the correct parameter values for a range of PDEs, even with limited data; performance improves with more data, less noise, and a smaller time gap between snapshots. The method is tested on the Burgers, KdV, Kuramoto-Sivashinsky, nonlinear Schrödinger, and Navier-Stokes equations, and it learns their parameters from noisy data even when the data is sparse. The paper also discusses the limitations of the current approach, notably the cubic scaling of exact Gaussian process inference with the number of training data points.
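The training procedure described above, minimizing the negative log marginal likelihood with a quasi-Newton optimizer, can be sketched for a plain Gaussian process regression. This is an illustrative simplification, not the paper's multi-output, PDE-constrained kernels; the squared-exponential kernel and the synthetic data are assumptions for the sake of a runnable example.

```python
# Hedged sketch: GP hyper-parameter learning by minimizing the negative log
# marginal likelihood (NLML) with a quasi-Newton optimizer (L-BFGS-B), as the
# summary describes. Kernel and data are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize

def sq_exp_kernel(x1, x2, log_ell, log_sf):
    # Squared-exponential covariance: k(x, x') = sf^2 exp(-(x - x')^2 / (2 ell^2))
    d = x1[:, None] - x2[None, :]
    return np.exp(2 * log_sf) * np.exp(-0.5 * d**2 / np.exp(2 * log_ell))

def neg_log_marginal_likelihood(theta, x, y):
    log_ell, log_sf, log_sn = theta
    n = x.size
    K = (sq_exp_kernel(x, x, log_ell, log_sf)
         + (np.exp(2 * log_sn) + 1e-8) * np.eye(n))  # noise variance + jitter
    L = np.linalg.cholesky(K)  # the O(n^3) step behind the cubic-scaling caveat
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    # NLML = 0.5 y^T K^{-1} y + 0.5 log|K| + (n/2) log(2 pi)
    return 0.5 * y @ alpha + np.sum(np.log(np.diag(L))) + 0.5 * n * np.log(2 * np.pi)

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(30)  # one noisy "snapshot"

# L-BFGS-B is a quasi-Newton method, matching the optimizer class in the summary.
res = minimize(neg_log_marginal_likelihood, x0=np.zeros(3), args=(x, y),
               method="L-BFGS-B")
print(res.fun)
```

In the paper the same NLML objective additionally carries the PDE's operator parameters, so physics and hyper-parameters are estimated in one optimization.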
However, the authors suggest that ideas such as recursive Kalman updates, variational inference, and parametric Gaussian processes can address this limitation. Compared with previous work, the current approach can estimate parameters appearing anywhere in the formulation of the PDE, while earlier methods are limited to parameters appearing as coefficients; it can also automatically filter arbitrarily noisy data through its Gaussian process prior assumptions, whereas earlier methods require a more elaborate noise treatment. The authors note that both approaches can be used effectively in different contexts, and that this work is only the beginning of a new way of thinking about, and formulating, new and possibly simpler equations.
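To make the coefficient-estimation setting concrete, here is a deliberately simplified, non-GP illustration of data-driven discovery from two closely spaced snapshots. It uses the heat equation u_t = ν u_xx with a known analytic solution and a plain least-squares fit on finite-difference derivatives; the grid, time gap, and true ν are illustrative assumptions, and this is the coefficient-only setting that the paper generalizes beyond.

```python
# Hypothetical illustration (not the paper's method): recover the coefficient
# nu in u_t = nu * u_xx from two solution snapshots by least squares on
# finite-difference derivatives.
import numpy as np

nu_true = 0.7
k = 2 * np.pi
x = np.linspace(0, 1, 201)
dt = 1e-3  # small time gap between snapshots, as the summary recommends

def u(t):
    # Analytic solution of the heat equation: u = exp(-nu k^2 t) sin(kx)
    return np.exp(-nu_true * k**2 * t) * np.sin(k * x)

u0, u1 = u(0.0), u(dt)
u_t = (u1 - u0) / dt                              # forward difference in time
dx = x[1] - x[0]
u_xx = (u0[2:] - 2 * u0[1:-1] + u0[:-2]) / dx**2  # central difference in space

# Least-squares estimate of nu from u_t ≈ nu * u_xx at interior grid points.
nu_hat = (u_xx @ u_t[1:-1]) / (u_xx @ u_xx)
print(nu_hat)
```

The recovered value approaches nu_true as the time gap shrinks, which mirrors the summary's observation that a smaller gap between snapshots improves identification; the HPM framework replaces the explicit finite differences with Gaussian process derivatives, which is what lets it handle noisy and sparse data.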