28 February 2024 | Francesco Regazzoni, Stefano Pagani, Matteo Salvador, Luca Dedè, Alfio Quarteroni
The paper introduces Latent Dynamics Networks (LDNets), a novel architecture designed to uncover low-dimensional intrinsic dynamics in non-Markovian systems. LDNets automatically discover a low-dimensional manifold while learning the system dynamics, eliminating the need for an auto-encoder and avoiding operations in the high-dimensional space. They predict the evolution of space-dependent fields without relying on predefined grids, which enables weight-sharing across query points. LDNets are lightweight and easy to train, and on highly nonlinear test problems they reach a normalized error about five times smaller than state-of-the-art methods while using significantly fewer trainable parameters.
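In symbols (notation adapted; $s(t)$ denotes the latent state, $u(t)$ the time-dependent input, $x$ a spatial query point, and $w_{\text{dyn}}, w_{\text{rec}}$ the trainable weights), the two sub-networks described in the next paragraph act as

$$
\dot{s}(t) = \mathcal{NN}_{\text{dyn}}\bigl(s(t), u(t); w_{\text{dyn}}\bigr), \qquad \tilde{y}(x, t) = \mathcal{NN}_{\text{rec}}\bigl(s(t), x; w_{\text{rec}}\bigr),
$$

with the latent state initialized at a fixed value (the latent variables carry no predefined physical meaning). The field at any point $x$ is obtained by evaluating $\mathcal{NN}_{\text{rec}}$ there, which is what makes the method grid-free.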
LDNets consist of two sub-networks: a dynamics network ($\mathcal{NN}_{\text{dyn}}$) and a reconstruction network ($\mathcal{NN}_{\text{rec}}$). $\mathcal{NN}_{\text{dyn}}$ evolves the dynamics of latent variables, while $\mathcal{NN}_{\text{rec}}$ reconstructs the output field. The architecture is trained via empirical risk minimization, with Tikhonov regularization to prevent overfitting. LDNets are tested on various benchmark problems, including linear PDE models, fluid dynamics, and a nonlinear excitation-propagation PDE model used in cardiac electrophysiology. Results show that LDNets achieve excellent accuracy, generalization capabilities, and efficient time-extrapolation, outperforming other methods in terms of prediction accuracy and computational efficiency.
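A minimal sketch of this two-network design in PyTorch (the class name, layer sizes, and the explicit-Euler time stepper are illustrative assumptions, not the authors' reference implementation):

```python
import torch
import torch.nn as nn

class LDNet(nn.Module):
    """Illustrative sketch of a Latent Dynamics Network (not the reference code)."""

    def __init__(self, n_input, n_latent, n_space, n_output, width=32):
        super().__init__()
        # NN_dyn: maps (latent state, current input) to the latent time derivative.
        self.nn_dyn = nn.Sequential(
            nn.Linear(n_latent + n_input, width), nn.Tanh(),
            nn.Linear(width, n_latent),
        )
        # NN_rec: maps (latent state, spatial query point) to the output field value.
        self.nn_rec = nn.Sequential(
            nn.Linear(n_latent + n_space, width), nn.Tanh(),
            nn.Linear(width, n_output),
        )
        self.n_latent = n_latent

    def forward(self, u, x, dt):
        # u: (batch, n_steps, n_input) input signals; x: (n_points, n_space) query points.
        batch, n_steps, _ = u.shape
        s = torch.zeros(batch, self.n_latent)  # latent state, initialized to zero
        outputs = []
        for k in range(n_steps):
            # Evolve the latent state with an explicit Euler step (one possible scheme).
            s = s + dt * self.nn_dyn(torch.cat([s, u[:, k]], dim=-1))
            # Query the field at every point x, sharing weights across query points.
            s_rep = s[:, None, :].expand(batch, x.shape[0], self.n_latent)
            x_rep = x[None, :, :].expand(batch, x.shape[0], x.shape[1])
            outputs.append(self.nn_rec(torch.cat([s_rep, x_rep], dim=-1)))
        return torch.stack(outputs, dim=1)  # (batch, n_steps, n_points, n_output)
```

Note how the latent dynamics never touch the spatial dimension: only $\mathcal{NN}_{\text{rec}}$ sees query points, one at a time, which is why no auto-encoder over the full field is needed.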
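Training then reduces to empirical risk minimization over the observed trajectories. A hedged sketch of one training step (loss weights, optimizer settings, and data shapes are assumptions):

```python
# Hypothetical training step for the sketch above: mean-squared misfit on the
# observed field plus a Tikhonov (L2) penalty on all trainable weights.
model = LDNet(n_input=1, n_latent=3, n_space=2, n_output=1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
alpha = 1e-5  # Tikhonov regularization strength (assumed value)

def training_step(u, x, dt, y_obs):
    optimizer.zero_grad()
    y_pred = model(u, x, dt)
    misfit = torch.mean((y_pred - y_obs) ** 2)
    tikhonov = sum(p.pow(2).sum() for p in model.parameters())
    loss = misfit + alpha * tikhonov
    loss.backward()
    optimizer.step()
    return loss.item()
```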