28 February 2024 | Francesco Regazzoni, Stefano Pagani, Matteo Salvador, Luca Dede', Alfio Quarteroni
This article introduces Latent Dynamics Networks (LDNets), a data-driven approach for learning the intrinsic dynamics of spatio-temporal processes. LDNets automatically discover low-dimensional latent representations of system states without requiring an explicit encoder or a high-dimensional discretization. They predict the evolution of space-dependent fields without predefined grids, enabling weight-sharing across query points. Compared with state-of-the-art methods, LDNets achieve a normalized error roughly 5 times smaller while using more than 10 times fewer trainable parameters, with the largest gains on highly nonlinear problems.
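The two-network structure described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the layer sizes, latent dimension, forward-Euler time stepping, and all function names (`dyn_net`, `rec_net`, `ldnet_predict`) are hypothetical choices made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_init(sizes):
    # Small random weights for a toy MLP (illustration only, untrained).
    return [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.tanh(x)
    return x

# Hypothetical dimensions: 2 input signals u(t), latent dim 3,
# 1D spatial queries, scalar output field.
dyn_net = mlp_init([2 + 3, 16, 3])   # (u, s) -> ds/dt (latent dynamics)
rec_net = mlp_init([3 + 1, 16, 1])   # (s, x) -> field value at query point x

def ldnet_predict(u_seq, x_query, dt=0.01):
    """Evolve the latent state causally, then decode it at query points."""
    s = np.zeros(3)                  # latent state, initialized at the origin
    outputs = []
    for u in u_seq:                  # recurrent, time-ordered stepping
        s = s + dt * mlp_apply(dyn_net, np.concatenate([u, s]))
        # The reconstruction network is evaluated point-by-point: the same
        # weights serve every spatial location, so the output is mesh-free.
        outputs.append(np.array([
            mlp_apply(rec_net, np.concatenate([s, [x]]))[0] for x in x_query
        ]))
    return np.stack(outputs)         # shape: (n_times, n_query_points)

pred = ldnet_predict(u_seq=np.ones((5, 2)), x_query=np.linspace(0.0, 1.0, 4))
print(pred.shape)  # (5, 4)
```

The key point the sketch makes concrete: the latent state is never produced by encoding a high-dimensional snapshot; it exists only as the solution of a learned low-dimensional dynamical system, and the field is reconstructed on demand at arbitrary coordinates.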
Traditional physics-based models, such as those built on partial or stochastic differential equations (PDEs, SDEs), are limited by high computational costs and by the deep physical understanding they require of the system being modeled. Data-driven methods, enabled by advances in machine learning, offer an alternative by learning models directly from data or as surrogates for high-fidelity models. LDNets excel in many-query scenarios and real-time applications, such as clinical settings.
The paper presents several test cases demonstrating LDNets' effectiveness. In Test Case 1, LDNets accurately predict the evolution of a linear advection-diffusion-reaction equation with varying input parameters. In Test Case 2, LDNets successfully model unsteady Navier-Stokes equations, showing robustness in time-extrapolation. In Test Case 3, LDNets outperform auto-encoder-based methods and the POD-DEIM method in predicting the dynamics of a 1D electrophysiology model, achieving significantly lower error with fewer parameters. In Test Case 4, LDNets accurately capture reentrant activity in a 2D electrophysiology model, demonstrating their ability to handle complex spatial patterns.
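For context on Test Case 1, a linear advection-diffusion-reaction equation for a scalar field $u(\mathbf{x}, t)$ has the generic form (the paper's specific coefficients and domain are not reproduced here):

```latex
\partial_t u + \mathbf{b} \cdot \nabla u - \mu \, \Delta u + \sigma u = f,
```

where $\mathbf{b}$ is the advection velocity, $\mu > 0$ the diffusion coefficient, $\sigma$ the reaction coefficient, and $f$ a source term; the "varying input parameters" mentioned above correspond to varying such coefficients or forcing terms across training samples.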
LDNets offer several advantages over traditional methods: they operate in a low-dimensional space without high-dimensional discretization, enable weight-sharing across query points, and provide continuous output representations. They also allow for the incorporation of physics-informed terms into the loss function and support stochastic training methods. The recurrent architecture of LDNets is consistent with the arrow of time, distinguishing them from other approaches that treat time as a parameter or require fixed-length input histories.
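The physics-informed loss terms mentioned above can be sketched as a composite objective: a data-fit term plus a penalty on a PDE residual evaluated by finite differences. This is a hedged illustration, not the paper's loss; the 1D diffusion residual, the weight `lam`, and the function name are hypothetical stand-ins.

```python
import numpy as np

def physics_informed_loss(pred, target, dx, dt, mu=0.1, lam=0.5):
    """Data-fit MSE plus a penalty on the residual of u_t = mu * u_xx.

    pred, target: arrays of shape (n_times, n_points) on a uniform grid.
    The diffusion equation here is a hypothetical stand-in for whatever
    PDE is assumed to govern the data.
    """
    data_loss = np.mean((pred - target) ** 2)
    # Forward difference in time, central second difference in space
    # (interior points only, so the stencils stay inside the grid).
    u_t = (pred[1:, 1:-1] - pred[:-1, 1:-1]) / dt
    u_xx = (pred[:-1, 2:] - 2 * pred[:-1, 1:-1] + pred[:-1, :-2]) / dx**2
    residual = u_t - mu * u_xx
    return data_loss + lam * np.mean(residual ** 2)

# Sanity check: a steady, spatially linear field satisfies the diffusion
# equation exactly (u_t = 0, u_xx = 0), so both terms vanish.
grid = np.linspace(0.0, 1.0, 11)
field = np.tile(grid, (6, 1))    # constant in time, linear in space
loss = physics_informed_loss(field, field, dx=0.1, dt=0.01)
```

Because the reconstruction network gives a continuous output, such residuals can in principle be evaluated at arbitrary collocation points rather than on a grid; the grid version above is just the simplest self-contained demonstration.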
The results show that LDNets achieve high accuracy with significantly fewer parameters than other methods, making them a promising tool for learning spatio-temporal dynamics in complex systems. They are particularly effective in scenarios with limited training data and for time-extrapolation, where traditional methods struggle. The paper concludes that LDNets represent an innovative and efficient approach for learning spatio-temporal dynamics in a data-driven manner.