5 Sep 2019 | Sam Greydanus, Misko Dzamba, Jason Yosinski
Hamiltonian Neural Networks (HNNs) are neural networks inspired by Hamiltonian mechanics, the branch of physics concerned with conservation laws and invariances. The goal of HNNs is to let neural networks learn and respect exact conservation laws in an unsupervised manner, which is especially useful for tasks where conservation of energy matters, such as the two-body problem or pixel observations of a pendulum. HNNs work by parameterizing the Hamiltonian — a scalar function of position and momentum whose partial derivatives give the system's equations of motion — with a neural network and learning it directly from data. This lets HNNs generalize better and train faster than baseline networks that predict the dynamics directly. An interesting side effect is that HNNs are perfectly reversible in time.
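The core idea can be sketched in a few lines of JAX. This is a hypothetical minimal implementation, not the authors' code: a small network outputs a scalar H(q, p), and the training loss compares the symplectic gradient (dq/dt = ∂H/∂p, dp/dt = -∂H/∂q) to observed time derivatives.

```python
import jax
import jax.numpy as jnp

def init_params(key, hidden=64):
    # Tiny MLP mapping a state (q, p) to a scalar Hamiltonian H_theta.
    k1, k2 = jax.random.split(key)
    return {
        "W1": jax.random.normal(k1, (2, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "W2": jax.random.normal(k2, (hidden, 1)) * 0.1,
        "b2": jnp.zeros(1),
    }

def hamiltonian(params, state):
    # state = (q, p); output is a single scalar.
    h = jnp.tanh(state @ params["W1"] + params["b1"])
    return (h @ params["W2"] + params["b2"]).squeeze()

def time_derivative(params, state):
    # Hamilton's equations: dq/dt = dH/dp, dp/dt = -dH/dq.
    dH = jax.grad(hamiltonian, argnums=1)(params, state)
    return jnp.stack([dH[1], -dH[0]])

def loss(params, states, targets):
    # Match the symplectic gradient to observed (dq/dt, dp/dt) pairs.
    preds = jax.vmap(lambda s: time_derivative(params, s))(states)
    return jnp.mean((preds - targets) ** 2)
```

Because the dynamics are derived from a single learned scalar, any trajectory integrated along this vector field conserves the learned H by construction — which is why conservation falls out of training rather than being imposed as an extra penalty.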
In the paper, the authors evaluate HNNs on three simple physics tasks: an ideal mass-spring system, an ideal pendulum, and a real pendulum. In each case, HNNs outperform baseline models in energy conservation and generalization. HNNs also scale to larger systems such as the two-body problem, and can even learn Hamiltonians from pixel data, as in the pixel-pendulum task. The learned models conserve a quantity that closely resembles total energy, even in the presence of noise and friction.
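For intuition on the energy-conservation comparison: the ideal mass-spring system has the known Hamiltonian H(q, p) = q²/2 + p²/2 (unit mass and spring constant assumed here for simplicity). A minimal sketch of the kind of check involved — roll out the dynamics with an integrator and track total energy along the trajectory:

```python
import numpy as np

def dynamics(state):
    # Ideal mass-spring: H(q, p) = q**2 / 2 + p**2 / 2,
    # so dq/dt = p and dp/dt = -q.
    q, p = state
    return np.array([p, -q])

def rk4_step(state, dt):
    # Classic fourth-order Runge-Kutta step.
    k1 = dynamics(state)
    k2 = dynamics(state + 0.5 * dt * k1)
    k3 = dynamics(state + 0.5 * dt * k2)
    k4 = dynamics(state + dt * k3)
    return state + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def energy(state):
    q, p = state
    return 0.5 * q**2 + 0.5 * p**2

state = np.array([1.0, 0.0])  # start at maximum displacement, E = 0.5
energies = []
for _ in range(1000):
    state = rk4_step(state, 0.01)
    energies.append(energy(state))
# Energy along the rollout stays close to the initial value of 0.5.
```

A baseline network that regresses (dq/dt, dp/dt) directly has no such constraint, so its rollouts typically gain or lose energy over time; an HNN's rollouts hold this quantity nearly constant.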
The paper also discusses the useful properties of HNNs, such as perfect reversibility and the ability to add or remove energy from the system. These properties make HNNs a promising approach for combining the strengths of physics-based models with the flexibility of neural networks. The authors conclude that HNNs offer a new method for learning physical laws from data.