2021 | Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
This paper introduces E(n)-Equivariant Graph Neural Networks (EGNNs), a new graph neural network model that is equivariant to rotations, translations, reflections, and permutations. Unlike existing equivariant methods, EGNNs achieve competitive or better performance without requiring computationally expensive higher-order representations in intermediate layers. The model scales easily to higher-dimensional spaces and is effective in tasks such as dynamical systems modelling, graph autoencoder representation learning, and molecular property prediction.
EGNNs are designed to be equivariant to transformations in the Euclidean group E(n), that is, translations, rotations, and reflections, and, like any graph neural network, to permutations of the nodes. Each layer keeps separate node embeddings and coordinate embeddings, and updates particle positions as a weighted sum of relative position vectors, which preserves equivariance by construction. The model also supports vector-valued representations, allowing it to track quantities such as particle momentum.
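The core building block is the equivariant graph convolutional layer (EGCL). The sketch below is a minimal PyTorch rendering of the paper's update equations; the MLP widths, the normalization constant C, and the omission of edge attributes a_ij are illustrative assumptions, and the velocity variant used for tracking momentum is left out for brevity.

```python
import torch
import torch.nn as nn

class EGCL(nn.Module):
    """One E(n)-equivariant graph convolutional layer, following the
    update equations in Satorras et al. (2021):
        m_ij  = phi_e(h_i, h_j, ||x_i - x_j||^2)
        x_i' = x_i + C * sum_{j != i} (x_i - x_j) * phi_x(m_ij)
        h_i' = phi_h(h_i, sum_{j != i} m_ij)
    Hidden sizes are illustrative choices, not the paper's exact ones."""

    def __init__(self, hidden_dim: int):
        super().__init__()
        # phi_e: edge MLP over (h_i, h_j, squared distance)
        self.phi_e = nn.Sequential(
            nn.Linear(2 * hidden_dim + 1, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.SiLU(),
        )
        # phi_x: scalar weight per edge for the coordinate update
        # (a single linear layer here; the paper uses a small MLP)
        self.phi_x = nn.Linear(hidden_dim, 1, bias=False)
        # phi_h: node MLP over (h_i, aggregated messages)
        self.phi_h = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.SiLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, h, x, edge_index):
        # h: (N, hidden_dim) node features; x: (N, n) coordinates
        # edge_index: (2, E) pairs (i, j) of connected nodes
        row, col = edge_index
        diff = x[row] - x[col]                         # (E, n) relative positions
        dist2 = (diff ** 2).sum(dim=-1, keepdim=True)  # invariant squared distance
        m_ij = self.phi_e(torch.cat([h[row], h[col], dist2], dim=-1))
        # Coordinate update: weighted sum of relative positions (equivariant)
        weights = self.phi_x(m_ij)                     # (E, 1) scalar per edge
        x_agg = torch.zeros_like(x).index_add_(0, row, diff * weights)
        x = x + x_agg / (x.shape[0] - 1)               # C = 1/(N-1), a simple choice
        # Node update from aggregated messages (invariant)
        m_i = torch.zeros_like(h).index_add_(0, row, m_ij)
        h = self.phi_h(torch.cat([h, m_i], dim=-1))
        return h, x
```

Because the node update only ever sees squared distances and the coordinate update is a linear combination of relative position vectors, the layer is equivariant in any dimension n, which is what makes the model cheap to scale beyond 3D.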
The paper compares EGNNs with existing methods such as standard Graph Neural Networks (GNNs), Tensor Field Networks (TFN), and the SE(3) Transformer. EGNNs match or outperform these methods while being more computationally efficient, particularly on tasks involving 3D point clouds and molecular property prediction. Effectiveness is demonstrated across dynamical system modelling, graph autoencoding, and molecular property prediction on the QM9 dataset.
EGNNs are also effective in graph autoencoding, where they outperform standard GNNs. Featureless graphs pose a symmetry problem: a GNN assigns structurally identical nodes identical embeddings, which makes exact reconstruction of the adjacency matrix impossible. The paper resolves this by injecting noise into the input; because the network is equivariant, the noise breaks the ties between symmetric nodes without destroying the model's symmetry properties, and adjacency matrices can be reconstructed accurately.
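A minimal sketch of that symmetry-breaking trick, reusing the hypothetical EGCL class above and assuming the noise enters as initial coordinates with an illustrative scale of 0.1:

```python
import torch

N, n, hidden_dim = 8, 3, 64
h = torch.ones(N, hidden_dim)                  # featureless graph: identical node features
x = 0.1 * torch.randn(N, n)                    # Gaussian noise as initial coordinates
idx = torch.arange(N)
row, col = torch.meshgrid(idx, idx, indexing="ij")
mask = row != col                              # fully connected, no self-loops
edge_index = torch.stack([row[mask], col[mask]])
h_out, x_out = EGCL(hidden_dim)(h, x, edge_index)
# A decoder can then score edges from pairwise embedding distances,
# e.g. A_hat[i, j] = sigmoid(w * ||z_i - z_j||^2 + b) for learned scalars w, b.
```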
In molecular property prediction, EGNNs perform well on the QM9 dataset, achieving competitive results while remaining simple and avoiding higher-order representations. The model captures the geometric structure of molecules through the relative distances between atoms, which are invariant to transformations in the Euclidean group.
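These symmetry claims are easy to verify numerically. The following hypothetical test, reusing the EGCL sketch above, checks that node features are invariant and output coordinates equivariant under a random orthogonal transformation and translation:

```python
import torch

torch.manual_seed(0)
layer = EGCL(hidden_dim=16)
h = torch.randn(5, 16)
x = torch.randn(5, 3)
idx = torch.arange(5)
row, col = torch.meshgrid(idx, idx, indexing="ij")
mask = row != col
edges = torch.stack([row[mask], col[mask]])

Q, _ = torch.linalg.qr(torch.randn(3, 3))  # random orthogonal matrix (rotation or reflection)
t = torch.randn(3)                         # random translation

h1, x1 = layer(h, x, edges)
h2, x2 = layer(h, x @ Q.T + t, edges)
print(torch.allclose(h1, h2, atol=1e-4))            # features are invariant
print(torch.allclose(x1 @ Q.T + t, x2, atol=1e-4))  # coordinates are equivariant
```

For an invariant target like a QM9 molecular property, only the invariant features h are read out, so the prediction is unchanged however the molecule is rotated or translated.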
Overall, EGNNs provide a flexible and efficient approach to learning graph neural networks that are equivariant to transformations in the Euclidean group, making them suitable for a wide range of applications in machine learning and data analysis.