2021 | Victor Garcia Satorras, Emiel Hoogeboom, Max Welling
This paper introduces E(n) Equivariant Graph Neural Networks (EGNNs), a new model for learning graph neural networks that are equivariant to rotations, translations, reflections, and permutations. Unlike existing methods, EGNNs do not require computationally expensive higher-order representations in intermediate layers while still achieving competitive or better performance. The model also scales to higher-dimensional spaces, unlike methods limited to 3D. The effectiveness of EGNNs is demonstrated on dynamical systems modeling, representation learning in graph autoencoders, and molecular property prediction. The paper provides a detailed analysis of the model's equivariance properties and compares it with other methods, showing superior performance across experiments.
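For concreteness, here is a minimal sketch of one EGNN layer following the paper's update equations: messages are computed from node features and squared inter-node distances (invariant quantities), coordinates are updated by a weighted sum of relative positions (equivariant), and features by the aggregated messages. PyTorch, the hidden sizes, and the MLP shapes are illustrative assumptions; the paper's normalization constant on the coordinate update is omitted for brevity.

```python
import torch
import torch.nn as nn

class EGNNLayer(nn.Module):
    """Minimal sketch of an EGNN layer (Satorras et al., 2021).

    Update equations from the paper:
        m_ij = phi_e(h_i, h_j, ||x_i - x_j||^2)
        x_i' = x_i + sum_{j != i} (x_i - x_j) * phi_x(m_ij)
        h_i' = phi_h(h_i, sum_{j != i} m_ij)
    """

    def __init__(self, h_dim: int, m_dim: int = 32):  # sizes are illustrative
        super().__init__()
        self.phi_e = nn.Sequential(nn.Linear(2 * h_dim + 1, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, m_dim), nn.SiLU())
        self.phi_x = nn.Sequential(nn.Linear(m_dim, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, 1))
        self.phi_h = nn.Sequential(nn.Linear(h_dim + m_dim, m_dim), nn.SiLU(),
                                   nn.Linear(m_dim, h_dim))

    def forward(self, h, x, edge_index):
        # h: [N, h_dim] node features; x: [N, n] coordinates (any dimension n)
        row, col = edge_index                       # [E] source / target indices
        diff = x[row] - x[col]                      # relative positions (equivariant)
        dist2 = (diff ** 2).sum(-1, keepdim=True)   # squared distances (invariant)
        m_ij = self.phi_e(torch.cat([h[row], h[col], dist2], dim=-1))
        # Coordinate update: a weighted sum of relative positions stays equivariant.
        x = x + torch.zeros_like(x).index_add_(0, row, diff * self.phi_x(m_ij))
        # Feature update from aggregated invariant messages.
        m_i = torch.zeros(h.size(0), m_ij.size(-1), device=h.device)
        m_i = m_i.index_add_(0, row, m_ij)
        h = self.phi_h(torch.cat([h, m_i], dim=-1))
        return h, x
```

Because the layer only consumes relative positions and squared distances, applying a rotation, reflection, or translation to `x` before the layer transforms the output coordinates identically while leaving `h` unchanged, which is the E(n) equivariance property the paper analyzes; nothing in the layer ties `x` to three dimensions.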