Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges

May 4, 2021 | Michael M. Bronstein, Joan Bruna, Taco Cohen, Petar Veličković
This text introduces the concept of Geometric Deep Learning, which applies the principles of the Erlangen Programme to deep learning. The Erlangen Programme, proposed by Felix Klein, emphasized the study of invariants under symmetry transformations; this idea has inspired geometric deep learning models that respect the structure and symmetries of their data domains. The text discusses the challenges of learning in high dimensions, the importance of geometric priors, and the role of symmetry, scale separation, and invariance in deep learning.
It outlines the five key geometric domains (grids, groups, graphs, geodesics, and gauges) and explores deep learning models that operate on them, including convolutional neural networks, group-equivariant CNNs, graph neural networks, and intrinsic mesh CNNs. The text also highlights how geometric principles help in understanding and improving deep learning models, and argues for a unified framework built on these principles. The authors contend that geometric deep learning provides a principled way to incorporate prior knowledge into neural architectures and to build future models. The text is intended for a broad audience of deep learning researchers, practitioners, and enthusiasts, and aims to provide a systematic overview of the field and its applications.
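As a minimal illustration of the symmetry principle discussed above, the sketch below (using NumPy; the specific filter and signal are arbitrary choices, not taken from the text) checks that a circular 1D convolution is *equivariant* to translation: shifting the input and then convolving gives the same result as convolving and then shifting. This is the core property that convolutional architectures inherit from the grid domain's translation symmetry.

```python
import numpy as np

def conv1d_circular(x, w):
    # Circular 1D cross-correlation of signal x with filter w:
    # y[i] = sum_j w[j] * x[(i + j) mod n]
    n, k = len(x), len(w)
    return np.array([sum(w[j] * x[(i + j) % n] for j in range(k))
                     for i in range(n)])

def shift(x, s):
    # Cyclic translation of the signal by s positions.
    return np.roll(x, s)

x = np.arange(8.0)               # toy signal
w = np.array([1.0, -2.0, 1.0])   # toy filter (discrete Laplacian)

# Equivariance: convolving a shifted signal equals shifting the convolved signal.
lhs = conv1d_circular(shift(x, 3), w)
rhs = shift(conv1d_circular(x, w), 3)
assert np.allclose(lhs, rhs)
```

The same commutation test generalizes directly to the other domains the text surveys: group-equivariant CNNs replace cyclic shifts with a larger symmetry group, and graph neural networks replace them with node permutations.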
[slides and audio] Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges