Group Equivariant Convolutional Networks

3 Jun 2016 | Taco S. Cohen, Max Welling
The paper introduces Group-equivariant Convolutional Neural Networks (G-CNNs), a generalization of convolutional neural networks that leverages symmetries to reduce sample complexity. G-CNNs use G-convolutions, which are layers that enjoy a higher degree of weight sharing compared to regular convolution layers, increasing the network's expressive capacity without increasing the number of parameters. G-convolutions are designed to be equivariant to various transformations, such as translations, rotations, and reflections, making them suitable for tasks with symmetry. The paper discusses the mathematical framework for G-CNNs, including the definition of symmetry groups and the transformation properties of functions on these groups. It also presents the implementation details for G-convolutions and demonstrates their effectiveness through experiments on the rotated MNIST and CIFAR10 datasets, achieving state-of-the-art results. The authors show that G-convolutions can be used as drop-in replacements for standard convolutions in modern network architectures, improving performance without additional tuning.
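To make the weight-sharing idea concrete, here is a minimal NumPy/SciPy sketch of the first-layer ("lifting") G-correlation for the group p4, i.e. translations combined with rotations by multiples of 90 degrees. The function name `p4_lift_correlate` is my own, not the paper's: it correlates the input with four rotated copies of a single filter, so the four output planes share one set of weights, and rotating the input rotates each plane while cyclically permuting the rotation channels.

```python
import numpy as np
from scipy.signal import correlate2d

def p4_lift_correlate(f, psi):
    """Lifting G-correlation from Z^2 to the group p4 (sketch).

    Correlates image f with the four 90-degree rotations of filter psi,
    producing one output plane per rotation. All four planes share the
    weights of the single filter psi.
    """
    return np.stack([correlate2d(f, np.rot90(psi, r), mode="valid")
                     for r in range(4)])

# Equivariance check: rotating the input rotates each output plane and
# cyclically shifts the rotation channels, so the symmetry is preserved
# rather than discarded.
rng = np.random.default_rng(0)
f = rng.standard_normal((8, 8))      # toy square image
psi = rng.standard_normal((3, 3))    # single 3x3 filter

out = p4_lift_correlate(f, psi)                # shape (4, 6, 6)
out_rot = p4_lift_correlate(np.rot90(f), psi)  # same filter, rotated input

for r in range(4):
    assert np.allclose(out_rot[r], np.rot90(out[(r - 1) % 4]))
```

The assertion at the end verifies the defining equivariance property for this toy setup: a 90-degree rotation of the input is equivalent to rotating every output plane and shifting the rotation index by one, which is exactly the structured weight sharing that lets G-CNNs gain expressive capacity without extra parameters.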