6 Dec 2016 | Federico Monti¹*, Davide Boscaini¹*, Jonathan Masci¹⁴, Emanuele Rodolà¹, Jan Svoboda¹, Michael M. Bronstein¹²³
This paper introduces a unified framework for geometric deep learning on non-Euclidean domains such as graphs and manifolds, extending the capabilities of traditional convolutional neural networks (CNNs). The proposed framework, named Mixture Model Networks (MoNet), allows for the design of convolutional architectures that learn local, stationary, and compositional task-specific features on these non-Euclidean structures. MoNet generalizes several existing methods, including Geodesic CNNs (GCNN) and Anisotropic CNNs (ACNN) on manifolds, and Graph Convolutional Networks (GCN) and Diffusion-Convolutional Neural Networks (DCNN) on graphs. The key innovation lies in the parametric construction of patch operators using Gaussian kernels, whose parameters are learned so that the kernels match local intrinsic patches on the graph or manifold. Because the resulting filters are intrinsic rather than tied to a fixed domain, they generalize better across different graphs or manifolds.
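The patch-operator construction described above can be sketched in a few lines of NumPy. This is a hedged illustration, not the paper's implementation: it assumes diagonal covariances for the Gaussian kernels, a single output channel, and dense pseudo-coordinates given per directed edge; the names `monet_conv`, `u`, `mu`, `sigma_inv`, and `g` are illustrative.

```python
import numpy as np

def monet_conv(f, edges, u, mu, sigma_inv, g):
    """Sketch of one MoNet convolution on a graph (single output channel).

    f         : (N, Q) input node features
    edges     : list of directed edges (x, y), y in the neighborhood of x
    u         : (E, d) pseudo-coordinates u(x, y), one row per edge
    mu        : (J, d) learnable Gaussian kernel means
    sigma_inv : (J, d) learnable diagonal inverse covariances
    g         : (J, Q) learnable filter coefficients
    """
    N, Q = f.shape
    J = mu.shape[0]

    # Kernel weights: w_j(u) = exp(-0.5 (u - mu_j)^T Sigma_j^{-1} (u - mu_j)),
    # here with a diagonal Sigma_j.
    diff = u[:, None, :] - mu[None, :, :]                          # (E, J, d)
    w = np.exp(-0.5 * np.sum(diff ** 2 * sigma_inv[None], axis=-1))  # (E, J)

    # Patch operator: D_j(x) f = sum over neighbors y of w_j(u(x, y)) f(y)
    D = np.zeros((N, J, Q))
    for (x, y), wj in zip(edges, w):
        D[x] += wj[:, None] * f[y]

    # Convolution: (f * g)(x) = sum_j <g_j, D_j(x) f>
    return np.einsum('njq,jq->n', D, g)
```

With geodesic polar coordinates as `u` one recovers GCNN-style patches, and with degree-based coordinates GCN-style aggregation, which is the sense in which MoNet subsumes those models.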
The paper demonstrates the effectiveness of MoNet through experiments on image classification, vertex classification on graphs, and dense intrinsic correspondence between 3D shapes. In image classification, MoNet outperforms spectral CNNs and other non-Euclidean deep learning methods, particularly when the underlying graph structure varies between training and test data. For vertex classification on graphs, MoNet outperforms GCN and DCNN, especially on smaller graphs with significant structural variation. In the 3D shape correspondence task, MoNet achieves high accuracy and deformation invariance, outperforming competing methods.
The main contributions of the paper include the introduction of MoNet, a flexible and powerful framework for geometric deep learning, and the validation of its effectiveness through comprehensive experiments. The framework's ability to handle non-Euclidean data and its intrinsic nature make it particularly suitable for applications in computational social sciences and other fields where data is naturally structured on graphs or manifolds.