Limits of Depth: Over-Smoothing and Over-Squashing in GNNs

March 2024 | Aafaq Mohi ud din* and Shaima Qureshi
This paper investigates the challenges of depth in Graph Neural Networks (GNNs), focusing on over-smoothing and over-squashing. The study explores how increasing depth leads to over-smoothing, where node representations become nearly indistinguishable, and over-squashing, where information from a growing receptive field is compressed into fixed-size vectors, losing detail. The research also examines the impact of node degree on classification accuracy, finding that low-degree nodes are harder to classify. The study compares isotropic and anisotropic GNNs; anisotropic models perform better because their attention mechanisms let them focus on relevant neighbors. Experiments on several datasets show that anisotropic GNNs handle deep architectures better and reduce over-squashing. The study also examines the scalability of different GNN models and the trade-off between depth and performance, finding that attention-based models, while effective, may not scale to large datasets due to their computational complexity. The research offers insights into the design of deep GNNs and directions for future improvements in handling graph-structured data.
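To make the over-smoothing effect concrete, here is a minimal, self-contained sketch (not taken from the paper; the random graph, feature dimension, and spread metric are illustrative assumptions). It applies repeated isotropic mean aggregation, the GCN-style update without attention, and tracks how far node features stay from their global mean; the spread collapsing toward zero is over-smoothing.

```python
import numpy as np

# Toy illustration (not from the paper): repeated isotropic mean
# aggregation drives node features toward each other (over-smoothing).

rng = np.random.default_rng(0)

# Small random undirected graph with self-loops (assumed toy setup).
n = 20
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.maximum(A, A.T)            # symmetrize
np.fill_diagonal(A, 1.0)          # add self-loops

# Row-normalized adjacency: each layer averages a node's neighborhood,
# the isotropic update used by GCN-style models (no attention weights).
P = A / A.sum(axis=1, keepdims=True)

X = rng.standard_normal((n, 8))   # random initial node features

def feature_spread(H):
    """Mean distance of node features from the global mean feature."""
    return np.linalg.norm(H - H.mean(axis=0), axis=1).mean()

for depth in range(16):
    if depth % 4 == 0:
        print(f"layer {depth:2d}: spread = {feature_spread(X):.4f}")
    X = P @ X                      # one isotropic message-passing layer

# The spread shrinks toward 0 with depth: node representations become
# nearly identical, which is the over-smoothing effect the paper studies.
```

Anisotropic models such as GAT replace the fixed averaging matrix with per-node attention weights recomputed at every layer, which slows this collapse but adds per-edge computation, consistent with the scalability trade-off noted above.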