March 2024 | Aafaq Mohi ud din* and Shaima Qureshi
The paper "Limits of Depth: Over-Smoothing and Over-Squashing in GNNs" by Aafaq Mohi ud din and Shaima Qureshi examines how depth affects the performance of Graph Neural Networks (GNNs), with a particular focus on isotropic and anisotropic models. The authors investigate the trade-off between depth and performance, showing that increasing depth can lead to over-smoothing and that the bottleneck effect (over-squashing) further degrades accuracy. They also examine how node degree influences classification accuracy, finding that low-degree nodes are the hardest to classify correctly. Using benchmark datasets, the study evaluates a range of isotropic and anisotropic GNN models to assess their scalability and performance. Key findings include the effectiveness of anisotropic models in mitigating over-smoothing and the importance of accounting for node degree in classification tasks. The research offers practical guidance for designing deeper GNNs and suggests potential avenues for future improvements.
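The over-smoothing effect the authors describe can be sketched numerically: repeatedly applying a GCN-style propagation step makes node representations converge, so deeper stacks leave nodes harder to tell apart. The toy path graph, the propagation-only setup (no learned weights or nonlinearities), and the depth values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy undirected path graph on 5 nodes (illustrative choice, not from the paper).
A = np.zeros((5, 5))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    A[i, j] = A[j, i] = 1.0

# GCN-style propagation matrix: add self-loops, then symmetrically normalize,
# i.e. A_hat = D^{-1/2} (A + I) D^{-1/2}.
A_loop = A + np.eye(5)
d = A_loop.sum(axis=1)
A_norm = A_loop / np.sqrt(np.outer(d, d))

# Random node features; apply only the propagation step k times to isolate
# the smoothing effect of depth from any learned transformation.
rng = np.random.default_rng(0)
X = rng.standard_normal((5, 3))

spreads = []
for depth in (1, 2, 8, 32):
    H = np.linalg.matrix_power(A_norm, depth) @ X
    spread = H.std(axis=0).mean()  # how distinguishable the nodes remain
    spreads.append(spread)
    print(f"depth={depth:3d}  mean feature std across nodes={spread:.4f}")
```

As depth grows, the features collapse toward a single dominant direction and the std across nodes shrinks, which is the over-smoothing behaviour (and why anisotropic, edge-weighted aggregation can help: it breaks this uniform averaging).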