Understanding Transformer Reasoning Capabilities via Graph Algorithms

28 May 2024 | Clayton Sanford, Bahare Fatemi, Ethan Hall, Anton Tsitsulin, Mehran Kazemi, Jonathan Halcrow, Bryan Perozzi, and Vahab Mirrokni
This paper investigates the algorithmic reasoning capabilities of transformers, particularly for solving graph problems. The authors introduce a novel representational hierarchy that categorizes nine graph reasoning tasks into three classes: retrieval, parallelizable, and search. They prove that logarithmic-depth transformers are necessary and sufficient to solve parallelizable tasks, while single-layer transformers with small embedding dimensions suffice for retrieval tasks. The study also highlights transformers' strong performance on global graph reasoning tasks such as connectivity and shortest path, where they outperform specialized graph neural networks (GNNs) in certain scenarios. Empirical results on the GraphQA benchmark support the theoretical findings, demonstrating that transformers excel at tasks requiring long-range dependencies. The paper concludes by discussing the implications of these findings and suggesting directions for future research.
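
To make the task classes concrete, here is a minimal Python sketch of how one of the parallelizable tasks, graph connectivity, can be posed as a text prompt in the spirit of GraphQA-style benchmarks. The prompt wording and function names are illustrative assumptions, not the paper's exact encoding, and the ground-truth label is computed with an ordinary breadth-first search for reference.

```python
# Illustrative sketch (not the paper's exact format): serialize a connectivity
# question as text, and compute the ground-truth answer with BFS.
from collections import deque

def encode_connectivity_task(edges, source, target):
    """Serialize a graph and a connectivity question into one prompt string."""
    edge_text = ", ".join(f"({u}, {v})" for u, v in edges)
    return (
        f"Graph edges: {edge_text}. "
        f"Question: Is there a path from node {source} to node {target}? "
        "Answer yes or no."
    )

def connectivity_label(edges, source, target):
    """Ground-truth answer via breadth-first search over an undirected graph."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    frontier, seen = deque([source]), {source}
    while frontier:
        node = frontier.popleft()
        if node == target:
            return "yes"
        for nbr in adj.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                frontier.append(nbr)
    return "no"

# Example: a small graph with two components (nodes 0-2 connected, 3-4 separate).
edges = [(0, 1), (1, 2), (3, 4)]
print(encode_connectivity_task(edges, 0, 2))  # prompt fed to the transformer
print(connectivity_label(edges, 0, 2))        # expected answer: "yes"
print(connectivity_label(edges, 0, 3))        # expected answer: "no"
```

Connectivity is "parallelizable" in the paper's hierarchy because reachability can be decided in logarithmically many parallel rounds (e.g., by repeatedly squaring the adjacency relation), which matches the logarithmic-depth transformer bound described above.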