The paper introduces the Graph Multi-Attention Network (GMAN), a novel approach for long-term traffic prediction on road network graphs. GMAN addresses the challenges of complex spatio-temporal correlations and error propagation by incorporating spatial and temporal attention mechanisms, gated fusion, and a transform attention layer. The encoder-decoder architecture of GMAN encodes historical traffic features and predicts future traffic conditions, with the transform attention layer converting the encoded features into future representations. Experimental results on two real-world datasets (Xiamen and PeMS) demonstrate that GMAN outperforms state-of-the-art methods, achieving up to a 4% improvement in Mean Absolute Error (MAE) for 1-hour-ahead predictions. The model also shows superior fault tolerance and computational efficiency.
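The gated fusion step combines the outputs of the spatial and temporal attention branches via a learned gate. A minimal NumPy sketch of this idea is shown below; the dimensions, weight names (`w_s`, `w_t`, `b`), and random initialization are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(h_s, h_t, w_s, w_t, b):
    # Gate z in (0, 1) weights spatial vs. temporal representations:
    #   z = sigmoid(h_s @ w_s + h_t @ w_t + b)
    #   fused = z * h_s + (1 - z) * h_t
    z = sigmoid(h_s @ w_s + h_t @ w_t + b)
    return z * h_s + (1.0 - z) * h_t

rng = np.random.default_rng(0)
n, d = 4, 8  # hypothetical: 4 road-network nodes, hidden dimension 8
h_s = rng.standard_normal((n, d))  # output of the spatial attention branch
h_t = rng.standard_normal((n, d))  # output of the temporal attention branch
w_s = rng.standard_normal((d, d))  # untrained illustrative weights
w_t = rng.standard_normal((d, d))
b = np.zeros(d)

fused = gated_fusion(h_s, h_t, w_s, w_t, b)
print(fused.shape)  # (4, 8)
```

Because the gate is a sigmoid, each fused element is an element-wise convex combination of the two branch outputs, which lets the model adaptively balance spatial and temporal information per node and feature.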