GMAN: A Graph Multi-Attention Network for Traffic Prediction


26 Nov 2019 | Chuanpan Zheng, Xiaoliang Fan, Cheng Wang, Jianzhong Qi
This paper proposes a graph multi-attention network (GMAN) for traffic prediction on road network graphs. The model addresses the challenges of long-term traffic prediction, namely complex spatio-temporal correlations and error propagation.

GMAN uses an encoder-decoder architecture built from multiple spatio-temporal attention blocks to model the impact of spatio-temporal factors on traffic conditions. The encoder encodes the input traffic features, while the decoder predicts future traffic conditions. A transform attention layer converts the encoded traffic features into representations of the future time steps, and a spatio-temporal embedding captures both the graph structure and time information.

The encoder and decoder are composed of stacked ST-Attention blocks, each containing a spatial attention mechanism, a temporal attention mechanism, and a gated fusion. The spatial attention mechanism dynamically assigns weights to vertices according to their importance, the temporal attention mechanism models non-linear temporal correlations, and the gated fusion adaptively combines the spatial and temporal representations. The transform attention mechanism models direct relationships between historical and future time steps to alleviate error propagation.

Experiments on two real-world traffic prediction tasks (traffic volume and speed prediction), evaluated on the Xiamen and PeMS datasets, show that GMAN outperforms state-of-the-art methods by up to 4% in MAE for 1-hour-ahead predictions and demonstrates superior performance and fault tolerance.
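To make the gated fusion concrete, below is a minimal sketch in PyTorch of how the spatial and temporal representations could be merged. The gate z = sigmoid(H_S W1 + H_T W2 + b) decides, element-wise, how much of each view to keep. Module name, tensor shapes, and parameterization are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Sketch of GMAN's gated fusion: merge the spatial representation H_S
    and the temporal representation H_T with a learned, element-wise gate.
    Tensors are assumed to have shape (batch, num_steps, num_vertices, d)."""

    def __init__(self, d: int):
        super().__init__()
        self.w_s = nn.Linear(d, d, bias=False)  # projection of H_S
        self.w_t = nn.Linear(d, d, bias=True)   # projection of H_T (carries the bias)

    def forward(self, h_s: torch.Tensor, h_t: torch.Tensor) -> torch.Tensor:
        # z = sigmoid(H_S W1 + H_T W2 + b), a gate in (0, 1)
        z = torch.sigmoid(self.w_s(h_s) + self.w_t(h_t))
        # Convex combination of the two views, controlled by the gate
        return z * h_s + (1.0 - z) * h_t


# Example: 8 samples, 12 historical steps, 325 sensors, 64-dim features
fusion = GatedFusion(d=64)
h = fusion(torch.randn(8, 12, 325, 64), torch.randn(8, 12, 325, 64))
```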
GMAN achieves state-of-the-art results in traffic prediction, particularly in long-term forecasts. The model is effective in capturing complex spatio-temporal correlations and is suitable for other spatio-temporal prediction tasks.
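The transform attention layer can be sketched in the same spirit. The version below is a simplified single-head variant (the paper uses multi-head attention with non-linear projections); it shows how each future step queries all historical steps directly, per vertex, so predictions do not need to be rolled out recursively. The shapes and variable names are assumptions for illustration, not the reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformAttention(nn.Module):
    """Simplified single-head sketch of the transform attention layer.
    Each of the Q future steps attends directly to all P historical steps,
    which avoids step-by-step roll-out and the error propagation it causes."""

    def __init__(self, d: int):
        super().__init__()
        self.q = nn.Linear(d, d)  # queries from future spatio-temporal embeddings
        self.k = nn.Linear(d, d)  # keys from historical spatio-temporal embeddings
        self.v = nn.Linear(d, d)  # values from the encoder output

    def forward(self, enc_out, ste_hist, ste_future):
        # enc_out, ste_hist: (batch, P, N, d); ste_future: (batch, Q, N, d)
        # Put the vertex axis before time so attention runs over time steps.
        q = self.q(ste_future).transpose(1, 2)                 # (batch, N, Q, d)
        k = self.k(ste_hist).transpose(1, 2)                   # (batch, N, P, d)
        v = self.v(enc_out).transpose(1, 2)                    # (batch, N, P, d)
        scores = q @ k.transpose(-2, -1) / q.size(-1) ** 0.5   # (batch, N, Q, P)
        attn = F.softmax(scores, dim=-1)                       # weights over history
        out = attn @ v                                         # (batch, N, Q, d)
        return out.transpose(1, 2)                             # (batch, Q, N, d)
```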