Actional-Structural Graph Convolutional Networks for Skeleton-based Action Recognition
26 Apr 2019 | Maosen Li, Siheng Chen, Xu Chen, Ya Zhang, Yanfeng Wang, and Qi Tian
This paper proposes the Actional-Structural Graph Convolutional Network (AS-GCN) for skeleton-based action recognition. AS-GCN combines actional links (A-links) and structural links (S-links) to capture richer dependencies among joints. A-links are inferred directly from action sequences to capture action-specific latent dependencies, while S-links represent higher-order relationships in the skeleton graph beyond the physical bone connections. The network stacks actional-structural graph convolutions and temporal convolutions to learn both spatial and temporal features for action recognition. A future pose prediction head is added as a self-supervised task to capture more detailed action patterns. AS-GCN is validated on two large-scale datasets, NTU-RGB+D and Kinetics, achieving significant improvements over state-of-the-art methods, and it also shows promising results for future pose prediction. By capturing both action-specific and structural dependencies among joints, the proposed method improves performance in both action recognition and future pose prediction.
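The core idea of combining S-links and A-links can be illustrated with a minimal sketch of one graph-convolution layer. This is not the authors' implementation: the layer name, the symmetric adjacency normalization, the single-frame feature shape, and the mixing weight `alpha` are all simplifying assumptions made for illustration.

```python
import numpy as np

def normalize_adjacency(A):
    """Symmetric normalization D^{-1/2} (A + I) D^{-1/2} of a joint adjacency matrix."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)                   # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def as_gcn_layer(X, A_structural, A_actional, W_s, W_a, alpha=0.5):
    """Toy actional-structural graph convolution over one frame.

    X:            (num_joints, in_features) joint features
    A_structural: (num_joints, num_joints) skeleton-graph adjacency (S-links)
    A_actional:   (num_joints, num_joints) inferred actional adjacency (A-links),
                  assumed here to be row-normalized weights from an inference module
    W_s, W_a:     (in_features, out_features) learnable weights per branch
    alpha:        hypothetical mixing coefficient between the two branches
    """
    S = normalize_adjacency(A_structural)
    out = S @ X @ W_s + alpha * (A_actional @ X @ W_a)
    return np.maximum(out, 0.0)             # ReLU

# Tiny 3-joint example: a chain skeleton plus uniform actional links.
X = np.ones((3, 2))
A_s = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
A_a = np.full((3, 3), 1.0 / 3.0)
W_s = np.ones((2, 4))
W_a = np.ones((2, 4))
Y = as_gcn_layer(X, A_s, A_a, W_s, W_a)     # shape (3, 4)
```

In the full model, stacks of such spatial layers alternate with temporal convolutions over the frame axis, and the A-link adjacency is produced by a learned inference module rather than fixed as above.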