Dynamic Semantic-Based Spatial Graph Convolution Network for Skeleton-Based Human Action Recognition

2024 | Jianyang Xie, Yanda Meng*, Yitian Zhao, Anh Nguyen, Xiaoyun Yang, Yalin Zheng*
This paper introduces a dynamic semantic-based spatial graph convolution network (DS-GCN) for skeleton-based human action recognition. The method addresses a limitation of existing graph convolutional networks (GCNs) by implicitly encoding joint and edge types into the skeleton topology. Two semantic modules, a joint type-aware adaptive topology and an edge type-aware adaptive topology, are introduced to capture the semantic information carried by joints and edges. These modules are combined with temporal convolution to form the DS-GCN framework. Extensive experiments on the NTU-RGB+D and Kinetics-400 datasets show that the proposed semantic modules are effective and generalizable, and that DS-GCN outperforms state-of-the-art methods. The main contributions are the implicit encoding of joint and edge types in GCNs and the development of a dynamic semantic-based graph neural network for skeleton-based human action recognition.
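To make the block structure described above concrete, the following is a minimal sketch (not the authors' implementation) of one spatial graph convolution layer with a learnable adaptive adjacency followed by a temporal convolution. The adaptive adjacency here is a simple stand-in for the paper's joint/edge type-aware topologies, and all class and parameter names are hypothetical.

```python
# Hypothetical sketch of a "spatial GCN + temporal conv" block, assuming a
# PyTorch setup and skeleton input of shape (batch, channels, frames, joints).
import torch
import torch.nn as nn

class SpatialTemporalBlock(nn.Module):
    def __init__(self, in_channels, out_channels, num_joints, temporal_kernel=9):
        super().__init__()
        # Learnable topology added to the fixed skeleton adjacency
        # (placeholder for the paper's semantic, type-aware topologies).
        self.adaptive_adj = nn.Parameter(torch.zeros(num_joints, num_joints))
        self.spatial = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        pad = (temporal_kernel - 1) // 2
        self.temporal = nn.Conv2d(out_channels, out_channels,
                                  kernel_size=(temporal_kernel, 1),
                                  padding=(pad, 0))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x, skeleton_adj):
        # x: (N, C, T, V); skeleton_adj: (V, V) physical skeleton connections.
        adj = skeleton_adj + self.adaptive_adj          # dynamic topology
        x = torch.einsum('nctv,vw->nctw', x, adj)       # spatial aggregation
        x = self.relu(self.spatial(x))                  # feature transform
        return self.relu(self.temporal(x))              # temporal convolution

# Usage: 25 joints (as in the NTU-RGB+D skeleton), 64 frames, 3D coordinates.
block = SpatialTemporalBlock(3, 64, num_joints=25)
adj = torch.eye(25)                                     # placeholder adjacency
out = block(torch.randn(2, 3, 64, 25), adj)             # -> (2, 64, 64, 25)
```

In DS-GCN the adaptive topology is additionally conditioned on joint and edge types rather than being a single free parameter matrix; the sketch only illustrates where such a topology plugs into the spatial-temporal block.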