CrossHAR: Generalizing Cross-dataset Human Activity Recognition via Hierarchical Self-Supervised Pretraining

June 2024 | Zhiqing Hong (JD Logistics, China and Rutgers University, USA), Zelong Li (JD Logistics, China), Shuxin Zhong (Rutgers University, USA), Wenjun Lyu (Rutgers University, USA), Haotian Wang (JD Logistics, China), Yi Ding (University of Texas at Dallas, USA), Tian He (JD Logistics, China), Desheng Zhang (Rutgers University, USA)
The paper addresses the challenge of domain shift in cross-dataset human activity recognition (HAR), which arises from variations in users, device types, and sensor placements between source and target datasets. To improve model performance on unseen target datasets, CrossHAR employs a three-step approach: (i) physically-informed sensor data augmentation to diversify the data distribution of the raw sensor data, (ii) hierarchical self-supervised pretraining on the augmented data to learn a generalizable representation, and (iii) fine-tuning with a small set of labeled data from the source dataset to enhance performance in cross-dataset HAR. Extensive experiments on multiple real-world HAR datasets show that CrossHAR outperforms state-of-the-art methods by 10.83% in accuracy, demonstrating its effectiveness in generalizing to unseen target datasets. The paper also evaluates the impact of individual components and the scalability of CrossHAR with larger unlabeled datasets.
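The summary does not spell out the physically-informed augmentations, but a minimal sketch of typical IMU augmentations used to mimic device and placement variation (random 3D rotation for orientation changes, jitter for sensor noise, scaling for amplitude differences) could look like the following. The function names and parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def rotate(x, max_deg=30.0, rng=np.random):
    """Randomly rotate a (T, 3) accelerometer window to mimic
    different device orientations (illustrative assumption)."""
    angles = np.deg2rad(rng.uniform(-max_deg, max_deg, size=3))
    cx, cy, cz = np.cos(angles)
    sx, sy, sz = np.sin(angles)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return x @ (Rz @ Ry @ Rx).T

def jitter(x, sigma=0.05, rng=np.random):
    """Add Gaussian noise to simulate sensor noise."""
    return x + rng.normal(0.0, sigma, size=x.shape)

def scale(x, sigma=0.1, rng=np.random):
    """Rescale each axis to simulate amplitude variation across devices."""
    return x * rng.normal(1.0, sigma, size=(1, x.shape[1]))

# Example: augment one 2-second window sampled at 50 Hz.
window = np.random.randn(100, 3)           # placeholder accelerometer data
augmented = scale(jitter(rotate(window)))  # (100, 3) augmented view
```

Augmented views like these would then feed the hierarchical self-supervised pretraining stage before fine-tuning on the labeled source data.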