HARGPT: Are LLMs Zero-Shot Human Activity Recognizers?

5 Mar 2024 | Sijie Ji*, Xinzhe Zheng*, Chenshu Wu
The paper "HARGPT: Are LLMs Zero-Shot Human Activity Recognizers?" by Sijie Ji, Xinzhe Zheng, and Chenshu Wu from the University of Hong Kong explores the potential of Large Language Models (LLMs) in zero-shot human activity recognition (HAR) using raw IMU data. The study, named HARGPT, demonstrates that LLMs can effectively recognize human activities without the need for fine-tuning or domain-specific expertise, achieving an average accuracy of 80% on unseen data. The research uses two datasets—Capture24 and HHAR—to evaluate the performance of LLMs, comparing them against traditional machine learning and deep learning models. The experiments show that LLMs, particularly GPT4, outperform baselines in both inter-class difference and inter-class similarity tasks, highlighting their robustness and potential for analyzing raw sensor data in Cyber-Physical Systems (CPS). The study also discusses the logical reasoning abilities of LLMs and the need to address issues like perfunctory answers to improve real-time interaction with CPS. The findings suggest that LLMs have significant potential in interpreting the physical world and could transform the field of HAR.The paper "HARGPT: Are LLMs Zero-Shot Human Activity Recognizers?" by Sijie Ji, Xinzhe Zheng, and Chenshu Wu from the University of Hong Kong explores the potential of Large Language Models (LLMs) in zero-shot human activity recognition (HAR) using raw IMU data. The study, named HARGPT, demonstrates that LLMs can effectively recognize human activities without the need for fine-tuning or domain-specific expertise, achieving an average accuracy of 80% on unseen data. The research uses two datasets—Capture24 and HHAR—to evaluate the performance of LLMs, comparing them against traditional machine learning and deep learning models. The experiments show that LLMs, particularly GPT4, outperform baselines in both inter-class difference and inter-class similarity tasks, highlighting their robustness and potential for analyzing raw sensor data in Cyber-Physical Systems (CPS). The study also discusses the logical reasoning abilities of LLMs and the need to address issues like perfunctory answers to improve real-time interaction with CPS. The findings suggest that LLMs have significant potential in interpreting the physical world and could transform the field of HAR.