1 Feb 2024 | ZIKANG LENG, College of Computing, Georgia Institute of Technology, USA; AMITRAJIT BHATTACHARJEE, School of Interactive Computing, Georgia Institute of Technology, USA; HRUDHAI RAJASEKHAR, School of Interactive Computing, Georgia Institute of Technology, USA; LIZHE ZHANG, College of Engineering, Georgia Institute of Technology, USA; ELIZABETH BRUDA, College of Computing, Georgia Institute of Technology, USA; HYEOKHYEN KWON, Department of Biomedical Informatics, Emory University, USA; THOMAS PLÖTZ, School of Interactive Computing, Georgia Institute of Technology, USA
The paper "IMUGPT 2.0: Language-Based Cross Modality Transfer for Sensor-Based Human Activity Recognition" by Zikang Leng, Amitrajit Bhattacharjee, Hrudhai Rajasekhar, Lizhe Zhang, Elizabeth Bruda, Hyeokhyen Kwon, and Thomas Plötz addresses the challenge of limited labeled datasets in human activity recognition (HAR). The authors propose IMUGPT 2.0, an enhanced version of the initial IMUGPT system, which uses language-based cross-modality transfer to generate virtual IMU data for HAR applications. Key contributions include:
1. **Diversity Metrics**: The introduction of diversity metrics that quantify the diversity of the generated textual descriptions and motion sequences, which help identify when to stop generating data so that efficiency and downstream performance remain balanced (see the diversity-metric sketch after this list).
2. **Motion Filter**: A motion filter that identifies and removes irrelevant motion sequences produced by the motion synthesis model, ensuring that only relevant motion contributes to the virtual IMU data (see the filtering sketch after this list).
3. **Efficiency and Practicality**: The proposed enhancements allow for a 50% reduction in the time and computational resources required for data generation, making IMUGPT more practical for real-world applications.
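The summary does not spell out the exact diversity metric, so the following is a minimal sketch of one plausible approach: track distinct word n-grams across the generated descriptions and stop once a new batch of descriptions contributes almost nothing new. The n-gram order, window size, and `min_gain` threshold are illustrative assumptions, not the authors' formulation.

```python
# Illustrative diversity check for generated activity descriptions.
# NOTE: the metric (distinct word n-grams) and the stopping threshold are
# assumptions for demonstration; they are not the paper's exact metric.
from typing import List, Set, Tuple


def ngrams(text: str, n: int = 3) -> Set[Tuple[str, ...]]:
    """Return the set of word n-grams in a description."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}


def should_stop(descriptions: List[str],
                window: int = 50,
                min_gain: float = 0.02) -> bool:
    """Stop generating once the most recent `window` descriptions add fewer
    than `min_gain` (as a fraction) new n-grams relative to the existing pool."""
    if len(descriptions) <= window:
        return False
    seen: Set[Tuple[str, ...]] = set()
    for d in descriptions[:-window]:
        seen |= ngrams(d)
    new: Set[Tuple[str, ...]] = set()
    for d in descriptions[-window:]:
        new |= ngrams(d) - seen
    gain = len(new) / max(len(seen), 1)
    return gain < min_gain
```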
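The motion filter can be sketched along similar lines as a classifier-based relevance check: keep a synthesized motion sequence only if an activity classifier attributes it to the intended activity with sufficient confidence. The `classify` callable, its probability interface, and the confidence threshold are assumptions introduced for illustration, not the paper's specific filter design.

```python
# Hypothetical motion-filter sketch: keep a synthesized motion sequence only
# if an activity classifier assigns it to the intended activity with enough
# confidence. The classifier interface and threshold are assumptions.
from typing import Callable, Dict, List

import numpy as np


def filter_motions(motions: List[np.ndarray],
                   target_activity: str,
                   classify: Callable[[np.ndarray], Dict[str, float]],
                   min_confidence: float = 0.6) -> List[np.ndarray]:
    """Drop sequences the classifier does not attribute to the target activity."""
    kept = []
    for seq in motions:
        probs = classify(seq)                      # activity label -> probability
        predicted = max(probs, key=probs.get)      # most likely activity
        if predicted == target_activity and probs[predicted] >= min_confidence:
            kept.append(seq)
    return kept
```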
The authors conduct a comprehensive evaluation of IMUGPT 2.0, comparing different large language models (LLMs) and motion synthesis models, and demonstrate that using GPT-3.5 and T2M-GPT for virtual IMU data generation yields the best downstream performance. The evaluation also shows that the proposed diversity metrics significantly reduce the time and resources needed for data generation, making IMUGPT 2.0 more viable for practical HAR applications.
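To make the overall workflow concrete, the sketch below shows how the components described above might fit together in an IMUGPT-style generation loop: an LLM produces activity descriptions, a text-to-motion model synthesizes motion, the motion filter discards irrelevant sequences, and the diversity check decides when to stop. All component functions are placeholders standing in for GPT-3.5, T2M-GPT, the motion filter, and the IMU extraction step; only the control flow is illustrated, not the authors' implementation.

```python
# End-to-end sketch of an IMUGPT-style generation loop (control flow only).
# generate_descriptions, synthesize_motion, passes_motion_filter,
# motion_to_virtual_imu, and diversity_saturated are placeholder callables.
from typing import Any, Callable, List


def generate_virtual_imu(activity: str,
                         generate_descriptions: Callable[[str, int], List[str]],
                         synthesize_motion: Callable[[str], Any],
                         passes_motion_filter: Callable[[Any, str], bool],
                         motion_to_virtual_imu: Callable[[Any], Any],
                         diversity_saturated: Callable[[List[str]], bool],
                         batch_size: int = 20) -> List[Any]:
    descriptions: List[str] = []
    virtual_imu: List[Any] = []
    while not diversity_saturated(descriptions):
        batch = generate_descriptions(activity, batch_size)   # LLM-generated text
        descriptions.extend(batch)
        for text in batch:
            motion = synthesize_motion(text)                   # text -> motion sequence
            if passes_motion_filter(motion, activity):         # drop irrelevant motion
                virtual_imu.append(motion_to_virtual_imu(motion))
    return virtual_imu
```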