YOLOv8-ACU: improved YOLOv8-pose for facial acupoint detection

01 February 2024 | Zijian Yuan, Pengwei Shao, Jinran Li, Yinuo Wang, Zixuan Zhu, Weijie Qiu, Buqun Chen, Yan Tang and Aiqing Han
This study introduces YOLOv8-ACU, an improved version of the YOLOv8-pose algorithm for facial acupoint detection. The model strengthens acupoint feature extraction by integrating an ECA attention module, replaces the original neck with a lighter Slim-neck module, and adopts GIoU as the bounding-box loss function.

On self-constructed datasets, YOLOv8-ACU achieves an mAP@0.5 of 97.5% and an mAP@0.5–0.95 of 76.9%, while reducing model parameters by 0.44M, model size by 0.82MB, and GFLOPs by 9.3%. Evaluated on two datasets, Acupoint-I and Acupoint-II, the model attains high precision and recall, and comparisons with other models show clear gains in both accuracy and efficiency.

These results indicate that YOLOv8-ACU improves recognition accuracy, efficiency, and generalization ability, making it a promising solution for facial acupoint localization and detection, with potential applications in clinical practice. The study also discusses the model's limitations and suggests future improvements, such as expanding the categories of acupoints covered and enhancing performance under different conditions, contributing to the development of efficient and accurate acupoint detection models for healthcare and medical research.
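The modules named above are standard published components. For reference, here is a minimal PyTorch sketch of an ECA attention block following the ECA-Net formulation (global average pooling, a 1D convolution across channels with an adaptively sized kernel, and a sigmoid gate). This is an illustrative reimplementation under those assumptions, not the authors' released code.

```python
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    """Efficient Channel Attention: GAP -> 1D conv over channels -> sigmoid gate."""

    def __init__(self, channels: int, gamma: int = 2, b: int = 1):
        super().__init__()
        # Kernel size adapts to the channel count: k = |log2(C)/gamma + b/gamma|, made odd.
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size=k, padding=k // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> per-channel descriptor (N, C, 1, 1)
        y = self.pool(x)
        # Treat channels as a 1D sequence: (N, C, 1, 1) -> (N, 1, C) -> conv -> (N, C, 1, 1)
        y = self.conv(y.squeeze(-1).transpose(-1, -2))
        y = self.sigmoid(y.transpose(-1, -2).unsqueeze(-1))
        # Reweight the input feature map channel-wise
        return x * y
```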
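The Slim-neck design is built from GSConv, which produces half of the output channels with a standard convolution, the other half with a depthwise convolution on that result, then concatenates and channel-shuffles the two halves. A minimal sketch under those assumptions (kernel sizes follow the Slim-neck paper's reference design; the exact configuration in YOLOv8-ACU may differ):

```python
import torch
import torch.nn as nn

class GSConv(nn.Module):
    """GSConv: half standard conv + half depthwise conv, concatenated and shuffled."""

    def __init__(self, c1: int, c2: int, k: int = 1, s: int = 1):
        super().__init__()
        c_ = c2 // 2
        # Standard convolution producing half the output channels
        self.cv1 = nn.Sequential(
            nn.Conv2d(c1, c_, k, s, k // 2, bias=False), nn.BatchNorm2d(c_), nn.SiLU()
        )
        # Cheap 5x5 depthwise convolution on that half
        self.cv2 = nn.Sequential(
            nn.Conv2d(c_, c_, 5, 1, 2, groups=c_, bias=False), nn.BatchNorm2d(c_), nn.SiLU()
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1 = self.cv1(x)
        x2 = torch.cat((x1, self.cv2(x1)), dim=1)
        # Channel shuffle: interleave the standard and depthwise halves
        n, c, h, w = x2.shape
        return x2.view(n, 2, c // 2, h, w).transpose(1, 2).reshape(n, c, h, w)
```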
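Similarly, the GIoU loss mentioned above extends IoU with a penalty based on the smallest enclosing box C of the two boxes A and B: GIoU = IoU - |C \ (A ∪ B)| / |C|, and the loss is 1 - GIoU. A minimal batched sketch for boxes in (x1, y1, x2, y2) corner format (again a reference implementation, not the paper's code):

```python
import torch

def giou_loss(pred: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """GIoU loss for (N, 4) boxes in (x1, y1, x2, y2) format."""
    # Intersection rectangle
    x1 = torch.max(pred[:, 0], target[:, 0])
    y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2])
    y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(min=0) * (y2 - y1).clamp(min=0)

    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / union.clamp(min=1e-7)

    # Smallest enclosing box C
    cx1 = torch.min(pred[:, 0], target[:, 0])
    cy1 = torch.min(pred[:, 1], target[:, 1])
    cx2 = torch.max(pred[:, 2], target[:, 2])
    cy2 = torch.max(pred[:, 3], target[:, 3])
    c_area = (cx2 - cx1) * (cy2 - cy1)

    # GIoU subtracts the fraction of C not covered by the union
    giou = iou - (c_area - union) / c_area.clamp(min=1e-7)
    return (1.0 - giou).mean()
```

Unlike plain IoU, this loss still provides a useful gradient when predicted and ground-truth boxes do not overlap, which is why it is a common drop-in replacement for box-regression losses.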