EyeEcho: Continuous and Low-power Facial Expression Tracking on Glasses


May 11–16, 2024, Honolulu, HI, USA | Ke Li, Ruidong Zhang, Siyuan Chen, Boao Chen, Mose Sakashita, François Guimbretière, Cheng Zhang
**Abstract:** This paper introduces EyeEcho, a minimally obtrusive acoustic sensing system that enables glasses to continuously monitor facial expressions. Two pairs of speakers and microphones mounted on the glasses emit encoded inaudible acoustic signals to capture the subtle skin deformations that accompany facial expressions. The reflected signals are processed by a customized machine-learning pipeline to estimate full facial movements. EyeEcho samples at 83.3 Hz and consumes only 167 mW. A user study with 12 participants demonstrates highly accurate tracking across real-world scenarios, including sitting, walking, and remounting the device. A semi-in-the-wild study with 10 participants further validates EyeEcho's performance in naturalistic settings, and the system can also be deployed on a commercial smartphone for real-time facial expression tracking.

**Key Contributions:**
- A continuous facial expression tracking system on glasses built on low-power acoustic sensing.
- User studies evaluating EyeEcho's performance in both lab and real-world settings.
- A real-time processing pipeline running on an Android phone.

**Related Work:** The paper surveys existing non-wearable and wearable facial expression tracking technologies, highlights their limitations, and compares them with EyeEcho in terms of tracking capability, obtrusiveness, power consumption, and performance.

**Background:** The paper defines continuous facial expression tracking and explains EyeEcho's principle of operation: FMCW-based acoustic sensing emits inaudible chirps whose frequency sweeps over a fixed band, and correlating the received echoes with the transmitted chirp yields echo profiles that change as the skin on the face deforms.
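As a rough illustration of the sensing principle, the sketch below generates one FMCW chirp frame and recovers an echo profile by cross-correlation. All parameters here are illustrative assumptions rather than EyeEcho's published configuration, although a 600-sample frame at 50 kHz does reproduce the 83.3 Hz frame rate quoted above.

```python
# Minimal FMCW echo-profile sketch (parameters are assumptions, not
# EyeEcho's exact configuration).
import numpy as np
from scipy.signal import chirp, correlate

FS = 50_000              # microphone sample rate (Hz), assumed
F0, F1 = 16_000, 20_000  # inaudible sweep band (Hz), assumed
FRAME = 600              # samples per chirp frame -> FS / FRAME ≈ 83.3 Hz

t = np.arange(FRAME) / FS
tx = chirp(t, f0=F0, f1=F1, t1=t[-1], method="linear")  # transmitted chirp

def echo_profile(rx: np.ndarray) -> np.ndarray:
    """Cross-correlate one received frame with the transmitted chirp.

    Peaks correspond to reflections at different path lengths;
    frame-to-frame changes in the profile encode skin deformation.
    """
    return correlate(rx, tx, mode="same")

# Toy check: an echo delayed by 20 samples (~14 cm round trip at 343 m/s)
rx = 0.3 * np.roll(tx, 20) + 0.01 * np.random.randn(FRAME)
profile = echo_profile(rx)
print(int(np.argmax(np.abs(profile))) - FRAME // 2)  # prints ≈ 20
```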
**Design and Implementation:** The hardware prototype combines MEMS microphones, miniature speakers, and a Bluetooth module on a glasses frame, and is designed to be minimally obtrusive and lightweight. A deep learning model consumes the acoustic data through a sliding window and estimates facial expressions for each window.
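The sliding-window stage can be pictured as follows. This is a hypothetical sketch: the window length, stride, feature shape, and differencing step are stand-ins for the paper's actual pipeline, with the frame-to-frame difference included because subtracting consecutive echo profiles is a common way to suppress static reflections in this kind of acoustic sensing.

```python
# Hypothetical sliding-window batching for echo-profile frames; the
# window size, stride, and model are placeholders, not the published
# EyeEcho architecture.
import numpy as np

def sliding_windows(frames: np.ndarray, win: int = 50, stride: int = 5):
    """Yield overlapping windows from a (T, D) array of echo profiles.

    win=50 frames is ~0.6 s of context at 83.3 frames/s (assumed).
    """
    for start in range(0, frames.shape[0] - win + 1, stride):
        yield frames[start:start + win]            # shape (win, D)

frames = np.random.randn(500, 600)                 # ~6 s of toy profiles
for window in sliding_windows(frames):
    # Differential profile: consecutive-frame subtraction keeps only
    # motion-induced changes before the model sees the features.
    features = window[1:] - window[:-1]
    # prediction = model(features)                 # regressor goes here
```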
**Evaluation:** Results from an in-lab study and a semi-in-the-wild study show that EyeEcho tracks facial expressions accurately and reliably across a variety of real-world scenarios. Performance is reported as Mean Absolute Error (MAE), lower-face MAE (LMAE), upper-face MAE (UMAE), the percentage of frames with LMAE under 40 (PL40), and the percentage of frames with UMAE under 60 (PU60).
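To make those metrics concrete, here is one plausible implementation. Which parameter indices count as "lower face" versus "upper face", the 52-parameter layout, and the 0-100 value scale are assumptions for illustration; only the metric definitions themselves come from the summary above.

```python
# Plausible implementation of MAE / LMAE / UMAE / PL40 / PU60; the
# parameter count, index split, and value scale are assumptions.
import numpy as np

def metrics(pred: np.ndarray, gt: np.ndarray, lower: slice, upper: slice):
    """pred, gt: (T, P) arrays of per-frame facial parameters."""
    err = np.abs(pred - gt)
    lmae = err[:, lower].mean(axis=1)   # per-frame lower-face MAE
    umae = err[:, upper].mean(axis=1)   # per-frame upper-face MAE
    return {
        "MAE":  float(err.mean()),
        "LMAE": float(lmae.mean()),
        "UMAE": float(umae.mean()),
        "PL40": float((lmae < 40).mean() * 100),  # % frames, LMAE < 40
        "PU60": float((umae < 60).mean() * 100),  # % frames, UMAE < 60
    }

# Toy example with 52 hypothetical parameters on a 0-100 scale
pred = np.random.rand(1000, 52) * 100
gt   = np.random.rand(1000, 52) * 100
print(metrics(pred, gt, lower=slice(0, 28), upper=slice(28, 52)))
```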
**Conclusion:** EyeEcho advances continuous facial expression tracking on glasses by offering a low-power, minimally obtrusive solution that can run in real time on a commercial smartphone.