MoodCapture: Depression Detection Using In-the-Wild Smartphone Images

May 11-16, 2024 | Subigya Nepal, Arvind Pillai, Weichen Wang, Tess Griffin, Amanda C. Collins, Michael Heinz, Damien Lekkas, Shayan Mirjafari, Matthew Nemesure, George Price, Nicholas C. Jacobson, Andrew T. Campbell
MoodCapture presents a novel approach to assessing depression using images captured from smartphones in natural, everyday environments. The study collects 125,335 photos from 177 participants diagnosed with major depressive disorder over 90 days. Images are captured while participants respond to the PHQ-8 depression survey item "I have felt down, depressed, or hopeless," and the analysis examines image attributes such as phone angle, dominant colors, location, objects, and illumination. The study then evaluates machine learning and deep learning models trained on these in-the-wild images for depression detection and PHQ-8 score prediction. A random forest trained with 3D face landmarks classifies samples as depressed or non-depressed and predicts PHQ-8 scores, achieving a balanced accuracy of 0.60, a Matthews Correlation Coefficient (MCC) of 0.14, and a Mean Absolute Error (MAE) of 130.31, a 6% improvement over baseline. Post-hoc analysis provides insights through ablation studies, feature-importance analysis, and bias assessment. The study also identifies features important for HCI design, evaluates user acceptance regarding privacy, and offers critical insights into privacy concerns for future mental health assessment tools.
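The modeling setup described above (a random forest over 3D face-landmark features, evaluated with balanced accuracy, MCC, and MAE) can be sketched roughly as follows. This is a minimal illustration, not the paper's actual pipeline: the data is synthetic, and the feature dimensions (e.g., 68 landmarks with 3 coordinates each) are assumptions for the sake of the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.metrics import (balanced_accuracy_score, matthews_corrcoef,
                             mean_absolute_error)
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical stand-in data: 3D face landmarks flattened into feature
# vectors (assumed here: 68 landmarks x 3 coordinates = 204 features).
n_samples, n_features = 1000, 204
X = rng.normal(size=(n_samples, n_features))
y_cls = rng.integers(0, 2, size=n_samples)   # depressed / non-depressed label
y_phq = rng.integers(0, 25, size=n_samples)  # PHQ-8 total score (range 0-24)

X_tr, X_te, yc_tr, yc_te, yp_tr, yp_te = train_test_split(
    X, y_cls, y_phq, test_size=0.2, random_state=0
)

# Binary classification: depressed vs. non-depressed
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, yc_tr)
pred = clf.predict(X_te)
bal_acc = balanced_accuracy_score(yc_te, pred)
mcc = matthews_corrcoef(yc_te, pred)

# Regression: predicting the PHQ-8 score directly
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X_tr, yp_tr)
mae = mean_absolute_error(yp_te, reg.predict(X_te))
print(f"balanced accuracy={bal_acc:.2f}, MCC={mcc:.2f}, MAE={mae:.2f}")
```

On random labels like these the metrics hover near chance; the paper's reported numbers come from its real landmark features and ground-truth PHQ-8 responses.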
The study contributes to the intersection of Human-Computer Interaction (HCI) research and mental health assessment by investigating the potential of machine learning and deep learning models trained on in-the-wild smartphone images to identify depressive symptoms. The results emphasize the value of considering a range of methods for depression detection in naturalistic conditions, from deep learning models capable of learning complex features to traditional machine learning techniques that offer interpretability and simplicity. The study also discusses its limitations and offers concluding remarks. Its findings have tangible, real-world implications, including the potential for early depression detection, timely interventions, improved clinical outcomes, and better overall wellbeing for individuals.