EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations

May 11–16, 2024 | Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, Katrien Verbert
This research explores the effectiveness of different types of global explanations in supporting healthcare experts in improving machine learning (ML) models through manual and automated data configurations. The study investigates the impact of data-centric and model-centric explanations on trust, understandability, and model improvement. Two user studies (n=70 quantitative, n=30 qualitative) were conducted to evaluate the influence of these explanations.

**Key Findings:**

- **Model steering:** Participants using the hybrid explanation condition (HYB) improved prediction-model performance significantly more than those using data-centric explanations (DCE) or model-centric explanations (MCE). Despite a higher perceived task load, HYB participants were more effective at improving model accuracy.
- **Trust and understanding:** There were no significant differences in objective understanding, perceived understanding, or trust across explanation types. However, data-centric explanations enhanced understanding of post-configuration system changes, and participants expressed higher trust in the system when data issues were transparently explained.
- **Efficiency and effectiveness:** HYB users were faster and more effective in manual configurations, while MCE users were less successful in both manual and automated configurations, highlighting the importance of giving domain experts more control over the training data.

**Design Implications:**

- **Hybrid explanation dashboard:** Combining data-centric and model-centric explanations provides the most effective and efficient model steering.
- **Data configuration mechanisms:** Manual configurations offer more control and flexibility, while automated configurations can address common data issues but require better transparency and user guidance.

**Conclusion:** Model-centric explanations alone are insufficient for domain experts in model steering. Data-centric explanations are more valuable for improving model performance and understanding post-configuration changes. The hybrid approach, despite a higher perceived task load, is the most effective for enhancing prediction models and user trust.
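To make the contrast between the two explanation types concrete, here is a minimal, hypothetical sketch in Python (not the EXMOS implementation): permutation feature importance stands in for a model-centric global explanation, while simple data-quality summaries (class balance, missing values, outlier counts) stand in for a data-centric explanation. The dataset, feature names, and thresholds are illustrative assumptions.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical tabular health data (synthetic, for illustration only).
rng = np.random.default_rng(0)
n = 200
X = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "bmi": np.round(rng.normal(27, 4, n), 1),
    "systolic_bp": np.round(rng.normal(130, 15, n), 0),
})
# Hypothetical label: risk rises with blood pressure and BMI.
y = ((X["systolic_bp"] > 140) | (X["bmi"] > 32)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Model-centric global explanation: which features drive predictions overall.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(X.columns, result.importances_mean):
    print(f"{name}: mean permutation importance {score:.3f}")

# Data-centric global explanation: summarise potential training-data issues
# (class balance, missing values, simple outlier counts) for expert review.
print("class balance:", y_train.value_counts(normalize=True).round(2).to_dict())
print("missing values:", X_train.isna().sum().to_dict())
z_scores = (X_train - X_train.mean()) / X_train.std()
print("potential outliers per feature:", (z_scores.abs() > 3).sum().to_dict())
```

In a steering workflow like the one studied here, a domain expert would act on such data-quality signals (for example, correcting or removing flagged records) and retrain the model, which corresponds to the manual configuration path described above.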