May 11–16, 2024 | Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, Katrien Verbert
EXMOS: Explanatory Model Steering Through Multifaceted Explanations and Data Configurations
Aditya Bhattacharya, Simone Stumpf, Lucija Gosak, Gregor Stiglic, and Katrien Verbert present EXMOS, a system that enables users to fine-tune prediction models using Explainable AI (XAI) and Interactive Machine Learning (IML). The research explores how different types of global explanations support domain experts, such as healthcare professionals, in improving ML models through manual and automated data configurations. The system dynamically updates explanations and predicted outcomes.
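The steering loop described above — the expert reconfigures the training data, the model retrains, and the global explanation refreshes — can be sketched as follows. This is an illustrative Python/scikit-learn sketch under assumed names (`retrain_and_explain` is hypothetical), not the authors' EXMOS implementation:

```python
# Illustrative sketch of an explanatory model-steering loop (not EXMOS's code).
# A domain expert reconfigures the training data (here: dropping a feature they
# judge misleading); the model is retrained and a global explanation
# (permutation feature importance) is recomputed so the interface can update.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

def retrain_and_explain(X, y, feature_names):
    """Retrain the model and return it with an updated global explanation."""
    model = RandomForestClassifier(random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=5, random_state=0)
    return model, dict(zip(feature_names, imp.importances_mean))

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
names = ["f0", "f1", "f2", "f3"]
model, importances = retrain_and_explain(X, y, names)

# The expert flags "f3" as a misleading predictor: drop it and re-steer.
keep = [i for i, n in enumerate(names) if n != "f3"]
model2, importances2 = retrain_and_explain(X[:, keep], y,
                                           [names[i] for i in keep])
```

After each such data configuration, both the explanation dictionary and the model's predictions change together, which is the dynamic-update behaviour the summary describes.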
The study investigates the influence of data-centric and model-centric global explanations on healthcare experts' trust, understanding, and model improvement. Quantitative (n=70) and qualitative (n=30) studies were conducted with healthcare experts to explore the impact of the different explanation types. Results show that the hybrid version, combining data-centric and model-centric explanations, was most effective for model steering, while data-centric explanations alone mainly enhanced understanding of how the system changed after a data configuration.
The research contributes three main findings: (1) the instantiation of generic designs for global data-centric, model-centric, and hybrid explanations through the healthcare-focused EXMOS system; (2) the evaluation of the impact of these explanations on model steering by healthcare experts, showing the hybrid combination to be most effective; and (3) guidelines for designing explanations and data configuration mechanisms to facilitate domain experts in model steering.
The study highlights the importance of involving domain experts in model steering, as their domain knowledge is crucial for identifying potentially misleading and biased predictors. Current one-off explanations, such as feature importances or saliency maps, are insufficient for supporting these users. Instead, domain experts require interactive explanations of the training data to better understand the model and improve prediction models by configuring the training data.
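To make the distinction concrete, the two kinds of global explanation contrasted in the study can be sketched side by side. This is a minimal illustration under assumed data, not the paper's implementation: the "model-centric" importance here is a simple correlation stand-in for a trained model's feature importances.

```python
# Illustrative contrast (not the paper's code): a data-centric global
# explanation (training-data quality summary) vs. a model-centric one
# (global feature importances) over the same training set.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[rng.random(100) < 0.1, 2] = np.nan      # inject missing values into f2
y = (X[:, 0] > 0).astype(int)             # label driven by f0 only

# Data-centric explanation: per-feature summary of training-data quality,
# which helps an expert spot gaps or biases before configuring the data.
data_summary = {
    f"f{j}": {"missing": int(np.isnan(X[:, j]).sum()),
              "mean": float(np.nanmean(X[:, j]))}
    for j in range(X.shape[1])
}

# Model-centric explanation: global feature importances; here approximated
# by absolute correlation with the label as a stand-in for a trained model.
Xf = np.nan_to_num(X)
importances = {f"f{j}": float(abs(np.corrcoef(Xf[:, j], y)[0, 1]))
               for j in range(X.shape[1])}
```

A hybrid presentation, per the study's findings, would show both views together: the data summary explains *what the model learned from*, while the importances explain *what the model relies on*.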
The research also emphasizes the need for data-centric AI, as high-quality training data is critical for the increased adoption of AI systems in high-stakes domains like healthcare. The hybrid combination of data-centric and model-centric explanations proved most effective for improving prediction model performance, despite imposing a higher perceived task load. The findings suggest that domain experts benefit from explanations that clarify both the training data and the model's behavior, and from being given greater control over the prediction system to support model steering.