Supplementary Material – Towards Balancing Preference and Performance through Adaptive Personalized Explainability


March 11–14, 2024
This supplementary material accompanies a study on balancing preference and performance through adaptive personalized explainability, comprising two user studies: a population study and a personalization study.

In the population study, participants were exposed to three xAI modalities (language, feature importance, and decision trees), completed nine navigation tasks, and then ranked the modalities by preference. In the personalization study, participants were randomly assigned to either an adaptive personalization strategy or a baseline condition, completed three tasks under each strategy, and filled out a preference survey after each.

Incorrect explanations were constructed by including "red-herring" features such as weather, traffic, or the president's motorcade; participants were warned about these features at the start of the study. Correct explanations omitted red-herring features and focused on relevant details such as the shortest path or obstacles. The study examined how participants responded to correct versus incorrect explanations and how those explanations shaped their decisions.

Participants were generally able to identify incorrect explanations, but they sometimes followed incorrect suggestions because they expected the suggested direction to be correct. Balanced personalization was more effective than the other personalization strategies at helping participants identify errant decision suggestions. The data were analyzed with statistical tests including ANOVA, Friedman's test, and Wilcoxon signed-rank tests; balanced personalization was significantly more preferred than the other strategies.
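The analysis pipeline described above (an omnibus Friedman's test over the three related preference rankings, followed by pairwise Wilcoxon signed-rank comparisons) can be sketched in Python with SciPy. This is a minimal illustration, not the authors' actual analysis code, and the ranking data below are invented for demonstration.

```python
# Sketch: non-parametric analysis of modality preference rankings,
# as described in the text. Data are hypothetical, not from the study.
from scipy.stats import friedmanchisquare, wilcoxon

# Hypothetical preference ranks (1 = most preferred) from 8 participants
# for the three xAI modalities.
language = [1, 1, 2, 1, 1, 2, 1, 1]
features = [2, 3, 1, 2, 3, 1, 2, 2]
tree     = [3, 2, 3, 3, 2, 3, 3, 3]

# Omnibus test across the three related samples.
stat, p = friedmanchisquare(language, features, tree)
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# If the omnibus test is significant, follow up with pairwise
# Wilcoxon signed-rank tests between modalities.
for name, other in [("features", features), ("tree", tree)]:
    w, pw = wilcoxon(language, other)
    print(f"language vs {name}: W = {w:.1f}, p = {pw:.4f}")
```

In practice the pairwise p-values would also be corrected for multiple comparisons (e.g. Bonferroni), which is omitted here for brevity.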
The study also found that the number of participants needed for a significant effect was higher than initially anticipated, and further research is needed to confirm these findings.