Received: 12 November 2012 / Revised: 2 August 2013 / Accepted: 17 August 2013 / Published online: 30 August 2013 | Erik Štrumbelj · Igor Kononenko
The paper by Erik Štrumbelj and Igor Kononenko introduces a sensitivity analysis-based method for explaining prediction models, applicable to any classification or regression model. This method differs from existing general methods by perturbing all subsets of input features, thereby accounting for interactions and redundancies between features. When applied to additive models, it is equivalent to commonly used model-specific methods. The authors illustrate the method's effectiveness through examples from artificial and real-world datasets and an empirical analysis of running times. A controlled experiment with 122 participants suggests that the method's explanations enhance understanding of the model. The paper emphasizes the importance of model interpretability in decision support systems, particularly in risk-sensitive domains such as finance and medicine, where trust in the model is crucial. The key components of the explanation method are the feature contributions, which are calculated as each feature's situational importance, providing a clear overview of how each feature influences the prediction.
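To make the subset-perturbation idea concrete, below is a minimal, illustrative sketch (not the authors' exact algorithm) of a Monte Carlo estimate of per-feature contributions: for each feature, random feature orderings and random reference instances decide which other feature values are "known", and the contribution is the average change in the model's output when that feature's true value is revealed. The function and parameter names (`feature_contributions`, `predict`, `X_background`, `n_samples`) are assumptions made for this example.

```python
import numpy as np

def feature_contributions(predict, x, X_background, n_samples=1000, seed=None):
    """Monte Carlo sketch of per-feature contributions to predict(x).

    For each sampled feature ordering and random background instance z,
    features preceding feature i in the ordering keep their values from x,
    the rest are taken from z. The contribution of feature i is the average
    change in the prediction when x[i] is revealed in that context.
    """
    rng = np.random.default_rng(seed)
    n_features = x.shape[0]
    contributions = np.zeros(n_features)

    for _ in range(n_samples):
        z = X_background[rng.integers(len(X_background))]  # random reference instance
        order = rng.permutation(n_features)                # random feature ordering
        for pos, i in enumerate(order):
            known = order[:pos]                # features already "revealed" before i
            x_without = z.copy()
            x_without[known] = x[known]        # known features take their values from x
            x_with = x_without.copy()
            x_with[i] = x[i]                   # feature i is revealed
            contributions[i] += (predict(x_with.reshape(1, -1))[0]
                                 - predict(x_without.reshape(1, -1))[0])

    return contributions / n_samples
```

With a scikit-learn-style regressor, this sketch could be called as, for example, `feature_contributions(model.predict, X_test[0], X_train)`; averaging over random orderings is what allows interactions and redundancies between features to be shared among their contributions.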