24 Jun 2017 | Grégoire Montavon, Wojciech Samek, Klaus-Robert Müller
This paper provides an introduction to interpreting and understanding deep neural network (DNN) models and their predictions. It reviews several techniques for interpretation, including activation maximization (AM), sensitivity analysis, Taylor decomposition, and layer-wise relevance propagation (LRP). The paper emphasizes the importance of interpretability in applications where the model's reliance on correct features must be guaranteed, such as medicine and self-driving cars. It highlights the role of LRP in explaining DNN decisions and offers practical recommendations for implementing it. The paper covers the theoretical foundations, practical applications, and quantitative evaluation of these techniques, aiming to bridge the gap between theory and real-world data.
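To make the LRP idea concrete, below is a minimal NumPy sketch of one backward step of the epsilon-rule, one of the propagation rules the paper discusses, applied to a single dense layer. The function name, shapes, and epsilon value are illustrative choices of ours, not the authors' reference implementation.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """One backward LRP-epsilon step through a dense layer z = a @ W + b.

    a     : (d_in,)   activations entering the layer
    W     : (d_in, d_out) weight matrix
    b     : (d_out,)  bias
    R_out : (d_out,)  relevance arriving from the layer above
    Returns (d_in,) relevance redistributed onto the layer's inputs.
    """
    z = a @ W + b                                # forward pre-activations
    z = z + eps * np.where(z >= 0, 1.0, -1.0)    # epsilon stabilizer avoids division by ~0
    s = R_out / z                                # relevance per unit of pre-activation
    c = W @ s                                    # send it back along the weights
    return a * c                                 # neuron j gets a share proportional to a_j * w_jk

# Toy usage: relevance placed on one output unit is redistributed to the inputs.
rng = np.random.default_rng(0)
a = rng.random(4)                     # hypothetical post-ReLU activations
W = rng.standard_normal((4, 3))
b = np.zeros(3)
R_out = np.array([0.0, 1.0, 0.0])     # all relevance on the "predicted" unit
R_in = lrp_epsilon(a, W, b, R_out)
print(R_in, R_in.sum())               # sums are approximately conserved (bias and eps absorb a small share)
```

Applying such a rule layer by layer, from the output back to the input, yields the pixel-level relevance maps (heatmaps) that the paper uses to explain individual DNN decisions.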