Evaluating the visualization of what a Deep Neural Network has learned

21 Sep 2015 | Wojciech Samek†, Member, IEEE, Alexander Binder†, Grégoire Montavon, Sebastian Bach, and Klaus-Robert Müller, Member, IEEE
This paper evaluates the quality of heatmaps generated by three methods for visualizing what a Deep Neural Network (DNN) has learned: sensitivity analysis, deconvolution, and Layer-wise Relevance Propagation (LRP). The goal is to provide an objective measure for assessing the quality of these heatmaps, which are used to explain the decisions made by DNNs. The authors compare the three methods on three large datasets: SUN397, ILSVRC2012, and MIT Places, and find that LRP yields more accurate and interpretable explanations of DNN decisions than the other two methods.

LRP is based on a conservation principle: the relevance attributed to each neuron is redistributed, without loss, to the layer below, so that total relevance is preserved from the output down to the input pixels and the classifier's decision can be interpreted faithfully. The authors also propose a general framework for evaluating heatmaps via region perturbation, in which information is progressively removed from the image in the order prescribed by the heatmap and the impact on the classifier's decision is measured. Under this measure, LRP heatmaps are of higher quality and identify the relevant regions of an image more effectively. The paper also discusses the use of heatmaps for the unsupervised assessment of neural network performance and highlights the importance of objective evaluation methods for understanding DNN decisions.
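The conservation principle behind LRP can be sketched for a single linear layer using the basic epsilon-stabilized z-rule. This is a minimal illustration, not the paper's full method (which covers further redistribution rules and layer types); the function name `lrp_linear` and its signature are assumptions for illustration:

```python
import numpy as np

def lrp_linear(x, w, relevance_out, eps=1e-9):
    """One LRP backward step through a linear layer y = x @ w.

    Each output neuron j redistributes its relevance R_j to the inputs
    in proportion to their contributions z_ij = x_i * w_ij, so that the
    total relevance is conserved (up to the stabilizer eps).
    """
    z = x[:, None] * w                                   # contributions z_ij
    z_sum = z.sum(axis=0)                                # total input z_j of each output neuron
    z_sum = z_sum + eps * np.where(z_sum >= 0, 1.0, -1.0)  # avoid division by ~0
    return (z / z_sum) @ relevance_out                   # R_i = sum_j (z_ij / z_j) * R_j
```

The key property to check is conservation: the relevance assigned to the inputs sums (up to `eps`) to the relevance that entered the layer.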
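The region-perturbation evaluation can be sketched as follows. This is an illustration under stated assumptions, not the paper's exact protocol: the function name `region_perturbation`, the square-patch perturbation, the uniform-noise replacement, and the averaged score-drop summary are all assumptions. The idea is that a good heatmap orders regions so that perturbing them in most-relevant-first order makes the classifier score drop quickly:

```python
import numpy as np

def region_perturbation(image, heatmap, classify, num_steps=10, patch=3, rng=None):
    """Most-relevant-first region perturbation of a 2D grayscale image.

    At each step, the patch around the most relevant remaining location
    is replaced with uniform noise and the classifier score is recorded.
    A larger average score drop indicates a better heatmap.
    """
    rng = np.random.default_rng(rng)
    img = image.copy()
    relevance = heatmap.astype(float).copy()
    scores = [classify(img)]                 # score on the unperturbed image
    half = patch // 2
    for _ in range(num_steps):
        # locate the most relevant remaining pixel
        i, j = np.unravel_index(np.argmax(relevance), relevance.shape)
        top, bot = max(i - half, 0), min(i + half + 1, img.shape[0])
        left, right = max(j - half, 0), min(j + half + 1, img.shape[1])
        # replace the region with noise and mark it as already perturbed
        img[top:bot, left:right] = rng.uniform(size=(bot - top, right - left))
        relevance[top:bot, left:right] = -np.inf
        scores.append(classify(img))
    # average drop of the classifier score over all perturbation steps
    avg_drop = float(np.mean(scores[0] - np.asarray(scores[1:])))
    return scores, avg_drop
```

A heatmap that concentrates relevance on the pixels the classifier actually uses should produce a larger average score drop than one that points elsewhere.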