2017 | Daniel Smilkov, Nikhil Thorat, Been Kim, Fernanda Viégas, Martin Wattenberg
The paper introduces SMOOTHGRAD, a method to enhance the visual clarity of gradient-based sensitivity maps in deep neural networks. These maps are used to identify the pixels that most strongly influence an image classifier's final decision. The core idea of SMOOTHGRAD is to add noise to the input image and average the resulting sensitivity maps, which reduces visual noise and sharpens the maps. The paper also discusses the de-noising effect of training with noise and evaluates SMOOTHGRAD against other gradient-based methods, demonstrating that it produces more coherent and discriminative visualizations. The authors provide empirical evidence and theoretical arguments to support their findings and suggest future research directions, including the development of better metrics for comparing sensitivity maps and exploring the geometry of class score functions.
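The averaging step is straightforward to implement: compute the gradient of the class score with respect to several noisy copies of the input and take the mean. Below is a minimal PyTorch sketch of that idea, not the authors' reference code; the function name `smoothgrad` and the defaults for `n_samples` and `noise_sigma` are illustrative choices, and `model` is assumed to be any image classifier returning class scores of shape `(batch, num_classes)`.

```python
import torch

def smoothgrad(model, x, target_class, n_samples=25, noise_sigma=0.15):
    """Average gradient sensitivity maps over noisy copies of the input.

    x: input image tensor of shape (1, C, H, W).
    noise_sigma: Gaussian noise std as a fraction of the input's value range
                 (illustrative default; the paper tunes this per model).
    """
    sigma = noise_sigma * (x.max() - x.min())   # scale noise to the input range
    grad_sum = torch.zeros_like(x)
    for _ in range(n_samples):
        # Perturb the input with Gaussian noise and track gradients on the copy.
        noisy = (x + sigma * torch.randn_like(x)).requires_grad_(True)
        score = model(noisy)[0, target_class]   # class score for the target class
        score.backward()
        grad_sum += noisy.grad                  # accumulate the sensitivity map
    return grad_sum / n_samples                 # averaged (smoothed) map
```

In practice the averaged map is usually visualized by taking the absolute value (or channel-wise maximum) and normalizing it to an image; more noise samples give smoother maps at proportionally higher cost.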