RISE: Randomized Input Sampling for Explanation of Black-box Models

25 Sep 2018 | Vitali Petsiuk, Abir Das, Kate Saenko
The paper "RISE: Randomized Input Sampling for Explanation of Black-box Models" addresses the challenge of explaining the decision-making process of deep neural networks, particularly those that process images and output class probabilities. The authors propose a method called RISE (Randomized Input Sampling for Explanation) to generate importance maps that indicate the salience of each pixel in the input image for the model's prediction. Unlike white-box approaches that rely on gradients or internal network states, RISE operates on black-box models by empirically estimating pixel importance through random masking of the input image and observing the corresponding outputs. RISE works by sub-sampling the input image with random masks and recording the model's response to each masked image. The final importance map is then generated as a linear combination of these masks, with weights derived from the output probabilities of the model on the masked images. This approach allows RISE to provide explanations for black-box models without requiring access to internal network parameters. The paper compares RISE with state-of-the-art methods using both automatic deletion/insertion metrics and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets, including PASCAL VOC, MSCOCO, and ImageNet, show that RISE matches or exceeds the performance of other methods, including white-box approaches. The authors also propose two causal evaluation metrics—deletion and insertion—to assess the quality of the explanations. The deletion metric measures the drop in class probability as important pixels are gradually removed, while the insertion metric captures the increase in probability as pixels are added according to the generated importance map. These metrics are designed to be human-agnostic and more effective at evaluating causal explanations. Overall, RISE provides a general and robust approach to explaining the decisions of black-box models, making it a valuable tool for improving transparency and trust in AI systems.The paper "RISE: Randomized Input Sampling for Explanation of Black-box Models" addresses the challenge of explaining the decision-making process of deep neural networks, particularly those that process images and output class probabilities. The authors propose a method called RISE (Randomized Input Sampling for Explanation) to generate importance maps that indicate the salience of each pixel in the input image for the model's prediction. Unlike white-box approaches that rely on gradients or internal network states, RISE operates on black-box models by empirically estimating pixel importance through random masking of the input image and observing the corresponding outputs. RISE works by sub-sampling the input image with random masks and recording the model's response to each masked image. The final importance map is then generated as a linear combination of these masks, with weights derived from the output probabilities of the model on the masked images. This approach allows RISE to provide explanations for black-box models without requiring access to internal network parameters. The paper compares RISE with state-of-the-art methods using both automatic deletion/insertion metrics and a pointing metric based on human-annotated object segments. Extensive experiments on several benchmark datasets, including PASCAL VOC, MSCOCO, and ImageNet, show that RISE matches or exceeds the performance of other methods, including white-box approaches. 
The authors also propose two causal evaluation metrics, deletion and insertion, to assess the quality of the explanations. The deletion metric measures the drop in class probability as the pixels ranked most important are progressively removed from the image, while the insertion metric captures the rise in probability as pixels are progressively added according to the generated importance map. Because these metrics probe the model's output directly rather than agreement with human annotations, they are human-agnostic and better suited to evaluating causal explanations. Overall, RISE provides a general and robust approach to explaining the decisions of black-box models, making it a valuable tool for improving transparency and trust in AI systems.
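The sketch below computes the area under a deletion or insertion curve under the same assumed `model` interface as above. The black baseline (`baseline=0.0`) and the step size are simplifications: the paper removes pixels in fixed-size batches and starts insertion from a heavily blurred copy of the image rather than a constant.

```python
import numpy as np

def causal_metric(model, image, saliency, class_idx,
                  mode="deletion", step=1792, baseline=0.0):
    """AUC of class probability vs. fraction of pixels toggled."""
    H, W, _ = image.shape
    order = np.argsort(saliency.ravel())[::-1]  # most important pixels first
    if mode == "deletion":
        current = image.astype(np.float32).copy()   # start from the full image
        target = np.full_like(current, baseline)    # ... and erase pixels
    else:  # "insertion"
        current = np.full_like(image, baseline, dtype=np.float32)
        target = image.astype(np.float32)           # ... and reveal pixels
    probs = [model(current[None])[0, class_idx]]
    for start in range(0, H * W, step):
        ys, xs = np.unravel_index(order[start:start + step], (H, W))
        current[ys, xs] = target[ys, xs]
        probs.append(model(current[None])[0, class_idx])
    # Trapezoidal area under the curve, with x normalized to [0, 1].
    probs = np.asarray(probs)
    return float((probs[:-1] + probs[1:]).sum() / (2 * (len(probs) - 1)))
```

A good explanation yields a low deletion AUC (the probability collapses quickly) and a high insertion AUC (the probability recovers quickly).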