This Looks Like That: Deep Learning for Interpretable Image Recognition

28 Dec 2019 | Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin
This paper introduces a deep network architecture, the *prototypical part network* (ProtoPNet), designed for interpretable image recognition. ProtoPNet reasons about an image by finding prototypical parts within it and combining the evidence from those prototypes to make a classification, which mirrors the way humans explain their own classification decisions and makes the model's reasoning transparent. The network trains using only image-level labels, with no part annotations. Experiments on the CUB-200-2011 and Stanford Cars datasets show that ProtoPNet achieves accuracy comparable to analogous non-interpretable models while providing a built-in, case-based explanation for each prediction. The paper also covers the training algorithm, prototype visualization, and comparisons with other models, highlighting the interpretability that is unique to ProtoPNet.
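To make the "finding prototypical parts and combining evidence" step concrete, below is a minimal sketch of a prototype layer in the spirit of the paper, written in PyTorch style. The class and parameter names (`PrototypeLayer`, `protos_per_class`, and so on) are illustrative rather than the authors' code, and the default sizes (200 classes, 10 prototypes per class, 1×1 prototypes with 128 channels) are assumptions loosely following the paper's CUB-200-2011 setup.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeLayer(nn.Module):
    """Sketch of a ProtoPNet-style prototype layer: each class owns a set of
    learned prototype vectors, and an image is scored by how closely patches
    of its convolutional feature map match those prototypes."""

    def __init__(self, num_classes=200, protos_per_class=10, channels=128, eps=1e-4):
        super().__init__()
        self.num_protos = num_classes * protos_per_class
        # Prototypes are treated here as 1x1 patches in feature space (an assumption).
        self.prototypes = nn.Parameter(torch.rand(self.num_protos, channels, 1, 1))
        self.eps = eps
        # Final layer combines per-prototype evidence into class logits.
        self.last_layer = nn.Linear(self.num_protos, num_classes, bias=False)

    def forward(self, features):  # features: (B, C, H, W) from a CNN backbone
        # Squared L2 distance between every prototype and every spatial patch,
        # expanded as ||z||^2 - 2 z.p + ||p||^2 using convolutions.
        z_sq = F.conv2d(features ** 2, torch.ones_like(self.prototypes))
        zp = F.conv2d(features, self.prototypes)
        p_sq = (self.prototypes ** 2).sum(dim=(1, 2, 3)).view(1, -1, 1, 1)
        dist = F.relu(z_sq - 2 * zp + p_sq)          # (B, P, H, W)
        # Distance -> similarity: a small distance yields a large activation.
        sim = torch.log((dist + 1) / (dist + self.eps))
        # Global max pool asks "does this prototype appear anywhere in the image?"
        scores = F.max_pool2d(sim, kernel_size=sim.shape[2:]).flatten(1)
        return self.last_layer(scores)               # class logits
```

The global max pooling is the key design choice for interpretability: each prototype only needs to match *somewhere* in the image, so every class logit decomposes into a sum of "this patch looks like that prototype" scores that can be visualized individually.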