The paper introduces the Incremental Residual Concept Bottleneck Model (Res-CBM) to enhance the interpretability of deep neural networks (DNNs) by mapping visual representations to a set of interpretable concepts. Res-CBM addresses three challenges in Concept Bottleneck Models (CBMs): concept completeness, purity, and precision. It employs a set of optimizable vectors to complete missing concepts, together with an incremental concept discovery module that converts these vectors into potential concepts drawn from a candidate concept bank. As a post-hoc processing method, the approach can be applied to any user-defined concept bank to improve CBM performance. The paper also proposes the Concept Utilization Efficiency (CUE) metric to measure the descriptive efficiency of CBMs.

Experiments show that Res-CBM outperforms state-of-the-art methods in both accuracy and efficiency, achieving performance comparable to black-box models across multiple datasets. The paper also discusses limitations and future work, including the need for more efficient concept-similarity calculation and parallel concept discovery techniques.
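The core mechanism described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the embedding dimension, the number of residual vectors, and the use of cosine similarity to ground residual vectors in candidate concepts are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(x):
    """L2-normalize vectors along the last axis so dot products act as cosine similarity."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Hypothetical sizes: 512-d embeddings, 50 user-defined concepts,
# 4 optimizable residual vectors, 200 candidate concepts.
d, n_known, n_res, n_cand = 512, 50, 4, 200

image_emb = normalize(rng.standard_normal(d))                # visual representation
concept_bank = normalize(rng.standard_normal((n_known, d)))  # user-defined concept bank
residuals = normalize(rng.standard_normal((n_res, d)))       # optimizable vectors (trained in practice)
candidates = normalize(rng.standard_normal((n_cand, d)))     # candidate concept bank

# Bottleneck activations: similarities to known concepts, extended by
# similarities to the residual vectors that stand in for missing concepts.
known_scores = concept_bank @ image_emb
residual_scores = residuals @ image_emb
activations = np.concatenate([known_scores, residual_scores])  # fed to the label predictor

# Incremental concept discovery: map each trained residual vector to its
# nearest candidate concept by cosine similarity, turning an opaque vector
# into a named, interpretable concept.
discovered = (residuals @ candidates.T).argmax(axis=1)

print(activations.shape)  # (54,)
print(discovered.shape)   # (4,)
```

In the actual model the residual vectors would be optimized jointly with the classifier before the discovery step; here they are random only to keep the sketch self-contained.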