Tuo Feng, Ruijie Quan, Xiaohan Wang, Wenguan Wang, Yi Yang
Interpretable3D is an ad-hoc interpretable classifier for 3D point clouds that provides faithful explanations of its own decision process. It is an intuitive, case-based classifier: users can see how a query is matched against typical past observations stored in prototype sets. Training alternates between two steps, Prototype Estimation and Prototype Optimization. In Prototype Estimation, each prototype is updated with the mean of the embeddings assigned to its sub-class; this mean has a clear statistical meaning, as it represents a class sub-center. In Prototype Optimization, the estimated prototypes are penalized or rewarded according to their prediction performance. In the last few epochs, prototypes are further replaced by their most similar observations, so each prototype corresponds to an actual training sample. New samples are then classified by their most similar prototype. Interpretable3D has been evaluated on four popular point cloud models (DGCNN, PointNet++, PointMLP, and PointNeXt) and achieves comparable or even better performance than softmax-based black-box models on 3D shape classification and part segmentation. The code is available at github.com/FengZicai/Interpretable3D.

The proposed algorithm is an interpretable nearest-neighbor prototype classifier: prototypes summarize past observations, and labels are assigned by the similarity between samples and prototypes. The classifier is fully online and trained end to end. Initialization uses the simplest standard strategy: S data samples per class are selected at random as initial prototypes. During each training iteration, the prototypes are first updated and then optimized. Prototype Estimation updates each prototype with the mean of the embeddings of its sub-class in a momentum manner, and each sample is assigned the label of its most similar prototype. Prototype Optimization then penalizes or rewards the prototypes according to the resulting predictions.
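The two training steps can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the momentum coefficient `mu`, the learning rate `lr`, cosine similarity as the similarity measure, and the LVQ-style pull/push form of the penalize/reward rule are all assumptions made for the sketch.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def estimate_prototypes(prototypes, embeddings, labels, mu=0.5):
    """Prototype Estimation (sketch): momentum update of each prototype with
    the mean embedding of the samples assigned to its sub-class.
    prototypes: (C, S, D) array, C classes x S sub-class prototypes.
    embeddings: (N, D); labels: (N,) class ids."""
    C, S, D = prototypes.shape
    z = l2_normalize(embeddings)
    for c in range(C):
        zc = z[labels == c]                       # embeddings of class c
        if len(zc) == 0:
            continue
        # assign each embedding to its most similar sub-class prototype
        assign = (zc @ prototypes[c].T).argmax(axis=1)
        for s in range(S):
            zs = zc[assign == s]
            if len(zs) == 0:
                continue
            mean = zs.mean(axis=0)                # sub-class center
            prototypes[c, s] = mu * prototypes[c, s] + (1 - mu) * mean
        prototypes[c] = l2_normalize(prototypes[c])
    return prototypes

def classify(prototypes, embeddings):
    """Assign each query the label of its most similar prototype."""
    C, S, D = prototypes.shape
    sims = l2_normalize(embeddings) @ prototypes.reshape(C * S, D).T
    return sims.argmax(axis=1) // S               # prototype index -> class id

def optimize_prototypes(prototypes, embedding, label, lr=0.1):
    """Prototype Optimization (LVQ-style sketch, an assumed simplification):
    if the winning prototype has the true class it is rewarded (pulled toward
    the sample); otherwise it is penalized (pushed away) and the best
    same-class prototype is rewarded instead."""
    C, S, D = prototypes.shape
    z = l2_normalize(embedding[None])[0]
    win_c, win_s = divmod((prototypes.reshape(C * S, D) @ z).argmax(), S)
    if win_c == label:
        prototypes[win_c, win_s] += lr * (z - prototypes[win_c, win_s])
    else:
        prototypes[win_c, win_s] -= lr * (z - prototypes[win_c, win_s])
        best_s = (prototypes[label] @ z).argmax()
        prototypes[label, best_s] += lr * (z - prototypes[label, best_s])
    prototypes[win_c] = l2_normalize(prototypes[win_c])
    prototypes[label] = l2_normalize(prototypes[label])
    return prototypes
```

In practice the estimation step runs over mini-batches of backbone features, and the reward/penalty is realized through the training loss; the sketch only makes the iteration structure concrete.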
In the final few epochs, the prototypes are updated with the features of the most representative training samples. Interpretable3D thus offers a level of interpretability that existing post-hoc 3D explanation models lack: users can see both how the system works and why a particular decision was made. The algorithm has been evaluated on three well-known public benchmarks, ModelNet40, ScanObjectNN, and ShapeNetPart, where it achieves comparable or even better performance than softmax-based black-box models while providing intrinsic interpretability for both classification and part segmentation results. This is a key advantage of the approach, since the AI decision-making process remains transparent and comprehensible. The classifier is inherently interpretable by design: it reveals what the representation means and how an embedded query retrieves typical past observations from the prototype sets. It rests on the intuitive idea of selecting the most similar prototype for each new sample, i.e., on prototype-based classification, where prototypes stand in for past observations. The approach has been shown to be effective in tasks of 3D shape classification and part segmentation.
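The final-epoch step above, replacing each prototype with its most representative training sample, can be sketched as follows. This is an illustrative numpy sketch, assuming normalized feature embeddings and cosine similarity; the function name is hypothetical.

```python
import numpy as np

def ground_prototypes(prototypes, embeddings):
    """Snap each prototype to the embedding of its most similar training
    sample, so every prototype corresponds to a real observation that can
    be shown to the user as a case-based explanation.
    prototypes: (C, S, D); embeddings: (N, D), assumed L2-normalized.
    Returns the grounded prototypes and the (C, S) sample indices."""
    C, S, D = prototypes.shape
    sims = prototypes.reshape(C * S, D) @ embeddings.T   # (C*S, N)
    nearest = sims.argmax(axis=1)                        # most similar sample
    return embeddings[nearest].reshape(C, S, D), nearest.reshape(C, S)
```

Grounding prototypes in actual observations is what makes the explanations case-based: at inference time, the nearest prototype is a concrete training shape rather than an abstract vector.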