March 2024 | GIOVANNI CIATTO, FEDERICO SABBATINI, ANDREA AGIOLLO, MATTEO MAGNINI, ANDREA OMICINI
This article presents a systematic literature review (SLR) of symbolic knowledge extraction (SKE) and symbolic knowledge injection (SKI) for sub-symbolic predictors. The authors address the interpretability/performance trade-off in machine learning (ML) by promoting SKE and SKI as complementary activities: SKE extracts symbolic knowledge from sub-symbolic predictors, while SKI injects symbolic knowledge into them.
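To make the SKE side concrete, here is a minimal sketch of one common family of extraction methods (pedagogical, or "translucent-box-agnostic", extraction): a black-box predictor is trained, its own predictions relabel the data, and an interpretable decision tree is fitted to mimic it, so the tree's branches can be read as if-then rules. This is an illustrative example using scikit-learn, not a specific method from the surveyed literature; the dataset and model choices are assumptions.

```python
# Pedagogical SKE sketch: approximate a black-box predictor with a
# decision tree, then read the tree's branches as symbolic rules.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# 1. Train an opaque (sub-symbolic) predictor.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# 2. Relabel the data with the black box's own predictions.
y_surrogate = black_box.predict(X)

# 3. Fit an interpretable surrogate to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_surrogate)

# 4. The tree's paths are human-readable if-then rules.
rules = export_text(surrogate, feature_names=load_iris().feature_names)
print(rules)

# Fidelity: how often the surrogate agrees with the black box.
fidelity = (surrogate.predict(X) == y_surrogate).mean()
print(f"fidelity = {fidelity:.2f}")
```

Fidelity (agreement with the black box, rather than accuracy on the true labels) is the usual quality measure for such surrogates, since the extracted rules are meant to explain the predictor, not the data.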
The study analyzes 132 SKE methods and 117 SKI methods, categorizing them by purpose, mode of operation, input/output data, and supported predictor types. The authors propose general meta-models and taxonomies for both SKE and SKI, emphasizing the role of symbolic knowledge representations in making ML systems interpretable, and the broader need for explainable AI (XAI) to foster transparency and trust. The review also discusses challenges and opportunities in SKE/SKI, including the trade-off between predictive performance and interpretability. The authors conclude that SKE/SKI methods can bridge the gap between sub-symbolic ML and symbolic AI, enabling more interpretable and controllable AI systems. The review surveys existing methods, their available implementations, and potential applications across domains, advocating a balanced approach that weighs both performance and interpretability.
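On the SKI side, one widely used strategy is to turn a symbolic rule into a differentiable penalty added to the training loss, so the predictor is steered toward knowledge-consistent outputs. The sketch below is a hypothetical, self-contained illustration of that idea: the rule, function names, and toy numbers are all assumptions, not taken from any surveyed method.

```python
# Loss-based SKI sketch: a symbolic rule becomes a penalty term
# added to an ordinary cross-entropy loss.
import numpy as np

def rule_penalty(features, probs):
    """Penalty for violating the (hypothetical) rule:
    IF feature_0 > 0.8 THEN the class must be 1.
    The penalty is the probability mass the predictor assigns
    to the wrong class wherever the rule's premise holds."""
    fires = features[:, 0] > 0.8           # samples where the premise holds
    violation = 1.0 - probs[fires, 1]      # mass not on the mandated class
    return violation.sum()

def total_loss(features, probs, targets, lam=0.5):
    # Standard cross-entropy plus the weighted knowledge penalty.
    ce = -np.log(probs[np.arange(len(targets)), targets] + 1e-12).mean()
    return ce + lam * rule_penalty(features, probs)

# Toy batch: two samples, only the second triggers the rule.
features = np.array([[0.2, 0.5], [0.9, 0.1]])
probs = np.array([[0.7, 0.3], [0.4, 0.6]])  # predicted class probabilities
targets = np.array([0, 1])
print(total_loss(features, probs, targets))
```

Because the penalty is differentiable in the predicted probabilities, it can be minimized by gradient descent alongside the data-fitting loss, which is what lets symbolic knowledge shape a sub-symbolic predictor during training.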