15 Jan 2024 | LOGAN CUMMINS, ALEX SOMMERS, SOMAYEH BAKHTIARI RAMEZANI, SUDIP MITTAL, JOSEPH JABOUR, MARIA SEALE, SHAHRAM RAHIMI
This survey explores explainable predictive maintenance (XPM), which integrates explainable AI (XAI) and interpretable machine learning (iML) to enhance trust in predictive maintenance (PdM) systems. PdM aims to predict system failures and optimize maintenance schedules using AI and machine learning. As PdM is applied to safety- and mission-critical systems, transparency and interpretability in the underlying AI models become essential. XAI and iML provide methods to explain AI decisions, thereby increasing user trust and supporting the reliability of PdM systems.
The survey follows the PRISMA 2020 guidelines to systematically review current XAI and iML methods applied to PdM, categorizing them into model-agnostic, model-specific, and hybrid approaches. Model-agnostic methods, such as SHAP and LIME, treat the model as a black box and explain its predictions through feature attributions; model-specific methods, such as CAM and Grad-CAM, exploit the model's internal architecture to generate explanations. The survey also discusses open challenges in PdM, including the need for explainability in complex models and the importance of trust in AI systems.
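As a rough illustration of the model-agnostic idea, the sketch below applies SHAP's KernelExplainer to a generic failure classifier. The classifier, the synthetic data, and the feature names (`vibration_rms`, `bearing_temp`, etc.) are illustrative assumptions, not examples from the survey; KernelExplainer needs only a prediction function and a background sample, which is what makes it model-agnostic.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical condition-monitoring features; names are illustrative only.
feature_names = ["vibration_rms", "bearing_temp", "oil_pressure", "rpm"]
X = rng.normal(size=(300, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # synthetic "failure" label

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# KernelExplainer is model-agnostic: it needs only a prediction function
# and a background sample, so the same code works for any classifier.
predict_failure = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(predict_failure, X[:50])
shap_values = explainer.shap_values(X[:1])       # shape: (1, n_features)

# Rank features by signed contribution to the predicted failure probability.
for name, value in sorted(zip(feature_names, shap_values[0]),
                          key=lambda t: -abs(t[1])):
    print(f"{name}: {value:+.3f}")
```

Because only `predict_failure` is passed to the explainer, swapping the random forest for any other PdM model would leave the explanation code unchanged.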
The survey highlights the growing interest in XAI for PdM, since it makes complex models interpretable and improves user confidence. It examines XAI across the main PdM tasks: anomaly detection, fault diagnosis, and prognosis. Key challenges identified include the need for explanation methods that work with black-box models and the importance of interpretability in maintenance decision-making. Future research directions include developing more effective explanation methods and integrating XAI into industrial applications.
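For model-specific explanation in a task like fault diagnosis from vibration spectrograms, Grad-CAM produces a heatmap of the input regions that drive a CNN's prediction. The sketch below is a minimal, self-contained PyTorch implementation of the standard Grad-CAM recipe; the tiny network and the random "spectrogram" are stand-ins, not a model from the surveyed papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFaultCNN(nn.Module):
    """Toy CNN standing in for a fault-diagnosis model on spectrograms."""
    def __init__(self, n_classes=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        feats = self.conv(x)              # (B, 16, H, W) feature maps
        pooled = feats.mean(dim=(2, 3))   # global average pooling
        return self.head(pooled), feats

model = TinyFaultCNN().eval()
x = torch.randn(1, 1, 32, 32)             # fake 32x32 spectrogram

logits, feats = model(x)
feats.retain_grad()                        # keep grads on the feature maps
target = logits.argmax(dim=1).item()       # explain the predicted class
logits[0, target].backward()

# Grad-CAM: weight each map by the mean gradient of the target logit,
# sum across channels, and keep only positive (class-supporting) evidence.
weights = feats.grad.mean(dim=(2, 3), keepdim=True)       # (1, 16, 1, 1)
cam = F.relu((weights * feats).sum(dim=1, keepdim=True))  # (1, 1, H, W)
cam = F.interpolate(cam, size=x.shape[-2:], mode="bilinear",
                    align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]
print(cam.shape)  # heatmap aligned with the input spectrogram
```

In a diagnosis setting, overlaying this heatmap on the spectrogram shows a technician which time-frequency regions the network treated as evidence for the predicted fault class.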
The survey concludes that XAI and iML are essential for the development of trustworthy and reliable PdM systems. By providing explanations for AI decisions, these methods enhance the transparency and interpretability of PdM systems, making them more effective in critical applications. The survey also emphasizes the importance of continued research in XAI and iML to address the challenges and opportunities in PdM.