Blockchain and explainable AI for enhanced decision making in cyber threat detection

29 January 2024 | Prabhat Kumar, Danish Javeed, Randhir Kumar, A.K.M Najmul Islam
This paper presents a blockchain-enabled explainable AI (XAI) framework to enhance decision-making in cyber threat detection within Smart Healthcare Systems (SHS). The authors address the challenges of data integrity, authenticity, and interpretability in AI-based threat hunting, which are common issues in SHS due to the complex infrastructure and sensitive data involved. The key contributions of the paper are:

1. **Blockchain mechanism**: A Clique Proof-of-Authority (C-PoA) consensus mechanism is implemented to ensure secure and immutable data exchange between multiple cloud vendors, preventing insider attacks and providing authenticated data for cyber threat detection.
2. **Deep learning-based threat detection model**: A novel deep learning model combining Parallel Stacked Long Short-Term Memory (PSLSTM) networks with a multi-head attention mechanism is developed to improve attack detection performance. This model shortens training time and enhances the detection of relevant features in healthcare datasets.
3. **Interpretability mechanism**: The SHapley Additive exPlanations (SHAP) method is used to interpret the decisions made by the deep learning model, providing transparency and improving the credibility of the AI-based threat detection system.

The paper also discusses the experimental setup, evaluation metrics, and results using two publicly available datasets: ToN-IoT and IoT Healthcare Security. The proposed framework demonstrates high accuracy, precision, recall, and F1-score, indicating its effectiveness in enhancing cyber threat detection in SHS. The theoretical contributions include a robust authentication scheme using blockchain, which ensures data integrity and trustworthiness, and an interpretable AI model that enhances the reliability and acceptance of decision-making processes.
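To make the PSLSTM-with-attention idea concrete, here is a minimal PyTorch sketch of a detector with two stacked LSTM branches running in parallel, fused through multi-head self-attention. The layer sizes, number of branches, fusion strategy, and pooling are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class PSLSTMAttention(nn.Module):
    """Illustrative parallel stacked LSTM + multi-head attention classifier.

    Two 2-layer (stacked) LSTM branches process the input sequence in
    parallel; their outputs are concatenated, passed through multi-head
    self-attention, mean-pooled over time, and classified.
    """
    def __init__(self, n_features, n_classes, hidden=64, heads=4):
        super().__init__()
        # Two parallel branches, each a stacked (2-layer) LSTM.
        self.branch_a = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.branch_b = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        # Multi-head self-attention over the fused branch outputs.
        self.attn = nn.MultiheadAttention(embed_dim=2 * hidden,
                                          num_heads=heads, batch_first=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                            # x: (batch, seq_len, n_features)
        out_a, _ = self.branch_a(x)                  # (batch, seq_len, hidden)
        out_b, _ = self.branch_b(x)                  # (batch, seq_len, hidden)
        fused = torch.cat([out_a, out_b], dim=-1)    # (batch, seq_len, 2*hidden)
        attended, _ = self.attn(fused, fused, fused) # self-attention
        pooled = attended.mean(dim=1)                # average over time steps
        return self.classifier(pooled)               # (batch, n_classes)

# Hypothetical traffic batch: 8 flows, 20 time steps, 10 features each.
model = PSLSTMAttention(n_features=10, n_classes=2)
logits = model(torch.randn(8, 20, 10))
```

In this sketch the two branches see the same input; parallel branches let gradients flow through shorter stacks than one deep LSTM would, which is one plausible reading of the paper's claim about shorter training time.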
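The reported evaluation metrics (accuracy, precision, recall, F1-score) follow the standard confusion-matrix definitions; a small self-contained Python sketch for the binary case (1 = attack), with made-up labels for illustration:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels (1 = attack)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return accuracy, precision, recall, f1

# Toy example: one missed attack (false negative), one false alarm.
acc, prec, rec, f1 = binary_metrics([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0])
```

For the multi-class attack categories in ToN-IoT, these would typically be computed per class and then macro- or weighted-averaged.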