The paper "Attention Is Not the Only Choice: Counterfactual Reasoning for Path-Based Explainable Recommendation" by Yicong Li et al. addresses the issue of explainability in recommendation models, particularly in graph-based recommendations. Traditional attention mechanisms, while effective for model accuracy, often fail to provide faithful explanations: their weights are unstable across runs and tend to overweight common paths. To overcome these limitations, the authors propose a novel framework called Counterfactual Path-based Explainable Recommendation (CPER) that leverages counterfactual reasoning to learn explainable weights for paths.
CPER introduces two counterfactual reasoning algorithms: one based on path representation and another on path topological structure. The path representation method learns perturbation factors on path embeddings, while the path topological structure method uses reinforcement learning to manipulate paths. The authors also propose a comprehensive evaluation framework that includes both qualitative and quantitative methods to assess the explainability of the learned paths.
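The path-representation idea can be illustrated with a minimal sketch (assumed names and a toy linear scorer, not the paper's actual implementation): learn the smallest perturbation to a path's embedding that flips the recommendation score, and treat paths that flip under smaller perturbations as more influential.

```python
import numpy as np

def score(path_emb, w):
    """Toy linear recommendation scorer over a single path embedding."""
    return float(path_emb @ w)

def counterfactual_weight(path_emb, w, lam=0.1, lr=0.05, steps=200):
    """Gradient-descent search for a small perturbation delta that flips
    the sign of the path's score. Returns 1/||delta|| as an importance
    proxy: a smaller required perturbation means a more influential path.
    This is an illustrative sketch, not CPER's actual algorithm."""
    s0 = np.sign(score(path_emb, w)) or 1.0  # direction we must flip from
    delta = np.zeros_like(path_emb)
    for _ in range(steps):
        # loss = relu(s0 * score(emb + delta)) + lam * ||delta||^2
        s = score(path_emb + delta, w)
        grad_score = s0 * w if s0 * s > 0 else np.zeros_like(w)
        delta -= lr * (grad_score + 2 * lam * delta)
    norm = np.linalg.norm(delta)
    return 1.0 / norm if norm > 0 else float("inf")

rng = np.random.default_rng(0)
w = rng.normal(size=8)                       # toy scorer parameters
paths = [rng.normal(size=8) for _ in range(3)]  # toy path embeddings
weights = [counterfactual_weight(p, w) for p in paths]
ranking = sorted(range(len(paths)), key=lambda i: -weights[i])
print("path importance ranking:", ranking)
```

The topological variant works analogously but perturbs the path itself (removing or swapping edges) rather than its embedding, which is why the authors turn to reinforcement learning for that discrete search.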
Extensive experiments on four real-world datasets demonstrate the effectiveness and reliability of CPER. The results show that CPER outperforms traditional attention-based explanations in terms of stability, effectiveness, and confidence. Additionally, CPER achieves superior recommendation performance compared to state-of-the-art baselines, despite its focus on explainability. The paper concludes with a detailed analysis of the effectiveness of different components of the CPER model and a comparison with related work in explainable recommendations and counterfactual reasoning.