This paper proposes a novel explainable framework for path-based recommendation, called Counterfactual Path-based Explainable Recommendation (CPER), which replaces traditional attention mechanisms with counterfactual reasoning to improve the explainability of recommendation models. The core idea is to learn explainable path weights by perturbing paths and observing the impact on recommendation scores. Two counterfactual reasoning algorithms are designed: one operating on path representations and the other on path topological structures. The first learns perturbation factors on path embeddings, seeking the smallest perturbation that produces the largest change in the recommendation score. The second uses reinforcement learning to search over manipulations of the path structure for those with the greatest effect. In addition, a suite of explainability evaluation methods, combining qualitative and quantitative criteria, is proposed to assess the quality of explainable paths. Experiments on four real-world datasets demonstrate the framework's effectiveness and reliability, and the results show that CPER outperforms traditional attention-based methods in stability, effectiveness, and confidence. The paper also discusses the limitations of attention-based explanations and highlights the advantage of counterfactual reasoning in capturing informative paths for explanation. Overall, the proposed framework offers a more reliable and interpretable way to explain recommendation results, making it suitable for applications where explainability is crucial.
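To make the first algorithm concrete, below is a minimal sketch of one way the representation-level counterfactual objective could be instantiated in PyTorch. All names here (recommend_score, the toy mean-pool scorer, the trade-off weight lam) are illustrative assumptions rather than the paper's actual model; the sketch only shows the general pattern of learning a small embedding perturbation that maximally changes the recommendation score, then reading per-path perturbation magnitudes as explanation weights.

```python
import torch

# Hypothetical stand-in for the recommendation model: maps a set of
# path embeddings to a scalar recommendation score. The real CPER
# scorer would be the trained path-based recommender.
def recommend_score(path_embeddings: torch.Tensor) -> torch.Tensor:
    pooled = path_embeddings.mean(dim=0)  # toy aggregation over paths
    return pooled.sum()                   # toy scoring head

paths = torch.randn(5, 16)                      # 5 paths, 16-dim embeddings (toy data)
delta = torch.zeros_like(paths, requires_grad=True)  # learnable perturbation factors
opt = torch.optim.Adam([delta], lr=0.05)

base_score = recommend_score(paths).detach()
lam = 0.1  # assumed trade-off: perturbation size vs. impact on the score

for _ in range(200):
    opt.zero_grad()
    perturbed_score = recommend_score(paths + delta)
    # Counterfactual objective: drive the score away from its original
    # value (maximize impact) while keeping the perturbation small
    # (L2 penalty), mirroring the min-perturbation / max-impact idea.
    loss = (perturbed_score - base_score) + lam * delta.norm(p=2)
    loss.backward()
    opt.step()

# Per-path perturbation magnitudes serve as explanation weights:
# paths whose embeddings must change most to alter the recommendation
# are the most informative for explaining it.
weights = delta.detach().norm(dim=1)
print(weights)
```

Under this reading, the weights play the role that attention scores play in attention-based explainers, but they are grounded in the score's counterfactual sensitivity to each path rather than in learned attention values.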