This paper provides a comprehensive overview of Shapley value-based attribution methods in explainable artificial intelligence (XAI). It begins by outlining the foundational theory of the Shapley value, rooted in cooperative game theory, and discusses its desirable axiomatic properties (efficiency, symmetry, linearity, and the null-player property). The paper proposes a three-dimensional classification framework for existing Shapley value-based feature attribution methods, organized by Shapley value type, feature replacement method, and approximation method. This framework aids comprehension and helps practitioners identify the algorithms relevant to their setting. The practical application of Shapley values at different stages of machine learning (ML) model development, including pre-modeling, modeling, and post-modeling, is emphasized. The paper also summarizes the limitations of the Shapley value and discusses directions for future research, such as model diagnosis and optimization. Despite these limitations, the Shapley value remains a theoretically well-grounded method for feature attribution, offering significant potential for improving model interpretability and performance.
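As standard background (the definition below is the classical one from cooperative game theory, stated here for reference rather than quoted from the paper), the Shapley value of a player $i$ in a game $(N, v)$ is the weighted average of $i$'s marginal contributions over all coalitions $S$ that exclude $i$:

\[
\phi_i(v) \;=\; \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!}\,\bigl( v(S \cup \{i\}) - v(S) \bigr)
\]

In the feature-attribution setting, $N$ is the set of input features and $v(S)$ is a value function, for example the model's expected output when only the features in $S$ are revealed; the choice of $v$ and of how absent features are replaced is precisely what distinguishes the methods the paper classifies.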