The article "Explainable Recommendation: A Survey and New Perspectives" by Yongfeng Zhang and Xu Chen provides a comprehensive review of explainable recommendation systems. Explainable recommendation aims to generate recommendations that are not only accurate but also accompanied by intuitive explanations for users or system designers, addressing the "why" behind the recommendations. The authors categorize recommendation problems into five aspects: when, where, who, what, and why, with explainable recommendation focusing on the "why" aspect.
The survey covers the historical development of explainable recommendation, from early content-based and collaborative filtering methods to more recent model-based approaches. It introduces a two-dimensional taxonomy for classifying existing explainable recommendation methods: one dimension is the information source (or display style) of the explanations, and the other is the algorithmic mechanism used to generate them.
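The two-dimensional taxonomy can be pictured as a grid in which each method occupies one (display style, model) cell. The following minimal sketch illustrates that idea; the category labels and the example method entry are paraphrased assumptions for illustration, not the survey's exact terminology.

```python
# Illustrative sketch of a two-dimensional taxonomy: one axis is the
# explanation display style, the other the algorithmic mechanism.
# Category names below are paraphrased, not the survey's exact labels.
from itertools import product

display_styles = [
    "relevant user/item",
    "feature-based",
    "textual sentence",
    "visual",
    "social",
]

mechanisms = [
    "matrix factorization",
    "topic modeling",
    "graph-based",
    "deep learning",
    "post-hoc",
]

# Each method is classified into one (style, mechanism) cell of the grid.
taxonomy_grid = {cell: [] for cell in product(display_styles, mechanisms)}

def classify(method_name, style, mechanism):
    """Place a method into its taxonomy cell."""
    taxonomy_grid[(style, mechanism)].append(method_name)

# Hypothetical example entry:
classify("EFM (explicit factor model)", "feature-based", "matrix factorization")
```

Representing the taxonomy as a grid makes its purpose concrete: it exposes both well-populated cells and sparsely explored combinations of display style and mechanism.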
The article discusses various types of explanations, including relevant user or item explanations, feature-based explanations, opinion-based explanations, sentence explanations, visual explanations, and social explanations. Each type is illustrated with examples and discussed in terms of its effectiveness and impact on user trust and satisfaction.
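To make one of these types concrete, here is a minimal sketch of a feature-based explanation, assuming per-item feature scores are already available (e.g., from a factor model over review features). The function name, template wording, and scores are hypothetical, not taken from the survey.

```python
# Hedged sketch: turn per-feature scores into a template explanation.
# The scores below are made up for illustration.

def feature_explanation(item_name, feature_scores, top_k=2):
    """Return a template explanation citing the item's strongest features."""
    top = sorted(feature_scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
    features = " and ".join(f for f, _ in top)
    return f"You might be interested in {item_name}, which performs well on {features}."

scores = {"battery life": 0.92, "screen": 0.71, "price": 0.40}
print(feature_explanation("this phone", scores))
# prints "You might be interested in this phone, which performs well on battery life and screen."
```

The template style mirrors the feature-level explanations shown to shoppers on e-commerce sites, where the cited features are chosen per user and per item.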
The evaluation of explainable recommendation methods is also covered, including user studies, online and offline evaluations, and qualitative case studies. The survey further explores the application of explainable recommendation in different contexts, such as e-commerce, point-of-interest, social, and multimedia recommendations.
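One common offline evaluation style treats the features a user actually mentions in their review as ground truth and scores explained features against them. The sketch below, with made-up data, shows precision and recall computed this way; it is an illustrative assumption about the setup, not a specific protocol from the survey.

```python
# Hedged sketch of offline explanation evaluation: precision/recall of
# explained features versus features the user mentioned in a review.
# All data below is fabricated for illustration.

def feature_pr(explained, mentioned):
    """Precision and recall of explained features against review-mentioned ones."""
    explained, mentioned = set(explained), set(mentioned)
    tp = len(explained & mentioned)  # features both explained and mentioned
    precision = tp / len(explained) if explained else 0.0
    recall = tp / len(mentioned) if mentioned else 0.0
    return precision, recall

p, r = feature_pr(["battery", "screen"], ["battery", "price", "camera"])
# p == 0.5 (1 of 2 explained features was mentioned)
# r ≈ 0.333 (1 of 3 mentioned features was explained)
```

Such offline scores complement, rather than replace, user studies and online A/B tests, since they cannot capture perceived persuasiveness or trust.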
Finally, the authors discuss open directions and new perspectives for future research, emphasizing the importance of improving transparency, persuasiveness, effectiveness, trustworthiness, and satisfaction in recommendation systems. They also highlight the need for better evaluation methods and user behavior analysis to enhance the explainability of AI systems.