Retrieval-Augmented Generation with Knowledge Graphs for Customer Service Question Answering

July 14–18, 2024 | Zhentao Xu, Mark Jerome Cruz, Matthew Guevara, Tie Wang, Manasi Deshpande, Xiaofeng Wang, Zheng Li
The paper introduces a novel method for customer service question answering that integrates retrieval-augmented generation (RAG) with a knowledge graph (KG). Traditional RAG methods treat historical issue tickets as plain text, neglecting their internal structure and inter-issue relations, which limits performance. The proposed method constructs a KG from historical issues, preserving intra-issue structure and inter-issue relations. During question answering, the system parses the consumer query, retrieves relevant sub-graphs from the KG, and generates an answer. This approach improves retrieval accuracy and answer quality by maintaining logical coherence and mitigating the effects of text segmentation. Empirical evaluations on benchmark datasets show that the method outperforms the baseline by 77.6% in Mean Reciprocal Rank (MRR) and by 0.32 in BLEU score. The system has been deployed within LinkedIn's customer service team, reducing the median resolution time per issue by 28.6%. Future work will focus on enhancing system adaptability, enabling dynamic updates to the KG, and expanding the system's applicability beyond customer service.
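To make the described pipeline concrete, here is a minimal sketch of the construct-graph, retrieve-subgraph, generate-answer flow. This is not the authors' implementation: it assumes networkx for the issue graph, a simple keyword-overlap query parser, and a stubbed generate_answer() standing in for the LLM generation step; all function and field names are hypothetical.

```python
# Minimal sketch of the KG-based RAG flow described above (illustrative only).
import networkx as nx

def build_issue_graph(tickets):
    """Build a graph: each ticket contributes nodes for its sections
    (intra-issue structure) and edges to related tickets (inter-issue relations)."""
    g = nx.Graph()
    for t in tickets:
        tid = t["id"]
        g.add_node(tid, kind="ticket", title=t["title"])
        for section, text in t["sections"].items():
            sec_node = f"{tid}:{section}"
            g.add_node(sec_node, kind="section", text=text)
            g.add_edge(tid, sec_node)
        for linked in t.get("related", []):
            g.add_edge(tid, linked)
    return g

def retrieve_subgraph(g, query, hops=1):
    """Find ticket nodes whose title shares words with the query, then return
    the sub-graph within `hops` of those seed nodes."""
    terms = set(query.lower().split())
    seeds = [n for n, d in g.nodes(data=True)
             if d.get("kind") == "ticket"
             and terms & set(d["title"].lower().split())]
    nodes = set(seeds)
    for s in seeds:
        nodes |= set(nx.single_source_shortest_path_length(g, s, cutoff=hops))
    return g.subgraph(nodes)

def generate_answer(query, subgraph):
    """Placeholder for the LLM generation step: here we just concatenate the
    retrieved section texts as the answer context."""
    context = " ".join(d["text"] for _, d in subgraph.nodes(data=True)
                       if d.get("kind") == "section")
    return f"Q: {query}\nContext: {context}"

# Toy example: two historical tickets, one related to the other.
tickets = [
    {"id": "T1", "title": "login error on mobile",
     "sections": {"summary": "User cannot log in on the mobile app.",
                  "resolution": "Cleared cache and reset the session token."},
     "related": []},
    {"id": "T2", "title": "password reset loop",
     "sections": {"summary": "Reset email never arrives.",
                  "resolution": "Re-verified the email address in settings."},
     "related": ["T1"]},
]

g = build_issue_graph(tickets)
sub = retrieve_subgraph(g, "mobile login error")
print(generate_answer("mobile login error", sub))
```

In a real system, the keyword matcher would be replaced by an LLM-based query parser and embedding retrieval, and generate_answer would call an LLM with the retrieved sub-graph serialized as context, as the paper describes.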