9 Mar 2024 | Chen Li1, Haotian Zheng1, Yiping Sun1, Cangqing Wang1, Liqiang Yu2, Che Chang2, Xinyu Tian2 and Bo Liu*
This paper explores the application of reinforcement learning (RL) strategies, particularly the REINFORCE algorithm, to enhance multi-hop Knowledge Graph Reasoning (KG-R). The study addresses the challenges posed by the incompleteness of Knowledge Graphs (KGs), which often leads to incorrect inferential outcomes. By dividing the Unified Medical Language System (UMLS) dataset into rich and sparse subsets, the authors investigate the effectiveness of pre-trained BERT embeddings and Prompt Learning methodologies in refining the reward shaping process. This approach aims to improve the precision of multi-hop KG-R and set a new standard for future research in the field.
The study contributes a novel perspective to KG reasoning, offering methodological advancements that align with academic rigor and scholarly aspirations. The empirical results demonstrate significant improvements in the performance of RL agents, particularly when using prompt learning-based reward shaping modules pre-trained on densely populated knowledge graphs. The findings suggest a complex interplay between embedding techniques and their effectiveness in sparse knowledge graph environments, highlighting the need for further investigation into effective reward shaping mechanisms.
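To make the core mechanism concrete, the following is a minimal, self-contained sketch of a REINFORCE update with a shaped reward. It is an illustrative toy, not the paper's implementation: a single relation choice stands in for one step of a multi-hop rollout, and the `SIMILARITY` vector is a hypothetical stand-in for the score a pre-trained embedding model (e.g. BERT-based) would assign to each candidate, added to the sparse hit/miss terminal reward.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: from a query entity, the agent picks one of 3 relations
# (a stand-in for one step of a multi-hop path). Relation 2 reaches the answer.
NUM_ACTIONS = 3
CORRECT = 2
theta = np.zeros(NUM_ACTIONS)  # logits of a softmax policy


def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()


# Hypothetical embedding-similarity scores for each candidate action;
# in the paper's setup this role is played by a pre-trained reward-shaping module.
SIMILARITY = np.array([0.1, 0.2, 0.9])


def shaped_reward(action, lam=0.5):
    """Sparse terminal reward plus a weighted shaping bonus."""
    hit = 1.0 if action == CORRECT else 0.0
    return hit + lam * SIMILARITY[action]


lr = 0.5
for _ in range(200):
    probs = softmax(theta)
    a = rng.choice(NUM_ACTIONS, p=probs)       # sample an action from the policy
    r = shaped_reward(a)
    grad_log_pi = -probs                       # d/dtheta log pi(a) for softmax
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi              # REINFORCE: ascend r * grad log pi

final_probs = softmax(theta)
print(final_probs)  # probability mass should concentrate on the correct relation
```

The shaping bonus densifies the otherwise sparse hit/miss signal, which is precisely why the quality of the shaping module matters: if the similarity scores are poorly calibrated (as can happen on sparse subgraphs), the bonus can mislead the policy rather than guide it.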