Neural Relation Extraction with Selective Attention over Instances

Yankai Lin, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, Maosong Sun
This paper proposes a sentence-level attention-based convolutional neural network (CNN) for distantly supervised relation extraction. The model uses a CNN to embed the semantics of each sentence and builds sentence-level attention over the multiple instances that mention an entity pair, dynamically reducing the weights of noisy instances. Experiments on real-world data show that the model effectively reduces the impact of wrongly labeled instances and significantly improves relation extraction over the baselines.

The contributions are: (1) making full use of all informative sentences for each entity pair rather than selecting a single sentence; (2) using selective attention to de-emphasize noisy instances, thereby addressing the wrong-labeling problem of distant supervision; and (3) demonstrating the effectiveness of selective attention in two kinds of CNN models.

The model is evaluated on a dataset generated by aligning Freebase relations with the New York Times corpus, where it outperforms state-of-the-art feature-based methods and neural network methods. Case studies illustrate how selective attention filters out irrelevant sentences in practice. Future work includes applying the model to other multi-instance learning tasks such as text categorization and integrating it with other neural network models for relation extraction.
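The selective-attention step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: it assumes each sentence has already been embedded by the CNN into a vector `x_i`, scores each sentence against a relation query vector `r` through a bilinear form `e_i = x_i A r`, softmax-normalizes the scores into weights `alpha_i`, and returns the weighted sum of sentence vectors as the bag representation. The names `S`, `A`, and `r`, and the random stand-in embeddings, are illustrative assumptions.

```python
import numpy as np

def selective_attention(S, A, r):
    """Weighted bag representation over sentence embeddings.

    S : (n, d) matrix of CNN sentence embeddings for one entity pair
    A : (d, d) weight matrix of the bilinear scoring function
    r : (d,)   query vector associated with the candidate relation
    Returns the (d,) bag vector s = sum_i alpha_i * x_i, where
    alpha_i = softmax_i(x_i A r), so noisy sentences that score
    low against the relation query receive small weights.
    """
    e = S @ A @ r                          # bilinear scores, shape (n,)
    e = e - e.max()                        # subtract max for numerical stability
    alpha = np.exp(e) / np.exp(e).sum()    # attention weights, sum to 1
    return alpha @ S                       # weighted sum of sentence vectors

rng = np.random.default_rng(0)
n, d = 4, 6                                # 4 instance sentences, 6-dim embeddings
S = rng.normal(size=(n, d))                # stand-in for CNN sentence embeddings
A = np.eye(d)                              # simplest choice: identity weighting
r = rng.normal(size=d)                     # stand-in relation query vector
s = selective_attention(S, A, r)           # bag vector fed to the relation classifier
```

In the full model this bag vector `s` is passed through a linear layer and a softmax over relation labels; sentences unrelated to the target relation contribute little because their attention weights are driven toward zero.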