c-ICL: Contrastive In-context Learning for Information Extraction

24 Jun 2024 | Ying Mo1, Jiahao Liu2, Jian Yang1*, Qifan Wang3, Shun Zhang1, Jingang Wang2, Zhoujun Li1*
The paper introduces c-ICL (Contrastive In-context Learning), a few-shot technique for information extraction (IE) tasks, specifically named entity recognition (NER) and relation extraction (RE). c-ICL builds in-context demonstrations from both correct and incorrect samples, enhancing the ability of large language models (LLMs) to extract entities and relations. The method pairs hard negative samples with their nearest positive neighbors, providing context that makes the reasoning behind the positive samples explicit. Experiments on a range of datasets show that c-ICL outperforms previous few-shot in-context learning methods across a broad spectrum of related tasks.
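To make the idea concrete, here is a minimal sketch of how a contrastive in-context prompt might be assembled for NER. The function name, prompt wording, and demonstration examples are hypothetical illustrations, not the paper's actual prompt template or data; the point is simply that correct demonstrations and hard-negative (incorrect) demonstrations, each with the right answer attached, appear side by side before the query.

```python
def build_contrastive_prompt(task_desc, positives, negatives, query):
    """Assemble a few-shot prompt that shows correct demonstrations
    alongside incorrect (hard negative) ones, so the model can
    contrast them before extracting from the query text."""
    parts = [task_desc]
    # Positive demonstrations: text plus its gold extraction.
    for text, gold in positives:
        parts.append(f"Text: {text}\nCorrect extraction: {gold}")
    # Hard-negative demonstrations: a plausible wrong extraction,
    # corrected in place so the model sees why it is wrong.
    for text, wrong, gold in negatives:
        parts.append(
            f"Text: {text}\nIncorrect extraction: {wrong}\n"
            f"Why it is wrong: the correct extraction is {gold}"
        )
    # The query, left open for the model to complete.
    parts.append(f"Text: {query}\nCorrect extraction:")
    return "\n\n".join(parts)

task = "Extract all person (PER) and location (LOC) entities from the text."
pos = [("Barack Obama visited Paris.",
        "[('Barack Obama', 'PER'), ('Paris', 'LOC')]")]
neg = [("Jordan flew to Jordan.",
        "[('Jordan', 'PER'), ('Jordan', 'PER')]",
        "[('Jordan', 'PER'), ('Jordan', 'LOC')]")]
prompt = build_contrastive_prompt(task, pos, neg,
                                  "Marie Curie worked in Warsaw.")
print(prompt)
```

The hard negative here is deliberately confusable (the same surface form "Jordan" as both a person and a location), which is the kind of error case the contrastive demonstration is meant to surface.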
The key contributions are the c-ICL method itself, a strategy for selecting hard negative samples, and comprehensive experiments on benchmarks that establish new state-of-the-art results. The paper also discusses limitations of the approach, such as its focus on specific IE tasks and the need for further exploration of retrieval strategies for positive and negative samples.
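The pairing of a hard negative with its nearest positive neighbor can be sketched as a similarity search over the positive pool. The bag-of-words cosine below is a stand-in for whatever retrieval representation the paper actually uses; the tokenizer, similarity measure, and example sentences are all illustrative assumptions.

```python
import math
import re
from collections import Counter

def _tokens(text):
    # Simple lowercase word tokenizer; a stand-in for a real embedding model.
    return re.findall(r"\w+", text.lower())

def cosine(a, b):
    """Cosine similarity between two texts under a bag-of-words model."""
    ca, cb = Counter(_tokens(a)), Counter(_tokens(b))
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

def nearest_positive(negative_text, positive_pool):
    """Return the positive example most similar to a hard negative,
    so the pair can be presented together as a contrastive demonstration."""
    return max(positive_pool, key=lambda p: cosine(negative_text, p))

pool = ["Barack Obama visited Paris.",
        "The river Jordan flows south."]
print(nearest_positive("Jordan flew to Jordan.", pool))
```

In a real pipeline the bag-of-words vectors would likely be replaced by sentence embeddings, but the selection logic, taking the maximum-similarity positive for each hard negative, stays the same.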