14 Jun 2019 | Antoine Bosselut, Hannah Rashkin, Maarten Sap, Chaitanya Malaviya, Asli Celikyilmaz, Yejin Choi
The paper "COMET: Commonsense Transformers for Automatic Knowledge Graph Construction" presents a comprehensive study on automatically constructing knowledge bases for two prevalent commonsense knowledge graphs: ATOMIC and ConceptNet. Unlike traditional knowledge bases that store knowledge in canonical templates, commonsense knowledge graphs store loosely structured, open-text descriptions. The authors propose COMmonsense Transformers (COMET), a generative model that learns to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of modeling commonsense, COMET demonstrates promising results when deep pre-trained language models are used to transfer implicit knowledge to generate explicit knowledge in these knowledge graphs. Empirical results show that COMET can generate novel knowledge rated as high quality by humans, achieving up to 77.5% precision at top 1 for ATOMIC and 91.7% for ConceptNet, approaching human performance. The findings suggest that generative commonsense models could soon be a plausible alternative to extractive methods for automatic commonsense knowledge base completion.The paper "COMET: Commonsense Transformers for Automatic Knowledge Graph Construction" presents a comprehensive study on automatically constructing knowledge bases for two prevalent commonsense knowledge graphs: ATOMIC and ConceptNet. Unlike traditional knowledge bases that store knowledge in canonical templates, commonsense knowledge graphs store loosely structured, open-text descriptions. The authors propose COMmonsense Transformers (COMET), a generative model that learns to generate rich and diverse commonsense descriptions in natural language. Despite the challenges of modeling commonsense, COMET demonstrates promising results when deep pre-trained language models are used to transfer implicit knowledge to generate explicit knowledge in these knowledge graphs. Empirical results show that COMET can generate novel knowledge rated as high quality by humans, achieving up to 77.5% precision at top 1 for ATOMIC and 91.7% for ConceptNet, approaching human performance. The findings suggest that generative commonsense models could soon be a plausible alternative to extractive methods for automatic commonsense knowledge base completion.