4 Sep 2019 | Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H. Miller, Sebastian Riedel
Language models such as BERT can store relational knowledge and answer "fill-in-the-blank" questions without fine-tuning. This study evaluates the relational knowledge present in state-of-the-art pretrained language models and finds that BERT is competitive with traditional knowledge bases on tasks such as open-domain question answering. Among the models tested, BERT-large retrieves factual and commonsense knowledge best and achieves high precision in open-domain QA. The LAMA probe, which rephrases facts as cloze statements and tests language models on factual and commonsense knowledge, shows that BERT-large can recall factual knowledge without any fine-tuning, suggesting its potential as an unsupervised QA system. The study compares BERT with other pretrained models and with baselines including supervised relation extraction systems and DrQA, and finds that BERT-large performs well across the various knowledge sources. The results indicate that language models can effectively capture factual and commonsense knowledge, offering a viable alternative to traditional knowledge bases, and the study highlights the importance of understanding how language models acquire such knowledge and their potential for future applications in NLP.
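The core idea behind the probe is simple: a relational fact is rewritten as a cloze statement and the pretrained model is asked to predict the masked token, with no fine-tuning. The sketch below illustrates that idea using the HuggingFace transformers fill-mask pipeline and bert-large-cased; these are assumptions for illustration, not the paper's own LAMA code or evaluation setup.

```python
# Minimal sketch of cloze-style knowledge probing with a masked language model.
# Illustrative only; not the official LAMA probe implementation.
from transformers import pipeline

# Load a pretrained BERT model for masked-token prediction (no fine-tuning).
fill_mask = pipeline("fill-mask", model="bert-large-cased")

# A relational fact rephrased as a "fill-in-the-blank" statement.
statement = "The theory of relativity was developed by [MASK]."

# The model ranks candidate tokens for the blank; the top-ranked token is
# taken as the model's answer to the factual query.
for candidate in fill_mask(statement, top_k=5):
    print(f"{candidate['token_str']:>15}  {candidate['score']:.3f}")
```

In the same spirit, the probe's metrics (such as precision at k) can be read off the ranked candidate list: a prediction counts as correct if the gold answer appears among the top k tokens.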