April 19 - 23, 2021 | Gautier Izacard, Edouard Grave
This paper presents a method for open-domain question answering (ODQA) that combines passage retrieval with generative models. The approach involves retrieving relevant text passages from an external knowledge source, such as Wikipedia, and then using a sequence-to-sequence model to generate the answer. The method is evaluated on the Natural Questions (NQ) and TriviaQA benchmarks, achieving state-of-the-art results. The performance of the method improves significantly as the number of retrieved passages increases, indicating that generative models are effective at combining evidence from multiple passages.
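The retrieve-then-generate pipeline described above can be sketched as follows. The token-overlap retriever and the placeholder generator are illustrative stand-ins only; the paper pairs a learned dense retriever (e.g. DPR) with a pretrained sequence-to-sequence model (T5):

```python
# Minimal sketch of a retrieve-then-generate ODQA pipeline.
# The overlap-based retriever and the stub generator are toy stand-ins;
# the paper uses dense passage retrieval and a pretrained T5 generator.

def retrieve(question, corpus, k=2):
    """Rank passages by token overlap with the question (toy retriever)."""
    q_tokens = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda p: len(q_tokens & set(p.lower().split())),
        reverse=True,
    )
    return scored[:k]

def generate_answer(question, passages):
    """Placeholder generator: a real seq2seq model would decode an answer
    conditioned on the question and all retrieved passages; here we just
    return the first token of the top-ranked passage as a stand-in."""
    return passages[0].split()[0]

corpus = [
    "Paris is the capital of France.",
    "The Seine flows through Paris.",
    "Berlin is the capital of Germany.",
]
question = "What is the capital of France?"
passages = retrieve(question, corpus)
answer = generate_answer(question, passages)
print(answer)
```

The key design point the paper exploits is that the generator conditions on *all* retrieved passages at once, rather than extracting a span from a single passage.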
The paper compares the proposed method with existing approaches and finds that it outperforms both extractive methods and other generative models. It also shows that retrieval yields large performance gains even when the generative model has fewer parameters than competing systems. The approach scales well: each retrieved passage is encoded independently, so computation grows linearly with the number of passages, while evidence fusion happens in the decoder of the sequence-to-sequence model, which attends jointly over all encoded passages. The method achieves high accuracy on both the NQ and TriviaQA datasets, with the best results obtained using 100 retrieved passages. The paper concludes that the proposed method is a promising approach for ODQA, and that further research is needed to improve its efficiency and to integrate retrieval into the model itself.
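The decoder-side evidence fusion can be sketched with toy tensors: each (question, passage) pair is encoded independently, the encoder outputs are concatenated along the sequence axis, and the decoder cross-attends over the combined sequence. The shapes and the random stand-in encoder below are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

# Toy sketch of fusion-in-decoder shapes: each (question + passage) pair
# is encoded independently, then the encoder outputs are concatenated so
# the decoder can attend over all passages at once. Random tensors stand
# in for learned encoder states and attention weights.

rng = np.random.default_rng(0)
n_passages, seq_len, d_model = 4, 16, 32

def encode(pair_index):
    """Stand-in encoder: one hidden state per token of the pair."""
    return rng.standard_normal((seq_len, d_model))

# Independent encoding: cost grows linearly with the number of passages.
encoded = [encode(i) for i in range(n_passages)]

# Fusion: concatenate along the sequence dimension.
fused = np.concatenate(encoded, axis=0)    # (n_passages * seq_len, d_model)

# One decoder cross-attention step over the fused memory (single query).
query = rng.standard_normal(d_model)
scores = fused @ query / np.sqrt(d_model)  # (n_passages * seq_len,)
weights = np.exp(scores - scores.max())
weights /= weights.sum()                   # softmax over all passage tokens
context = weights @ fused                  # (d_model,) fused evidence vector

print(fused.shape, context.shape)
```

Because the quadratic self-attention cost is paid only within each passage in the encoder, while cross-passage interaction is deferred to the decoder, adding more passages increases compute roughly linearly rather than quadratically.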