PASSAGE RE-RANKING WITH BERT

14 Apr 2020 | Rodrigo Nogueira, Kyunghyun Cho
This paper presents a re-implementation of BERT for query-based passage re-ranking, achieving state-of-the-art results on the TREC-CAR dataset and the MS MARCO passage retrieval task. The authors argue that the combination of the MS MARCO passage ranking dataset and BERT, a powerful natural language processing model, has enabled significant progress in passage ranking tasks. The method uses BERT as a binary classification model to estimate the relevance score of a candidate passage to a query: the query and passage are concatenated as a single input, and the final hidden vector of the [CLS] token is fed to a single-layer classifier that produces the probability of the passage being relevant. The model is fine-tuned with a cross-entropy loss, and the training procedure is described in detail for both datasets. The results show that the proposed BERT-based models outperform previous state-of-the-art models by a large margin, even when trained on a fraction of the training data. The code for reproducing the experiments is publicly available.
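The re-ranking step described above can be sketched without any ML framework: each (query, passage) pair is scored independently by a binary classifier, the probability of the "relevant" class is taken via a softmax over the two output logits, and candidates are sorted by that probability. The sketch below is a minimal, dependency-free illustration of this scoring and ranking logic; the logits would in practice come from BERT's [CLS] classifier head, so the `scored_passages` values here are purely hypothetical stand-ins.

```python
import math

def softmax(logits):
    # Numerically stable softmax over a list of logits.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, label):
    # Binary cross-entropy used for fine-tuning:
    # label 1 = relevant, label 0 = not relevant.
    probs = softmax(logits)
    return -math.log(probs[label])

def rerank(scored_passages):
    # scored_passages: list of (passage_id, [logit_not_relevant, logit_relevant]),
    # where the logits are assumed to come from the [CLS] classifier.
    # Each pair is scored independently; candidates are sorted by P(relevant).
    scored = [(pid, softmax(logits)[1]) for pid, logits in scored_passages]
    return sorted(scored, key=lambda item: item[1], reverse=True)

# Hypothetical logits for three candidate passages for one query:
ranked = rerank([
    ("p1", [-1.0, 2.0]),   # strongly relevant
    ("p2", [0.0, 0.0]),    # uncertain
    ("p3", [2.0, -1.0]),   # strongly not relevant
])
```

In this toy example the candidates come back ordered `p1`, `p2`, `p3`, mirroring how the final ranking in the paper is produced purely from per-pair relevance probabilities rather than from any comparison between passages.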