The NeRFect Match: Exploring NeRF Features for Visual Localization


21 Aug 2024 | Qunjie Zhou, Maxim Maximov, Or Litany, and Laura Leal-Taixé
This paper explores the use of Neural Radiance Fields (NeRF) as a scene representation for visual localization. The authors propose NeRFMatch, a 2D-3D matching function that leverages the internal features of NeRF to establish precise correspondences between a query image and 3D scene points. They conduct a comprehensive examination of NeRF's implicit knowledge, comparing different matching network architectures, encoder features extracted at multiple layers, and training configurations. Evaluated on standard localization benchmarks within a structure-based pipeline, NeRFMatch achieves competitive localization performance on the Cambridge Landmarks dataset. The paper also discusses future work to improve indoor localization and highlights the limitations of the approach, particularly in indoor scenes. The authors release all models and code to facilitate further research.
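To make the idea concrete, the sketch below illustrates the general pattern behind such a pipeline: tap an intermediate layer of a NeRF-style MLP as a descriptor for 3D scene points, match those descriptors against 2D keypoint descriptors from the query image, and recover the camera pose from the resulting 2D-3D correspondences with PnP + RANSAC. This is a minimal illustration under stated assumptions, not the authors' NeRFMatch implementation; all names (TinyNeRFMLP, match_2d_3d, image_descriptors, the camera intrinsics) are hypothetical placeholders.

```python
# Minimal sketch of a NeRF-feature-based 2D-3D matching pipeline.
# NOT the paper's NeRFMatch code; every name and number here is illustrative.

import numpy as np
import torch
import torch.nn as nn
import cv2


class TinyNeRFMLP(nn.Module):
    """Toy NeRF-style MLP; an intermediate layer is tapped as a 3D descriptor."""

    def __init__(self, feat_dim: int = 128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),   # <- features tapped here
        )
        self.head = nn.Linear(feat_dim, 4)              # density + RGB (unused below)

    def features(self, xyz: torch.Tensor) -> torch.Tensor:
        return self.backbone(xyz)                        # (N, feat_dim) point descriptors


def match_2d_3d(img_desc: torch.Tensor, pts_desc: torch.Tensor) -> torch.Tensor:
    """Mutual-nearest-neighbour matching on cosine similarity.

    img_desc: (M, D) descriptors of 2D keypoints; pts_desc: (N, D) of 3D points.
    Returns (K, 2) index pairs (keypoint_idx, point_idx).
    """
    sim = nn.functional.normalize(img_desc, dim=1) @ nn.functional.normalize(pts_desc, dim=1).T
    nn12 = sim.argmax(dim=1)            # best 3D point for each keypoint
    nn21 = sim.argmax(dim=0)            # best keypoint for each 3D point
    ids = torch.arange(sim.shape[0])
    mutual = nn21[nn12] == ids          # keep only mutual matches
    return torch.stack([ids[mutual], nn12[mutual]], dim=1)


if __name__ == "__main__":
    torch.manual_seed(0)
    nerf = TinyNeRFMLP()

    # Hypothetical inputs: 3D scene points (e.g. sampled from the NeRF scene)
    # and 2D keypoints with descriptors from some image encoder.
    points_3d = torch.rand(500, 3) * 2.0 - 1.0
    keypoints_2d = torch.rand(300, 2) * torch.tensor([640.0, 480.0])
    image_descriptors = torch.randn(300, 128)

    with torch.no_grad():
        point_descriptors = nerf.features(points_3d)

    matches = match_2d_3d(image_descriptors, point_descriptors)

    # Camera pose from 2D-3D correspondences via PnP + RANSAC (needs >= 4 matches).
    if matches.shape[0] >= 4:
        K = np.array([[500.0, 0, 320.0], [0, 500.0, 240.0], [0, 0, 1.0]])
        obj = points_3d[matches[:, 1]].numpy().astype(np.float64)
        img = keypoints_2d[matches[:, 0]].numpy().astype(np.float64)
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(obj, img, K, None)
        print("PnP success:", ok, "| inliers:", 0 if inliers is None else len(inliers))
```

In a real structure-based localizer the random tensors above would be replaced by actual NeRF features at rendered or reconstructed 3D points and by learned image features, and the matcher would typically be a trained network rather than plain mutual nearest neighbours; the sketch only shows where NeRF's internal features slot into the 2D-3D matching and pose-estimation steps.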