LoFTR: Detector-Free Local Feature Matching with Transformers

1 Apr 2021 | Jiaming Sun, Zehong Shen, Yuang Wang, Hujun Bao, Xiaowei Zhou
LoFTR is a novel detector-free method for local image feature matching, which aims to address the repeatability issue of feature detectors in low-texture or repetitive-pattern regions. Instead of performing feature detection, description, and matching sequentially, LoFTR establishes pixel-wise dense matches at a coarse level and refines them at a fine level. Unlike dense methods that use a cost volume to search correspondences, LoFTR employs self- and cross-attention layers in Transformers to obtain feature descriptors conditioned on both images. The global receptive field provided by Transformers enables LoFTR to produce dense matches in low-texture areas, where feature detectors often struggle to produce repeatable interest points. Experiments on indoor and outdoor datasets show that LoFTR outperforms state-of-the-art methods by a large margin, ranking first on two public benchmarks of visual localization. The code for LoFTR is available at <https://zju3dv.github.io/loftr/>.
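To make the coarse-level matching idea concrete, below is a minimal sketch of interleaved self- and cross-attention over flattened coarse feature maps from two images. This is an illustrative assumption, not the official LoFTR implementation (LoFTR uses linear attention and a specific coarse-to-fine pipeline); the class name `CoarseMatchingTransformer` and all hyperparameters here are hypothetical.

```python
# Minimal sketch (assumption, not the released LoFTR code): alternating
# self- and cross-attention so each image's descriptors are conditioned
# on both images, as described in the abstract.
import torch
import torch.nn as nn


class CoarseMatchingTransformer(nn.Module):
    def __init__(self, dim=256, num_heads=8, num_layers=4):
        super().__init__()
        # One self-attention and one cross-attention module per layer.
        self.self_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )
        self.cross_attn = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, feat0, feat1):
        # feat0, feat1: (batch, num_tokens, dim) flattened coarse features.
        for self_attn, cross_attn in zip(self.self_attn, self.cross_attn):
            # Self-attention: each image attends to itself (global receptive field).
            feat0 = feat0 + self_attn(feat0, feat0, feat0)[0]
            feat1 = feat1 + self_attn(feat1, feat1, feat1)[0]
            # Cross-attention: each image attends to the other image's features.
            f0, f1 = feat0, feat1
            feat0 = f0 + cross_attn(f0, f1, f1)[0]
            feat1 = f1 + cross_attn(f1, f0, f0)[0]
        return feat0, feat1


# Coarse matches can then be read off a pairwise similarity matrix between
# the two transformed feature sets (e.g., with a dual-softmax), before being
# refined at the fine level.
feat0 = torch.randn(1, 64, 256)
feat1 = torch.randn(1, 64, 256)
out0, out1 = CoarseMatchingTransformer()(feat0, feat1)
scores = torch.einsum("bnd,bmd->bnm", out0, out1)  # (1, 64, 64) similarities
```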