This paper introduces *MapQR*, an end-to-end method for constructing online vectorized maps in autonomous driving. The method focuses on enhancing query capabilities to efficiently probe desirable information from bird's-eye-view (BEV) features. *MapQR* employs a novel *scatter-and-gather* query design, which explicitly models the content and position parts of each map instance. The base map instance queries are scattered into point-level queries at different reference points, combined with point-specific positional embeddings, and then gathered back to enhance information exchange within each map instance. The proposed method achieves the best mean average precision (mAP) on both nuScenes and Argoverse 2 datasets while maintaining good efficiency. Additionally, integrating the query design into other models significantly improves their performance. The source code is available at <https://github.com/HXMap/MapQR>.
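To make the scatter-and-gather idea above concrete, here is a minimal, hypothetical PyTorch sketch. The class name `ScatterGatherQuery`, the plain multi-head cross-attention (standing in for whatever BEV cross-attention the full model uses), the concatenate-and-project gather step, and all hyperparameters are assumptions for illustration, not the released implementation.

```python
import torch
import torch.nn as nn

class ScatterGatherQuery(nn.Module):
    """Minimal sketch of a scatter-and-gather instance query (illustrative names)."""

    def __init__(self, embed_dim=256, n_pts=20, n_heads=8):
        super().__init__()
        self.n_pts = n_pts
        # maps a 2D reference point to a positional embedding
        self.pos_embed = nn.Sequential(
            nn.Linear(2, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        # stand-in cross-attention between point queries and flattened BEV features
        self.cross_attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        # one simple way to gather the n_pts point outputs into one instance query
        self.gather = nn.Linear(n_pts * embed_dim, embed_dim)

    def forward(self, inst_query, ref_points, bev_feats):
        # inst_query: (B, N_ins, C)         one content query per map instance
        # ref_points: (B, N_ins, n_pts, 2)  normalized BEV reference points
        # bev_feats:  (B, H*W, C)           flattened BEV features
        B, N, C = inst_query.shape
        # scatter: share the instance content across its n_pts point slots
        point_q = inst_query.unsqueeze(2).expand(B, N, self.n_pts, C)
        # add point-specific positional embeddings derived from the reference points
        point_q = point_q + self.pos_embed(ref_points)
        point_q = point_q.reshape(B, N * self.n_pts, C)
        # probe BEV features with the scattered point queries
        attended, _ = self.cross_attn(point_q, bev_feats, bev_feats)
        # gather: fuse the point-level results back into instance-level queries
        return self.gather(attended.reshape(B, N, self.n_pts * C))  # (B, N_ins, C)

# toy usage: 50 instance queries, 20 points each, a 100x100 BEV grid
m = ScatterGatherQuery()
out = m(torch.randn(2, 50, 256), torch.rand(2, 50, 20, 2), torch.randn(2, 100 * 100, 256))
print(out.shape)  # torch.Size([2, 50, 256])
```

The key property this sketch preserves is that every point of a map instance shares a single content query, while positional embeddings keep the points distinguishable when probing BEV features.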
- High-definition (HD) maps are crucial for autonomous driving, encapsulating precise vectorized details of map elements.
- Traditional SLAM-based HD map construction methods face challenges such as complex pipelines, high costs, and localization errors.
- Online, learning-based methods that utilize onboard sensors are gaining attention as a way to overcome these limitations.
- *MapQR* proposes a novel *scatter-and-gather* query design, which enhances the efficiency and accuracy of map element detection.
- The method achieves superior performance on existing benchmarks and integrates well with other state-of-the-art models.
- Related work covers online vectorized HD map construction, multi-view camera-to-BEV transformation, and detection transformers.
- Overall architecture: The model takes sequences of multi-view images as input and constructs an HD map end-to-end.
- Decoder with *scatter-and-gather* query: The decoder consists of stacked transformer layers whose key design is the *scatter-and-gather* query and its compatible positional embedding.
- BEV Encoder: An improved GKT encoder with flexible height is used to adapt to 3D space (see the sketch below).
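The "flexible height" of the BEV encoder is not detailed in these notes, so the following is only a hedged sketch of one plausible reading: a small head predicts per-cell height offsets before BEV grid centres are projected into a camera and image features are sampled. `FlexibleHeightSampler`, `project_bev_to_image`, `n_heights`, and `base_height` are hypothetical names and parameters, not the paper's released code, and a real GKT-style encoder aggregates kernels of neighbouring pixels across several cameras rather than single-point sampling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def project_bev_to_image(bev_xy, heights, lidar2img):
    """Lift BEV grid centres to flexible heights and project into one camera.

    bev_xy:    (Nq, 2)  BEV x/y coordinates in the ego frame (metres)
    heights:   (Nq, K)  K candidate heights per BEV cell (metres)
    lidar2img: (4, 4)   ego-to-image projection matrix of one camera
    returns:   (Nq, K, 2) pixel coordinates
    """
    Nq, K = heights.shape
    xy = bev_xy.unsqueeze(1).expand(Nq, K, 2)
    pts = torch.cat([xy, heights.unsqueeze(-1),
                     torch.ones_like(heights).unsqueeze(-1)], dim=-1)
    cam = pts @ lidar2img.T                              # homogeneous projection
    return cam[..., :2] / cam[..., 2:3].clamp(min=1e-5)  # perspective divide


class FlexibleHeightSampler(nn.Module):
    """Sketch of height-adaptive image sampling for a GKT-style BEV encoder."""

    def __init__(self, embed_dim=256, n_heights=4, base_height=0.0):
        super().__init__()
        self.height_head = nn.Linear(embed_dim, n_heights)  # per-cell height offsets
        self.base_height = base_height

    def forward(self, bev_query, bev_xy, img_feat, lidar2img):
        # bev_query: (Nq, C) BEV queries, bev_xy: (Nq, 2) their grid centres
        # img_feat:  (1, C, H, W) features of one camera view
        heights = self.base_height + self.height_head(bev_query)  # (Nq, K)
        uv = project_bev_to_image(bev_xy, heights, lidar2img)     # (Nq, K, 2)
        H, W = img_feat.shape[-2:]
        # normalize pixel coordinates to [-1, 1] for grid_sample
        grid = torch.stack([uv[..., 0] / (W - 1) * 2 - 1,
                            uv[..., 1] / (H - 1) * 2 - 1], dim=-1)
        sampled = F.grid_sample(img_feat, grid.unsqueeze(0), align_corners=True)
        # average over the K heights -> one feature per BEV cell: (Nq, C)
        return sampled.mean(dim=-1).squeeze(0).transpose(0, 1)
```

The design intent captured here is simply that the heights at which BEV cells probe the images are adapted per location rather than fixed, which is what "flexible height" suggests.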
- Extensive experiments demonstrate the superiority of *MapQR* on nuScenes and Argoverse 2 datasets.
- Ablation studies validate the effectiveness of the *scatter-and-gather* query and positional embedding.
- The method outperforms state-of-the-art methods in mAP while maintaining comparable inference speed.
- *MapQR* effectively enhances the query mechanism for online map construction, achieving superior performance and efficiency.