MGMap: Mask-Guided Learning for Online Vectorized HD Map Construction

1 Apr 2024 | Xiaolu Liu, Song Wang, Wentong Li, Ruizi Yang, Junbo Chen, Jianke Zhu
MGMap is a mask-guided learning approach for online vectorized high-definition (HD) map construction. It addresses two key challenges in HD map construction, precise localization and detailed structure extraction, which are particularly difficult under subtle and sparse annotations. By leveraging learned masks, MGMap highlights informative regions and localizes map elements accurately.

The approach extracts multi-scale bird's-eye-view (BEV) features with an enhanced multi-level neck that captures rich semantic and positional information. At the instance level, a Mask-Activated Instance (MAI) decoder incorporates global instance and structural information into instance queries: learned instance masks activate the queries, allowing them to capture global instance structure and shape characteristics. At the point level, a Position-Guided Mask Patch Refinement (PG-MPR) module refines point locations from a finer-grained perspective, focusing on specific patch regions and extracting binary mask features that gather detailed information around lane locations.

Extensive experiments on the nuScenes and Argoverse2 datasets demonstrate that MGMap achieves state-of-the-art map vectorization accuracy, outperforming existing approaches by around 10 mAP across different input modalities, and shows strong robustness and generalization across settings.

Overall, the mask-guided design enhances the localization and representation of map elements, leading to more accurate and detailed HD map construction, and it handles the demands of online HD map construction, including real-time updates and the preservation of detailed road-scene information.
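To make the two mask-guided steps concrete, here are minimal PyTorch sketches. The first illustrates the instance-level idea behind the MAI decoder, assuming BEV features of shape (B, C, H, W): each query predicts a soft instance mask and is activated by mask-weighted average pooling of the BEV features. This is an illustrative sketch, not the authors' implementation; the class and member names (`MaskActivatedQuery`, `mask_head`, `fuse`) are hypothetical.

```python
import torch
import torch.nn as nn


class MaskActivatedQuery(nn.Module):
    """Hypothetical sketch: activate instance queries with learned masks."""

    def __init__(self, embed_dim: int, num_queries: int):
        super().__init__()
        self.queries = nn.Embedding(num_queries, embed_dim)
        # Predicts one soft instance mask per query from the BEV features.
        self.mask_head = nn.Conv2d(embed_dim, num_queries, kernel_size=1)
        self.fuse = nn.Linear(2 * embed_dim, embed_dim)

    def forward(self, bev: torch.Tensor) -> torch.Tensor:
        # bev: (B, C, H, W) bird's-eye-view feature map.
        B = bev.shape[0]
        masks = self.mask_head(bev).sigmoid()                    # (B, Q, H, W)
        # Mask-weighted average pooling of BEV features, one vector per query.
        feat = torch.einsum("bqhw,bchw->bqc", masks, bev)
        feat = feat / (masks.sum(dim=(2, 3)).unsqueeze(-1) + 1e-6)
        q = self.queries.weight.unsqueeze(0).expand(B, -1, -1)   # (B, Q, C)
        # Activate each query with its pooled instance-level feature.
        return self.fuse(torch.cat([q, feat], dim=-1))
```

For example, `MaskActivatedQuery(embed_dim=256, num_queries=50)` applied to a `(2, 256, 100, 50)` BEV tensor yields 50 activated queries per sample, each carrying global structure from its mask region.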
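The second sketch illustrates the point-level refinement in the spirit of PG-MPR: sample a small patch of features around each predicted point and regress a residual coordinate offset. It assumes points given in normalized [-1, 1] BEV coordinates and samples raw BEV features for simplicity, whereas the paper's module operates on binary mask patch features; `PatchPointRefiner` and `offset_head` are hypothetical names.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatchPointRefiner(nn.Module):
    """Hypothetical sketch: refine point locations from local BEV patches."""

    def __init__(self, embed_dim: int, patch: int = 3):
        super().__init__()
        self.patch = patch
        self.offset_head = nn.Linear(embed_dim, 2)

    def forward(self, bev: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # bev: (B, C, H, W); points: (B, N, 2) in normalized [-1, 1] coords.
        B, C, H, W = bev.shape
        # Build a (patch x patch) pixel neighborhood in normalized units.
        d = torch.linspace(-1.0, 1.0, self.patch, device=bev.device)
        dy, dx = torch.meshgrid(d, d, indexing="ij")
        offs = torch.stack([dx, dy], dim=-1).view(1, 1, -1, 2)
        offs = offs * bev.new_tensor([2.0 / W, 2.0 / H])      # one-pixel steps
        grid = points.unsqueeze(2) + offs                     # (B, N, P*P, 2)
        # Bilinearly sample the patch features around every point.
        patches = F.grid_sample(bev, grid, align_corners=False)  # (B, C, N, P*P)
        feat = patches.mean(dim=-1).permute(0, 2, 1)          # (B, N, C)
        # Residual offsets nudge each point toward nearby evidence.
        return points + self.offset_head(feat)
```

Restricting attention to a small patch around each point is what keeps such refinement cheap: only `N * patch**2` feature samples are read per sample, regardless of the BEV resolution.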