23 Mar 2020 | Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick
Momentum Contrast (MoCo) is an approach for unsupervised visual representation learning that builds a dynamic dictionary from a queue and a moving-averaged (momentum) encoder. This design lets the dictionary be both large and consistent while it is built on-the-fly, which facilitates contrastive unsupervised learning. MoCo achieves competitive results on ImageNet classification and transfers well to downstream tasks such as object detection and segmentation, in several cases outperforming its supervised pre-training counterpart. Its key contribution is maintaining a large, consistent dictionary, which is crucial for learning rich and diverse visual representations. The method is also flexible: it can be paired with various pretext tasks, making it a versatile tool for unsupervised learning in computer vision.
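The two mechanisms named above can be sketched in a few lines: the key encoder's parameters track the query encoder's via an exponential moving average, and encoded keys from past mini-batches are kept in a fixed-size FIFO queue that serves as the dictionary of negatives. This is a minimal, illustrative sketch (not the authors' code); the constants and function names are assumptions, and real MoCo applies the update to neural-network weight tensors rather than scalar lists.

```python
import collections

# Assumed illustrative constants: the paper uses m = 0.999 and queues of
# up to 65536 keys; tiny values are used here so the sketch is easy to trace.
MOMENTUM = 0.999
QUEUE_SIZE = 4

def momentum_update(query_params, key_params, m=MOMENTUM):
    """Move the key encoder slowly toward the query encoder:
    key <- m * key + (1 - m) * query (elementwise EMA)."""
    return [m * k + (1.0 - m) * q for q, k in zip(query_params, key_params)]

# FIFO dictionary of encoded keys: the newest mini-batch is enqueued and,
# once full, the oldest keys are evicted automatically by the deque.
queue = collections.deque(maxlen=QUEUE_SIZE)

def enqueue(batch_keys):
    """Add a mini-batch of encoded keys; oldest entries drop out when full."""
    for k in batch_keys:
        queue.append(k)
```

A large momentum keeps consecutive key encoders nearly identical, so keys enqueued at different steps remain comparable; that consistency is what allows the queue to be much larger than a single mini-batch.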