01/2024 | Maxime Oquab**, Timothée Darcet**, Théo Moutakanni**, Huy V. Vo*, Marc Szafraniec*, Vasil Khalidov*, Pierre Fernandez, Daniel Haziza, Francisco Massa, Alaaeldin El-Nouby, Mahmoud Assran, Nicolas Ballas, Wojciech Galuba, Russell Howes, Po-Yao Huang, Shang-Wen Li, Ishan Misra, Michael Rabbat, Vasu Sharma, Gabriel Synnaeve, Hu Xu, Hervé Jegou, Julien Mairal, Patrick Labatut*, Armand Joulin*, Piotr Bojanowski*
The paper "DINOv2: Learning Robust Visual Features without Supervision" by Maxime Oquab et al. explores the development of general-purpose visual features using self-supervised learning. The authors argue that existing pretraining methods, particularly self-supervised approaches, can produce such features when trained on a large curated dataset from diverse sources. They propose an automatic pipeline to build a diverse and curated image dataset, which they call LVD-142M, and train a Vision Transformer (ViT) model with 1 billion parameters. The trained model is then distilled into smaller models that outperform the best available general-purpose features, OpenCLIP, on various benchmarks at both the image and pixel levels.
The paper revisits existing discriminative self-supervised approaches, such as iBOT, and makes several technical contributions to stabilize and accelerate the training process. These improvements include a fast and memory-efficient attention mechanism, sequence packing, efficient stochastic depth, and model distillation. The authors also present ablation studies to validate the effectiveness of their proposed techniques.
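As an illustration of one of these contributions, the sketch below shows the idea behind efficient stochastic depth: rather than computing a residual branch for every sample and zeroing out the dropped outputs, the branch is evaluated only on a randomly kept subset of the batch, saving compute and memory at high drop rates. This is a simplified sketch of the idea, not the authors' implementation; the class and parameter names are made up for illustration.

```python
# Sketch of batch-level "efficient" stochastic depth: run the residual branch
# only on the kept samples instead of masking dropped outputs with zeros.
import torch
import torch.nn as nn


class EfficientStochasticDepthBlock(nn.Module):
    def __init__(self, branch: nn.Module, drop_rate: float = 0.4):
        super().__init__()
        self.branch = branch          # e.g. an attention or MLP residual branch
        self.drop_rate = drop_rate

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, tokens, dim)
        if not self.training or self.drop_rate == 0.0:
            return x + self.branch(x)
        keep = torch.rand(x.shape[0], device=x.device) >= self.drop_rate
        keep_idx = keep.nonzero(as_tuple=True)[0]
        if keep_idx.numel() == 0:     # nothing kept: identity for the whole batch
            return x
        out = x.clone()
        # Compute the branch only on kept samples; scale by 1/keep_prob so the
        # expected output matches evaluation mode.
        out[keep_idx] = x[keep_idx] + self.branch(x[keep_idx]) / (1.0 - self.drop_rate)
        return out


# Usage: wrap a toy residual branch and apply it to a batch of token sequences.
block = EfficientStochasticDepthBlock(nn.Sequential(nn.LayerNorm(64), nn.Linear(64, 64)))
tokens = torch.randn(8, 16, 64)      # (batch, tokens, dim)
print(block(tokens).shape)
```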
Empirical evaluations on a variety of computer vision tasks, including image classification, instance recognition, semantic segmentation, and depth estimation, demonstrate that DINOv2's features significantly outperform those of state-of-the-art self-supervised and weakly-supervised models. The results show strong generalization, with improved performance on unseen domains and greater robustness on domain generalization benchmarks. Overall, the paper highlights the potential of self-supervised learning for producing transferable, robust visual features that work well without fine-tuning.
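Since these benchmarks are run on frozen features, the evaluation protocol can be sketched as a linear probe: only a linear classifier is trained on top of the backbone's embeddings. In the sketch below, the feature and label arrays are placeholders standing in for embeddings produced by a frozen DINOv2 backbone, and scikit-learn's logistic regression is just one reasonable choice of linear head.

```python
# Sketch of linear-probe evaluation on frozen features: the backbone is never
# fine-tuned; only a linear classifier is fit on its embeddings.
# The arrays below are random placeholders for precomputed features/labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
train_feats, train_labels = rng.normal(size=(1000, 384)), rng.integers(0, 10, 1000)
val_feats, val_labels = rng.normal(size=(200, 384)), rng.integers(0, 10, 200)

# Train only the linear head; the backbone that produced the features stays frozen.
probe = LogisticRegression(max_iter=1000)
probe.fit(train_feats, train_labels)
print("linear-probe accuracy:", accuracy_score(val_labels, probe.predict(val_feats)))
```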