VICReg: Variance-Invariance-Covariance Regularization for Self-Supervised Learning

28 Jan 2022 | Adrien Bardes, Jean Ponce, Yann LeCun
VICReg (Variance-Invariance-Covariance Regularization) is a self-supervised learning method designed to prevent collapse in joint embedding architectures, where the main challenge is to keep the encoders from producing constant or otherwise non-informative vectors. Alongside an invariance term that pulls the embeddings of two views of the same image together, VICReg applies two regularizations to each branch's embeddings separately: (1) a variance term that keeps the standard deviation of every embedding dimension above a threshold, and (2) a covariance term that decorrelates each pair of embedding variables. Unlike other methods, VICReg does not rely on techniques such as weight sharing between branches, batch normalization, feature-wise normalization, output quantization, stop gradient, or memory banks, yet it achieves results on par with the state of the art on several downstream tasks. Because it requires neither shared weights nor identical architectures across the two branches, the method is more generally applicable. VICReg is evaluated on ImageNet and other downstream tasks, demonstrating its effectiveness and stability, and the paper also shows that adding variance preservation to other self-supervised methods improves their performance.
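Below is a minimal PyTorch sketch of a loss with the three terms described above (invariance, variance, covariance). The function name, the epsilon inside the square root, and the loss coefficients are illustrative assumptions based on the defaults reported in the paper; consult the authors' released code for the exact implementation.

```python
import torch
import torch.nn.functional as F

def vicreg_loss(z_a, z_b, sim_coeff=25.0, std_coeff=25.0, cov_coeff=1.0, eps=1e-4):
    """Sketch of a VICReg-style loss for two batches of embeddings of shape (N, D)."""
    N, D = z_a.shape

    # Invariance term: mean squared distance between the embeddings of the two views.
    sim_loss = F.mse_loss(z_a, z_b)

    # Variance term: hinge loss keeping the standard deviation of every
    # embedding dimension above a threshold of 1, computed per branch.
    std_a = torch.sqrt(z_a.var(dim=0) + eps)
    std_b = torch.sqrt(z_b.var(dim=0) + eps)
    std_loss = torch.mean(F.relu(1.0 - std_a)) + torch.mean(F.relu(1.0 - std_b))

    # Covariance term: penalize the off-diagonal entries of each branch's
    # covariance matrix, which decorrelates pairs of embedding variables.
    z_a = z_a - z_a.mean(dim=0)
    z_b = z_b - z_b.mean(dim=0)
    cov_a = (z_a.T @ z_a) / (N - 1)
    cov_b = (z_b.T @ z_b) / (N - 1)

    def off_diagonal(m):
        return m - torch.diag(torch.diag(m))

    cov_loss = off_diagonal(cov_a).pow(2).sum() / D + off_diagonal(cov_b).pow(2).sum() / D

    return sim_coeff * sim_loss + std_coeff * std_loss + cov_coeff * cov_loss
```

In this sketch the variance and covariance terms are applied to each branch independently, so no weight sharing, stop gradient, or normalization trick is needed to avoid collapse; the weighted sum of the three terms is the training objective.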