Self-supervised Learning: Generative or Contrastive

20 Mar 2021 | Xiao Liu, Fanjin Zhang, Zhenyu Hou, Li Mian, Zhaoyu Wang, Jing Zhang, Jie Tang*
Self-supervised learning (SSL) has gained significant attention as an alternative to traditional supervised learning due to its data efficiency and ability to learn representations without manual labels. This survey provides a comprehensive overview of SSL methods in computer vision, natural language processing, and graph learning, categorizing them into generative, contrastive, and generative-contrastive (adversarial) approaches. Generative methods, such as auto-regressive (AR) models, flow-based models, and auto-encoding (AE) models, aim to reconstruct input data from latent representations. Contrastive methods focus on measuring similarity between representations, while generative-contrastive methods combine both objectives. The survey also discusses theoretical foundations of SSL, including connections to generative adversarial networks (GANs) and mutual information maximization. Open problems and future directions for SSL are identified, emphasizing the need for better handling of high-level abstraction and robustness to data distribution shifts. The survey highlights the effectiveness of SSL in various tasks, including image classification, language modeling, and graph learning, and discusses the advantages and limitations of different SSL paradigms. Overall, SSL offers a promising approach to learning representations without reliance on labeled data, with ongoing research aimed at improving its performance and generalization capabilities.
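To illustrate the contrastive objective described above, the following is a minimal sketch of an InfoNCE-style loss in NumPy. The function name, shapes, and noise levels are illustrative assumptions, not taken from the survey; real implementations (e.g. MoCo, SimCLR) add encoders, augmentations, and large negative pools.

```python
import numpy as np

def info_nce_loss(queries, keys, temperature=0.1):
    """Sketch of an InfoNCE-style contrastive loss (illustrative, not the
    survey's exact formulation).

    queries, keys: (N, D) arrays; row i of `keys` is the positive sample
    for row i of `queries`, and all other rows act as negatives.
    """
    # L2-normalize so dot products become cosine similarities
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    k = keys / np.linalg.norm(keys, axis=1, keepdims=True)
    logits = q @ k.T / temperature                 # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal entries as the positive class
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# Two "views" of the same data should yield a lower loss than unrelated data
aligned = info_nce_loss(x, x + 0.01 * rng.normal(size=(8, 16)))
random = info_nce_loss(x, rng.normal(size=(8, 16)))
print(aligned < random)
```

Minimizing this loss pulls each representation toward its positive view and pushes it away from the negatives, which is the "measuring similarity between representations" idea the abstract refers to.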