**RecDCL: Dual Contrastive Learning for Recommendation**
**Venue:** WWW '24 (The ACM Web Conference 2024), May 13–17, 2024, Singapore
**Authors:** Dan Zhang, Yangliao Geng, Wenwen Gong, Zhongang Qi, Zhiyu Chen, Xing Tang, Ying Shan, Yuxiao Dong, Jie Tang
**Keywords:** Recommender Systems, Self-supervised Learning, Batch-wise Contrastive Learning, Feature-wise Contrastive Learning
**Abstract:**
Self-supervised learning (SSL) has achieved significant success in mining user-item intentions for collaborative filtering. Contrastive learning (CL)-based SSL addresses data sparsity by contrasting embeddings between raw and augmented data. However, existing CL-based methods focus primarily on batch-wise contrastive learning, failing to exploit regularities along the feature dimension. This paper investigates the combination of batch-wise CL (BCL) and feature-wise CL (FCL) for recommendation. Theoretical analysis reveals that combining BCL and FCL helps eliminate redundant solutions without missing the optimal ones. We propose RecDCL, a dual contrastive learning framework. Its FCL objective optimizes user-item positive pairs and the uniform distributions within users and items using a polynomial kernel; its BCL objective enhances representation robustness by generating contrastive embeddings on the output vectors. Extensive experiments on four benchmarks and one industrial dataset demonstrate that RecDCL outperforms state-of-the-art GNN-based and SSL-based models, with improvements of up to 5.65% in Recall@20.
**Contributions:**
- Theoretical analysis reveals the connection between BCL and FCL and demonstrates their cooperative benefits.
- RecDCL is proposed, integrating FCL and BCL objectives to learn informative representations.
- Extensive experiments validate the effectiveness of RecDCL, showing significant performance improvements over state-of-the-art models.
**Introduction:**
- BCL and FCL are two major types of contrastive learning objectives.
- BCL focuses on maximizing similarity between positive pairs and minimizing similarity between negative pairs.
- FCL emphasizes decorrelating embedding components along the feature dimension (canonical forms of both objectives are sketched after this list).
- Prior works have explored the connection between BCL and FCL but lack a unified interpretation of how the two relate.
- This paper reveals the inherent connection between BCL and FCL and demonstrates their cooperative benefits.
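
As a reference point (these are the canonical instances of the two families, not RecDCL's exact losses), BCL is typified by InfoNCE over sample pairs and FCL by the Barlow Twins cross-correlation objective:

$$
\mathcal{L}_{\mathrm{BCL}} = -\frac{1}{N}\sum_{i=1}^{N} \log \frac{\exp\!\big(\mathrm{sim}(z_i, z'_i)/\tau\big)}{\sum_{j=1}^{N} \exp\!\big(\mathrm{sim}(z_i, z'_j)/\tau\big)}, \qquad
\mathcal{L}_{\mathrm{FCL}} = \sum_{d=1}^{D} \big(1 - \mathcal{C}_{dd}\big)^2 + \lambda \sum_{d=1}^{D} \sum_{d' \neq d} \mathcal{C}_{dd'}^{2},
$$

where $z_i, z'_i$ are two views of sample $i$, $\tau$ is a temperature, and $\mathcal{C} = \frac{1}{N}\hat{Z}^{\top}\hat{Z}'$ is the $D \times D$ cross-correlation matrix of batch-standardized embeddings. BCL constrains the $N \times N$ sample-similarity structure, while FCL regularizes the $D \times D$ feature-correlation structure, which is why the two are complementary.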
**Methodology:**
- RecDCL combines FCL and BCL objectives to enhance representation learning.
- The FCL objective consists of UIBT (user-item Barlow Twins alignment) and UUII (user-user and item-item uniformity), capturing alignment and uniformity in user-item interactions.
- The BCL objective (Basic BCL and Advanced BCL) enhances representation robustness through embedding-level data augmentation on the output vectors; a sketch of all three loss terms follows this list.
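
Below is a minimal PyTorch sketch of the three loss terms as summarized above. It is an illustration under stated assumptions, not the authors' implementation: the construction of the second view `out_aug` and the hyperparameter defaults (`lam`, `c`, `degree`, `temperature`) are all hypothetical.

```python
import torch
import torch.nn.functional as F

def uibt_alignment(u_emb, i_emb, lam=0.005):
    """Feature-wise user-item alignment (UIBT), Barlow Twins style.

    u_emb, i_emb: [batch, dim] embeddings of positive user-item pairs.
    Standardizes each feature over the batch, then pushes the dim x dim
    cross-correlation matrix toward the identity: diagonal -> 1 aligns
    the pair, off-diagonal -> 0 decorrelates feature components.
    """
    n = u_emb.size(0)
    u = (u_emb - u_emb.mean(0)) / (u_emb.std(0) + 1e-8)
    v = (i_emb - i_emb.mean(0)) / (i_emb.std(0) + 1e-8)
    c = (u.T @ v) / n                                  # [dim, dim]
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()
    return on_diag + lam * off_diag

def uuii_uniformity(emb, c=1.0, degree=2):
    """Uniformity within users (or items) via a polynomial kernel (UUII).

    Minimizing the mean pairwise kernel (z_i . z_j + c)^degree pushes
    normalized embeddings apart, approximating a uniform distribution
    on the unit hypersphere.
    """
    z = F.normalize(emb, dim=1)
    k = (z @ z.T + c).pow(degree)                      # [batch, batch]
    n = z.size(0)
    k = k - torch.diag(torch.diagonal(k))              # drop self-pairs
    return k.sum() / (n * (n - 1))

def bcl_infonce(out, out_aug, temperature=0.2):
    """Batch-wise CL on output vectors with embedding-level augmentation.

    out_aug is a second view of the same batch (e.g., a perturbed or
    historical copy of the output embeddings); matching rows are the
    positive pairs, all other rows in the batch are negatives.
    """
    a = F.normalize(out, dim=1)
    b = F.normalize(out_aug, dim=1)
    logits = (a @ b.T) / temperature                   # [batch, batch]
    labels = torch.arange(a.size(0), device=a.device)  # positives on diagonal
    return F.cross_entropy(logits, labels)
```

In training, a weighted sum of these terms would typically be added on top of the base recommendation loss; how the second view is generated and how the terms are weighted are where the framework's design choices lie.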
**Experiments:**
- RecDCL outperforms state-of-the-art models on four public datasets and an industrial dataset.
- Ablation studies validate the effectiveness of each component in RecDCL.
- Industrial results show significant improvements in Recall@20 and NDCG@20.
**Conclusion:**
RecDCL effectively combines BCL and FCL to learn informative representations for recommendation, demonstrating superior performance compared to state-of-the-art models.