Self-supervised Graph Learning (SGL) for Recommendation: This paper proposes a new learning paradigm that combines self-supervised learning (SSL) with graph-based recommendation, aiming to improve the accuracy and robustness of graph convolution networks (GCNs). It targets three limitations of existing GCN-based models: sparse supervision signals, skewed data distributions, and noisy interactions. The key idea is to supplement the classical supervised recommendation task with an auxiliary self-supervised task that reinforces node representation learning through self-discrimination. Three augmentation operators (node dropout, edge dropout, and random walk) generate different views of each node; contrastive learning then maximizes agreement between views of the same node and minimizes agreement between views of different nodes. SGL is model-agnostic and applies to any graph-based model with user and/or item embeddings; the authors implement it on LightGCN and release the code at https://github.com/wujcan/SGL. Theoretical analysis shows that SGL automatically mines hard negatives, which both improves performance and accelerates training. Empirical studies on three benchmark datasets demonstrate that SGL significantly improves recommendation accuracy, especially for long-tail items, and enhances robustness against interaction noise.
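The two core mechanics described above (stochastic graph augmentation and contrastive agreement between views) can be illustrated with a minimal sketch. This is not the authors' implementation; the function names, the numpy-based InfoNCE loss, and the fixed temperature are illustrative assumptions, shown here only to make the view-generation and self-discrimination idea concrete.

```python
import numpy as np

def edge_dropout(edges, keep_prob=0.9, rng=None):
    """Create one augmented graph view by randomly keeping a subset of edges.

    edges: list of (user, item) interaction pairs. Applying this twice with
    different random states yields two views of the same graph, as in SGL's
    edge-dropout operator.
    """
    rng = rng or np.random.default_rng()
    mask = rng.random(len(edges)) < keep_prob
    return [e for e, keep in zip(edges, mask) if keep]

def info_nce_loss(z1, z2, tau=0.2):
    """InfoNCE-style contrastive loss between two views of the same nodes.

    z1, z2: (n_nodes, dim) L2-normalized embeddings; row i of each matrix is
    a view of node i. Matching rows are positive pairs; every other row of
    z2 acts as a negative for row i of z1.
    """
    sim = (z1 @ z2.T) / tau                       # pairwise cosine similarities
    sim = sim - sim.max(axis=1, keepdims=True)    # subtract row max for stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))            # -log softmax on positives
```

In the full method this auxiliary loss would be added to the main recommendation loss (e.g. BPR) with a weighting hyperparameter, and the embeddings would come from running the GCN encoder on each augmented view.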
The authors conclude that SGL is a promising approach for recommendation systems and suggest future work to further integrate SSL with recommendation tasks.