Self-supervised Graph Learning for Recommendation


2021 | Jiancan Wu, Xiang Wang, Fuli Feng, Xiangnan He, Liang Chen, Jianxun Lian, and Xing Xie
This paper addresses two limitations of graph-based recommendation models: a bias towards high-degree nodes and vulnerability to noisy interactions. To improve accuracy and robustness, the authors propose Self-supervised Graph Learning (SGL), a framework that augments graph convolutional network (GCN) recommenders with an auxiliary self-supervised task. SGL generates multiple views of each node through structural data augmentations (node dropout, edge dropout, and random walk), each of which perturbs the user-item graph in a different way. The augmented views are then used for contrastive learning: the model maximizes agreement between the two views of the same node and minimizes agreement between views of different nodes.

The authors show that SGL mitigates the bias towards high-degree nodes and improves robustness to noisy interactions. Extensive experiments on three benchmark datasets show that SGL significantly improves recommendation accuracy, especially for long-tail items, and accelerates training convergence. The method is model-agnostic and can be applied to any graph-based recommendation model, making it a valuable contribution to collaborative filtering and graph neural networks.
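To make the two core ingredients concrete, the sketch below illustrates edge dropout as a structural augmentation and an InfoNCE-style contrastive loss that treats the two views of the same node as a positive pair and all other nodes as negatives. This is not the authors' released implementation; the function names, array shapes, and hyperparameter values here are illustrative assumptions.

```python
import numpy as np

def edge_dropout(edges, drop_ratio=0.1, rng=None):
    """Create one augmented view by randomly dropping user-item edges.

    edges: (num_edges, 2) array of (user_index, item_index) pairs.
    Returns the retained edges; repeated calls yield different views.
    """
    rng = rng if rng is not None else np.random.default_rng()
    keep = rng.random(len(edges)) >= drop_ratio
    return edges[keep]

def info_nce_loss(z1, z2, temperature=0.2):
    """InfoNCE-style contrastive loss between two views of the same nodes.

    z1, z2: (num_nodes, dim) node embeddings produced by running the GCN
    on the two augmented graphs. Row i of z1 and row i of z2 form the
    positive pair; every other row acts as a negative sample.
    """
    # Normalize so that dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the diagonal (same node, different view) as the target.
    return -np.mean(np.diag(log_prob))

# Toy usage: 5 nodes, 8-dimensional embeddings, two independently dropped views.
rng = np.random.default_rng(0)
edges = rng.integers(0, 5, size=(30, 2))
view_a, view_b = edge_dropout(edges, 0.1, rng), edge_dropout(edges, 0.1, rng)
z_a, z_b = rng.normal(size=(5, 8)), rng.normal(size=(5, 8))
print("contrastive loss:", info_nce_loss(z_a, z_b))
```

In the paper, this self-supervised objective is not trained in isolation: it is optimized jointly with the main recommendation loss in a multi-task setup, so the contrastive term acts as an auxiliary signal alongside supervised training rather than replacing it.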