July 14–18, 2024 | Yuxi Liu, Lianghao Xia, Chao Huang
The paper "SelfGNN: Self-Supervised Graph Neural Networks for Sequential Recommendation" addresses the challenges of sequential recommendation by proposing a novel framework called Self-Supervised Graph Neural Network (SelfGNN). The main contributions of this work are:
1. **Framework Overview**: SelfGNN integrates short-term collaborative graph encoding and multi-level long-term sequential learning to capture both dynamic and static user interests.
2. **Short-term Collaborative Graph Encoding**: The model divides the global user-item graph into multiple short-term graphs based on time intervals, using Graph Convolutional Networks (GCNs) to propagate collaborative information.
3. **Multi-level Long-term Sequential Learning**: It models long-term user and item representations at different granularities, including interval-level and instance-level sequence modeling.
4. **Personalized Self-Augmented Learning**: A self-augmented learning task is designed to correct short-term interactions based on long-term user interests, enhancing robustness and adaptability.
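The pipeline described in points 2 and 3 can be sketched at a high level. The following is a minimal, illustrative numpy sketch (not the authors' implementation): it partitions toy interactions into short-term bipartite graphs by time interval, runs simplified LightGCN-style propagation on each, and fuses the per-interval user embeddings into an interval-level long-term view with a simple attention mechanism. All function names, dimensions, and the attention scheme are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, n_intervals = 4, 5, 8, 3

# Toy interactions: (user, item, timestamp); timestamps fall in [0, 30).
interactions = [(0, 1, 2), (1, 2, 5), (0, 3, 12), (2, 2, 14),
                (3, 4, 21), (1, 0, 25), (2, 1, 28)]

def split_by_interval(interactions, n_intervals, t_max=30.0):
    """Partition the global user-item graph into short-term graphs by time."""
    mats = [np.zeros((n_users, n_items)) for _ in range(n_intervals)]
    width = t_max / n_intervals
    for u, i, t in interactions:
        mats[int(t // width)][u, i] = 1.0
    return mats

def gcn_propagate(A, U, V, layers=2):
    """LightGCN-style propagation on one short-term bipartite graph."""
    # Symmetric normalization: D_u^{-1/2} A D_i^{-1/2} (clamped to avoid /0)
    du = np.maximum(A.sum(1, keepdims=True), 1.0)
    di = np.maximum(A.sum(0, keepdims=True), 1.0)
    A_norm = A / np.sqrt(du) / np.sqrt(di)
    u_out, v_out = U, V
    for _ in range(layers):
        u_out, v_out = A_norm @ v_out, A_norm.T @ u_out
    return u_out, v_out

U0 = rng.normal(size=(n_users, dim))
V0 = rng.normal(size=(n_items, dim))

# Short-term stage: one (user, item) embedding pair per time interval.
short_term = [gcn_propagate(A, U0, V0)
              for A in split_by_interval(interactions, n_intervals)]

# Interval-level long-term stage: attention-weighted fusion over intervals.
user_stack = np.stack([u for u, _ in short_term])    # (T, n_users, dim)
query = user_stack.mean(axis=0)                      # per-user mean as query
scores = np.einsum('tud,ud->tu', user_stack, query)  # (T, n_users)
weights = np.exp(scores - scores.max(axis=0))
weights /= weights.sum(axis=0)                       # softmax over intervals
long_term_users = np.einsum('tu,tud->ud', weights, user_stack)
print(long_term_users.shape)
```

The real model adds learnable transformations, instance-level sequence encoders, and the self-augmented denoising loss on top of this skeleton; the sketch only shows how short-term graph encoding feeds the interval-level long-term representation.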
The paper also includes a detailed analysis of the model's theoretical and computational complexities, as well as extensive experiments on four real-world datasets (Amazon-book, Gowalla, Movielens, and Yelp) to demonstrate the effectiveness of SelfGNN in handling data noise and sparsity. The results show that SelfGNN outperforms various state-of-the-art baselines, particularly in scenarios with high noise levels and sparse data. The paper concludes by discussing future directions, including adaptive dynamic short-term graph partitioning techniques.