Collaborative Sequential Recommendations via Multi-view GNN-transformers


June 2024 | Tianze Luo, Yong Liu, Sinno Jialin Pan
The paper "Collaborative Sequential Recommendations via Multi-view GNN-transformers" by Tianze Luo, Yong Liu, and Sinno Jialin Pan addresses a limitation of existing sequential recommendation methods, which often rely solely on the chronological relationships within individual user behavior sequences. The authors propose a framework that integrates the context information within each user's behavior sequence with collaborative information across different users' behavior sequences through a local dependency graph for each item. The framework combines Graph Neural Networks (GNNs) and Transformers to capture higher-order item dependency information, improving the robustness and accuracy of recommendations.

Key contributions of the paper include:

1. **Hierarchical Graph Aggregation**: a method to efficiently aggregate representations of sub-graphs from the item dependency graph, reducing computational complexity.
2. **Multi-view Architecture**: the model forms multiple views of each item's neighborhood, capturing both sequential and collaborative information.
3. **Dirichlet Sampling**: a technique to select important neighbors at each hop, improving efficiency and reducing overfitting.
4. **Loss Functions**: the model combines a main loss, individual view losses, and a contrastive loss to optimize the representations and keep them consistent across views.

Experimental results on five benchmark datasets show that the proposed model outperforms existing methods in recommendation accuracy, particularly in handling fluctuations in user behavior and leveraging collaborative information.
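To make the Dirichlet sampling idea concrete, the sketch below draws neighbor-sampling probabilities from a symmetric Dirichlet distribution and keeps a fixed number of neighbors per hop. This is an illustrative reconstruction, not the paper's exact procedure; the function name, the `alpha` concentration parameter, and the fixed sample size `k` are assumptions for the example.

```python
import numpy as np

def dirichlet_sample_neighbors(neighbor_ids, k, alpha=1.0, rng=None):
    """Select k neighbors for one hop by drawing sampling probabilities
    from a symmetric Dirichlet distribution (illustrative sketch; the
    paper's actual sampling scheme may differ in its details)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(neighbor_ids)
    if n <= k:
        # Fewer neighbors than the budget: keep them all.
        return list(neighbor_ids)
    # A Dirichlet draw yields a random probability vector over neighbors;
    # alpha < 1 concentrates mass on a few neighbors, alpha > 1 spreads it.
    probs = rng.dirichlet(np.full(n, alpha))
    chosen = rng.choice(n, size=k, replace=False, p=probs)
    return [neighbor_ids[i] for i in chosen]

# Example: sample 3 of 10 neighbors for one hop of an item's local graph.
sampled = dirichlet_sample_neighbors(list(range(10)), k=3)
print(sampled)  # three distinct neighbor ids from 0..9
```

Randomizing which neighbors are kept at each hop, rather than always taking the same top neighbors, is what gives the regularization effect the paper attributes to this step.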
The paper also discusses the time complexity of the proposed model, showing its efficiency compared to state-of-the-art GNN-based models.
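The combined objective described above (main loss plus individual view losses plus a contrastive consistency term) can be sketched as follows. This is a generic InfoNCE-style formulation under assumed weighting hyperparameters `lam` and `mu`; the paper's exact loss terms and weights may differ.

```python
import numpy as np

def info_nce(z1, z2, tau=0.2):
    """Contrastive loss pulling the two views' representations of the
    same item together; rows of z1 and z2 are matched positive pairs."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / tau                      # pairwise cosine similarities
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # diagonal = positive pairs

def total_loss(main_loss, view_losses, view_reps, lam=0.1, mu=0.1):
    """Combine the main loss, per-view losses, and pairwise contrastive
    terms across all views (lam and mu are illustrative weights)."""
    contrast = sum(info_nce(view_reps[i], view_reps[j])
                   for i in range(len(view_reps))
                   for j in range(i + 1, len(view_reps)))
    return main_loss + lam * sum(view_losses) + mu * contrast

# Example: three views, each a batch of 4 item embeddings of dimension 8.
rng = np.random.default_rng(1)
views = [rng.standard_normal((4, 8)) for _ in range(3)]
loss = total_loss(1.0, [0.5, 0.4, 0.3], views)
```

The contrastive term is what enforces the consistency across views mentioned in the summary: matched rows (the same item seen through different views) are treated as positives, all other rows in the batch as negatives.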