July 14-18, 2024 | Sirui Chen, Jiawei Chen, Sheng Zhou, Bohao Wang, Shen Han, Chanfei Su, Yuqing Yuan, Can Wang
SIGformer: A Sign-Aware Graph Transformer for Recommendation
SIGformer is a novel method that employs the transformer architecture for sign-aware graph-based recommendation. It integrates both positive and negative feedback to form a signed graph, enabling a more comprehensive understanding of user preferences. Existing methods face two main limitations: 1) they process positive and negative feedback separately, failing to fully utilize the collaborative information in the signed graph; 2) they rely on MLPs or GNNs to extract information from negative feedback, which may not be effective.
To address these limitations, SIGformer introduces two innovative positional encodings that capture the spectral properties and path patterns of the signed graph. These encodings enable the model to exploit the entire signed graph. Extensive experiments across five real-world datasets demonstrate the superiority of SIGformer over state-of-the-art methods. The code is available at https://github.com/StupidThree/SIGformer.
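To make the spectral side of this concrete, the sketch below derives a positional encoding from the eigenvectors of a signed graph Laplacian. It is a minimal illustration assuming the signed user-item graph is given as a dense symmetric +1/-1 adjacency matrix; the function name and the unnormalized Laplacian are our choices for illustration, not necessarily the encoding used in SIGformer itself.

```python
import numpy as np

def signed_spectral_encoding(adj_signed: np.ndarray, k: int) -> np.ndarray:
    """Return a k-dimensional spectral positional encoding per node.

    adj_signed: symmetric (n, n) adjacency matrix of the signed user-item
    graph, with +1 for positive feedback, -1 for negative, 0 otherwise.
    """
    # Degrees use absolute edge weights, so negative edges still contribute.
    degrees = np.abs(adj_signed).sum(axis=1)
    signed_laplacian = np.diag(degrees) - adj_signed
    # eigh returns eigenvalues in ascending order; the low-frequency
    # eigenvectors summarize the signed graph's global structure.
    _, eigenvectors = np.linalg.eigh(signed_laplacian)
    return eigenvectors[:, :k]  # shape (n, k): one encoding per node
```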
The key contributions of SIGformer include: 1) highlighting the importance of integrating negative feedback in graph-based recommendation and advocating the application of the transformer architecture to sign-aware graph-based recommendation; 2) proposing two innovative sign-aware positional encodings derived from the perspectives of signed graph spectrum and paths, which fully exploit the sign-aware collaborative information; 3) proposing SIGformer and conducting extensive experiments to validate its superiority over state-of-the-art methods.
SIGformer employs a transformer architecture for sign-aware recommendation, replacing GNNs with a transformer. It includes an embedding module, a sign-aware transformer module, and a prediction module. The sign-aware transformer module uses a stack of multi-layer transformers to capture collaborative information. The model uses two positional encodings: sign-aware spectral encoding (SSE) and sign-aware path encoding (SPE). SSE captures the spectral properties of the signed graph, while SPE captures the path patterns. The model's effectiveness is validated through empirical experiments on five real-world datasets, where it significantly outperforms existing graph-based methods. Additional ablation studies further confirm the critical role of incorporating negative feedback and the efficacy of the specifically designed encodings.
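The sketch below shows how these pieces could fit together: an embedding table shared by users and items, a stack of attention layers whose scores are biased by precomputed SSE and SPE terms, and an inner-product prediction module. Class and argument names are illustrative, and injecting the encodings as additive attention biases is an assumption about the design rather than the paper's exact formulation.

```python
import torch
import torch.nn as nn

class SignAwareGraphTransformer(nn.Module):
    """Minimal sketch of a SIGformer-style pipeline (not the official code)."""

    def __init__(self, num_users: int, num_items: int, dim: int, num_layers: int):
        super().__init__()
        # Embedding module: one row per user and per item.
        self.embed = nn.Embedding(num_users + num_items, dim)
        self.num_users = num_users
        self.num_layers = num_layers

    def forward(self, sse_bias: torch.Tensor, spe_bias: torch.Tensor) -> torch.Tensor:
        # sse_bias / spe_bias: (n, n) attention biases precomputed from the
        # signed graph's spectrum and paths (shapes assumed for illustration).
        h = self.embed.weight
        layer_outputs = [h]
        for _ in range(self.num_layers):
            scores = h @ h.t() / h.shape[-1] ** 0.5   # dot-product attention scores
            scores = scores + sse_bias + spe_bias     # inject sign-aware encodings
            attn = torch.softmax(scores, dim=-1)
            h = attn @ h
            layer_outputs.append(h)
        # Aggregate representations from all layers.
        return torch.stack(layer_outputs).mean(dim=0)

    def predict(self, final_repr: torch.Tensor,
                user_ids: torch.Tensor, item_ids: torch.Tensor) -> torch.Tensor:
        # Prediction module: inner product of user and item representations.
        users = final_repr[user_ids]
        items = final_repr[self.num_users + item_ids]
        return (users * items).sum(dim=-1)
```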