Self-Attentive Sequential Recommendation


20 Aug 2018 | Wang-Cheng Kang, Julian McAuley
The paper introduces SASRec, a self-attention based sequential recommendation model designed to capture long-term semantics while making predictions based on relatively few actions. The model aims to balance the strengths of Markov Chains (MCs) and Recurrent Neural Networks (RNNs) by using an attention mechanism to identify relevant items from a user's action history. SASRec is evaluated on both sparse and dense datasets, showing superior performance compared to various state-of-the-art models, including MC/CNN/RNN-based approaches. The model is also significantly more efficient than comparable CNN/RNN-based models, with a training speed that is an order of magnitude faster. Visualizations of attention weights demonstrate how SASRec adaptively handles datasets with varying density and uncovers meaningful patterns in activity sequences. The paper discusses the architecture, training process, and experimental results, highlighting the model's effectiveness and scalability.
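For intuition, below is a minimal PyTorch sketch of the kind of architecture the paper describes: item plus positional embeddings, a causally masked self-attention block, and next-item scoring against the item embedding table. This is an illustrative sketch, not the authors' implementation; all names, layer sizes, and hyperparameters here are assumptions.

```python
import torch
import torch.nn as nn

class SelfAttentiveSeqRec(nn.Module):
    """Sketch of a SASRec-style block (illustrative, not the paper's exact code):
    item + position embeddings -> causal self-attention -> feed-forward ->
    next-item logits via dot products with the shared item embedding table."""

    def __init__(self, num_items, max_len=50, d=64, heads=1):
        super().__init__()
        self.item_emb = nn.Embedding(num_items + 1, d, padding_idx=0)  # id 0 = padding
        self.pos_emb = nn.Embedding(max_len, d)
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)
        self.ffn = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
        self.norm1 = nn.LayerNorm(d)
        self.norm2 = nn.LayerNorm(d)

    def forward(self, seq):                      # seq: (batch, max_len) item ids
        L = seq.size(1)
        pos = torch.arange(L, device=seq.device)
        x = self.item_emb(seq) + self.pos_emb(pos)
        # Causal mask: position t may only attend to positions <= t,
        # so each prefix predicts its own next item.
        mask = torch.triu(torch.ones(L, L, dtype=torch.bool, device=seq.device), 1)
        h, _ = self.attn(x, x, x, attn_mask=mask)
        x = self.norm1(x + h)
        x = self.norm2(x + self.ffn(x))
        # Score all candidate items at each step via the shared embeddings.
        return x @ self.item_emb.weight.T        # (batch, max_len, num_items + 1)

# Usage: logits for the next item after every prefix of each sequence.
model = SelfAttentiveSeqRec(num_items=1000)
scores = model(torch.randint(1, 1001, (2, 50)))
```

The causal mask is what lets a single attention layer play the role the paper assigns to it: recent actions can dominate when the history is sparse, while longer-range items still contribute weight when the data is dense.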