RoFormer: Enhanced Transformer with Rotary Position Embedding

November 9, 2023 | Jianlin Su, Yu Lu, Shengfeng Pan, Ahmed Murtadha, Bo Wen, Yunfeng Liu
The paper introduces Rotary Position Embedding (RoPE), a novel method to enhance the positional information in transformer-based language models. RoPE encodes absolute positions using a rotation matrix and incorporates explicit relative position dependencies in the self-attention formulation. This approach offers several advantages, including flexibility in sequence length, decay of inter-token dependencies with increasing relative distances, and the ability to equip linear self-attention with relative position encoding. The proposed method, named RoFormer, is evaluated on various long text classification benchmark datasets, demonstrating superior performance compared to baseline alternatives. The paper also provides theoretical analysis to explain some experimental results and discusses the integration of RoPE into the Huggingface library.
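To make the rotation idea concrete, here is a minimal NumPy sketch of rotary position embedding as described above: each pair of embedding dimensions is rotated by an angle proportional to the token's position, so the dot product between a rotated query and key depends only on their relative offset. The function name, shapes, and the base of 10000 are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def rotary_embedding(x, base=10000.0):
    """Apply rotary position embedding to x of shape (seq_len, dim).

    Each dimension pair (2i, 2i+1) is rotated by position * theta_i,
    with theta_i = base^(-2i/dim). Rotating queries and keys this way
    makes their dot product a function of relative position only.
    """
    seq_len, dim = x.shape
    assert dim % 2 == 0, "embedding dimension must be even"

    positions = np.arange(seq_len)[:, None]         # (seq_len, 1)
    freqs = base ** (-np.arange(0, dim, 2) / dim)   # (dim/2,)
    angles = positions * freqs                      # (seq_len, dim/2)

    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]                 # even / odd components

    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin              # 2D rotation per pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Queries and keys are rotated before the attention dot product;
# values are left unchanged.
q = np.random.randn(8, 64)
k = np.random.randn(8, 64)
scores = rotary_embedding(q) @ rotary_embedding(k).T
```

Because the positional signal is applied multiplicatively at attention time rather than added to the input embeddings, the same function works for any sequence length, which is the flexibility the abstract refers to.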