9 May 2024 | Yutao Sun, Li Dong, Yi Zhu, Shaohan Huang, Wenhui Wang, Shuming Ma, Quanlu Zhang, Jianyong Wang, Furu Wei
YOCO is a decoder-decoder architecture for large language models (LLMs) that only caches key-value (KV) pairs once. It consists of two components: a self-decoder and a cross-decoder. The self-decoder efficiently encodes global KV caches that are reused by the cross-decoder via cross-attention. The overall model behaves like a decoder-only Transformer, although YOCO only caches once. This design significantly reduces GPU memory demands while retaining global attention capability. Additionally, the computation flow enables prefilling to early exit without changing the final output, thereby significantly speeding up the prefill stage. Experimental results show that YOCO achieves favorable performance compared to Transformer in various settings of scaling up model size and number of training tokens. YOCO is also extended to 1M context length with near-perfect needle retrieval accuracy. Profiling results show that YOCO improves inference memory, prefill latency, and throughput by orders of magnitude across context lengths and model sizes. Code is available at https://aka.ms/YOCO.
The proposed architecture, named YOCO, is designed for autoregressive modeling, such as large language models (LLMs). As shown in Figure 2, the decoder-decoder architecture has two parts, i.e., a self-decoder and a cross-decoder. Specifically, YOCO is a stack of L blocks, where the first L/2 layers form the self-decoder and the remaining L/2 layers form the cross-decoder. Given an input sequence x = x₁ ⋯ x_{|x|}, the input embeddings are packed into X⁰ = [x₁, ⋯, x_{|x|}] ∈ ℝ^{|x| × d_model}, where d_model is the hidden dimension. We first obtain contextualized vector representations X^l = Self-Decoder(X^{l-1}), l ∈ [1, L/2], where X^{L/2} is used to produce the KV caches K̂, V̂ for the cross-decoder. Then we compute X^l = Cross-Decoder(X^{l-1}, K̂, V̂), l ∈ [L/2 + 1, L] to obtain the output vectors X^L.
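To make this computation flow concrete, here is a minimal PyTorch-style sketch of the decoder-decoder forward pass. The class names, the to_kv projection, and all hyperparameters are illustrative assumptions rather than the released code, and details such as causal masking, rotary embeddings, grouped-query attention, and the sliding window are omitted.

```python
import torch
import torch.nn as nn

class DecoderLayer(nn.Module):
    """Simplified pre-norm block: attention followed by a feed-forward network."""
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.SiLU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x, k=None, v=None):
        h = self.norm1(x)
        # Self-decoder layers attend to their own context; cross-decoder layers
        # attend to the shared caches K̂, V̂ passed in as k, v.
        k = h if k is None else k
        v = h if v is None else v
        x = x + self.attn(h, k, v, need_weights=False)[0]
        return x + self.ffn(self.norm2(x))

class YOCO(nn.Module):
    def __init__(self, num_layers: int = 8, d_model: int = 512):
        super().__init__()
        assert num_layers % 2 == 0
        half = num_layers // 2
        self.self_decoder = nn.ModuleList([DecoderLayer(d_model) for _ in range(half)])
        self.cross_decoder = nn.ModuleList([DecoderLayer(d_model) for _ in range(half)])
        # A single projection produces the KV caches shared by every cross-decoder layer.
        self.to_kv = nn.Linear(d_model, 2 * d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: input embeddings X^0 of shape (batch, seq_len, d_model)
        for layer in self.self_decoder:                 # X^l = Self-Decoder(X^{l-1})
            x = layer(x)
        k_hat, v_hat = self.to_kv(x).chunk(2, dim=-1)   # K̂, V̂ cached once from X^{L/2}
        for layer in self.cross_decoder:                # X^l = Cross-Decoder(X^{l-1}, K̂, V̂)
            x = layer(x, k_hat, v_hat)
        return x                                        # X^L

out = YOCO()(torch.randn(1, 16, 512))   # -> shape (1, 16, 512)
```

In this sketch, K̂ and V̂ are computed once from the self-decoder output and reused by every cross-decoder layer; during decoding, only these shared caches plus the self-decoder's constant-size caches need to be kept, which is the sense in which YOCO caches KV pairs only once.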
Both the self-decoder and the cross-decoder follow the same block layout as the Transformer, i.e., interleaved attention and feed-forward network modules. We also include pre-RMSNorm, SwiGLU, and grouped-query attention as improvements. The difference between the two parts lies in the attention modules: the self-decoder uses efficient self-attention (e.g., sliding-window attention), whereas the cross-decoder uses global cross-attention to attend to the shared KV caches produced from the output of the self-decoder.
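As a hedged illustration of this block layout, below is a sketch of one pre-RMSNorm block with a SwiGLU feed-forward network. The attention module is left as a pluggable component (efficient self-attention in the self-decoder, cross-attention over the shared KV caches in the cross-decoder); the RMSNorm implementation, the hidden-size ratio, and the stand-in attention in the example are assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square layer normalization (no mean subtraction, no bias)."""
    def __init__(self, d_model: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(d_model))
        self.eps = eps

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class SwiGLU(nn.Module):
    """Gated feed-forward network: down(SiLU(gate(x)) * up(x))."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.gate = nn.Linear(d_model, d_hidden, bias=False)
        self.up = nn.Linear(d_model, d_hidden, bias=False)
        self.down = nn.Linear(d_hidden, d_model, bias=False)

    def forward(self, x):
        return self.down(F.silu(self.gate(x)) * self.up(x))

class Block(nn.Module):
    """Pre-RMSNorm block: x + attn(norm(x)), then x + ffn(norm(x)).
    `attn` is efficient self-attention in the self-decoder and global
    cross-attention over the shared KV caches in the cross-decoder."""
    def __init__(self, attn: nn.Module, d_model: int, ffn_ratio: float = 8 / 3):
        super().__init__()
        self.attn = attn
        self.ffn = SwiGLU(d_model, int(d_model * ffn_ratio))
        self.attn_norm = RMSNorm(d_model)
        self.ffn_norm = RMSNorm(d_model)

    def forward(self, x, **kv):
        x = x + self.attn(self.attn_norm(x), **kv)
        return x + self.ffn(self.ffn_norm(x))

# Example: a self-decoder-style block with ordinary self-attention as a stand-in
# for sliding-window attention (grouped-query attention is likewise omitted here).
class SelfAttn(nn.Module):
    def __init__(self, d_model: int, n_heads: int = 8):
        super().__init__()
        self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, x):
        return self.mha(x, x, x, need_weights=False)[0]

y = Block(SelfAttn(512), d_model=512)(torch.randn(1, 16, 512))   # -> (1, 16, 512)
```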
The key property of the efficient self-attention module is O(1) inference memory, i.e., a constant number of KV caches. For example, the cache size of sliding-window attention depends on the window size instead of the input length.
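To make the O(1) memory property concrete, here is a hedged sketch of a fixed-size (ring-buffer) KV cache for sliding-window attention during decoding: no matter how many tokens have been processed, the cache never holds more than `window` entries per layer. The class and its interface are illustrative assumptions, not the released implementation.

```python
import torch

class SlidingWindowKVCache:
    """Fixed-size (ring-buffer) KV cache: memory is O(window) per layer,
    independent of how many tokens have been processed."""
    def __init__(self, window: int, n_heads: int, head_dim: int):
        self.k = torch.zeros(window, n_heads, head_dim)
        self.v = torch.zeros(window, n_heads, head_dim)
        self.window = window
        self.pos = 0   # total number of tokens seen so far

    def append(self, k_t: torch.Tensor, v_t: torch.Tensor):
        """Insert the key/value of one new token, overwriting the oldest slot."""
        slot = self.pos % self.window
        self.k[slot] = k_t
        self.v[slot] = v_t
        self.pos += 1

    def get(self):
        """Return the cached keys/values (at most the last `window` tokens)."""
        n = min(self.pos, self.window)
        # Slot order is irrelevant for this illustration; a real implementation
        # tracks token positions (e.g., rotary embeddings applied before caching).
        return self.k[:n], self.v[:n]

cache = SlidingWindowKVCache(window=4, n_heads=2, head_dim=8)
for _ in range(10):                                   # decode 10 tokens
    cache.append(torch.randn(2, 8), torch.randn(2, 8))
k, v = cache.get()
print(k.shape)   # torch.Size([4, 2, 8]) -- bounded by the window, not the sequence
```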