
LLM Construction

Attention Sinks and Retrieval Decay

Why transformers disproportionately attend to initial tokens (attention sinks), how StreamingLLM exploits this for infinite-length inference, and how retrieval accuracy degrades with distance and position within the context window.

Advanced · Tier 2 · Frontier watch · ~45 min

Why This Matters

Large language models have context windows of 100K+ tokens, but they do not use all positions equally. Two empirical phenomena constrain what long-context performance actually means:

  1. Attention sinks: models allocate disproportionate attention to the first few tokens regardless of their semantic content.
  2. Retrieval decay: the ability to find and use information degrades based on where that information sits in the context.

These are related but distinct observations. Attention sinks were documented in StreamingLLM on open softmax-decoder families such as Llama-2, MPT, Falcon, and Pythia. Lost-in-the-middle was measured on specific long-context setups including GPT-3.5-Turbo-0613, Claude-1.3, LongChat-13B-16K, MPT-30B-Instruct, and Llama-2 checkpoints in Liu et al. The safe claim is not "all LLMs always behave this way," but that these positional effects are strong enough on evaluated model families to matter for prompt design, retrieval systems, and long-context evaluation.

Mental Model

Picture a person reading a very long document. They remember the beginning well (primacy), remember the end well (recency), and forget things in the middle. Transformers exhibit a similar pattern, but for a different reason: the attention mechanism, not memory decay, creates the bias.

The first token becomes a "sink" for attention mass that has nowhere else to go. Middle positions get less attention than beginning or end positions. The result is a U-shaped retrieval curve across position.

Attention Sinks

Definition

Attention Sink

An attention sink is a token position (typically the first token or the BOS token) that receives a disproportionately large share of attention weight across many heads and layers, regardless of the token's semantic content.

For a sequence of length $n$ under causal masking, query $i$ attends only to keys $j \leq i$. Let $\alpha_{i,j}$ denote the attention weight from query position $i$ to key position $j$. The number of non-sink (query, key) pairs with $j \neq 1$ and $j \leq i$ is $\sum_{i=1}^{n} (i - 1) = n(n-1)/2$. Position $j = 1$ is a sink if:

$$\frac{1}{n} \sum_{i=1}^{n} \alpha_{i,1} \gg \frac{2}{n(n-1)} \sum_{i=1}^{n} \sum_{\substack{j \neq 1 \\ j \leq i}} \alpha_{i,j}.$$

Why does this happen? Softmax attention must produce a valid probability distribution over keys. When a query has no strong match among the keys, the attention mass must go somewhere. The initial tokens, having been seen by every subsequent query during autoregressive training, become a default target for unused attention. The model learns this pattern during pretraining: the first token serves as a no-op attention target visible to every query.
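The effect can be inspected directly by dumping attention maps. Below is a minimal sketch (not the StreamingLLM authors' measurement code) using the Hugging Face `transformers` API; GPT-2 is used only because it is small and easy to download, whereas the paper's measurements were on Llama-2, MPT, Falcon, and Pythia.

```python
# Sketch: measure the average attention weight that lands on the first token,
# per layer, and compare it with a uniform-attention baseline.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", attn_implementation="eager")
model.eval()

text = "The quick brown fox jumps over the lazy dog. " * 20
ids = tok(text, return_tensors="pt").input_ids

with torch.no_grad():
    out = model(ids, output_attentions=True)   # one (1, heads, n, n) map per layer

n = ids.shape[1]
# Expected weight on any single key if each query spread its mass uniformly
# over the keys visible to it under the causal mask.
uniform_baseline = (1.0 / torch.arange(1, n + 1, dtype=torch.float)).mean().item()

for layer, attn in enumerate(out.attentions):
    sink_share = attn[0, :, :, 0].mean().item()   # mean attention on key position 0
    print(f"layer {layer:2d}: mean attention to token 0 = {sink_share:.3f} "
          f"(uniform baseline ~ {uniform_baseline:.3f})")
```

On sink-prone layers the measured share for token 0 sits far above the uniform baseline even though the first token carries no special semantic content.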

StreamingLLM

Definition

StreamingLLM

StreamingLLM is an inference strategy for processing sequences longer than the training context window. Instead of using a sliding window that evicts the oldest KV entries, StreamingLLM keeps the KV entries for a small number of initial "sink" tokens (typically 1 to 4) plus the most recent $w$ tokens.

The KV cache at step $t$ contains entries for positions $\{1, \ldots, k\} \cup \{t - w + 1, \ldots, t\}$, where $k$ is the number of sink tokens retained.

A standard sliding-window KV cache (one that evicts the oldest tokens) causes perplexity to spike when the initial tokens leave the window. StreamingLLM avoids this by always retaining the sink tokens.
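A minimal sketch of the eviction policy follows, assuming nothing about the real implementation's data structures: keep KV entries for the first $k$ positions and the most recent $w$ positions, and let everything in between fall out of the cache.

```python
# Sketch of the sink-plus-recent-window eviction policy (not the official
# StreamingLLM implementation).
from collections import deque

class SinkAndWindowCache:
    """Keep KV entries for k initial sink positions plus the w most recent positions."""

    def __init__(self, num_sinks: int = 4, window: int = 512):
        self.num_sinks = num_sinks
        self.sinks = []                       # entries for positions 0 .. num_sinks-1
        self.recent = deque(maxlen=window)    # entries for the most recent positions

    def append(self, position: int, kv_entry) -> None:
        if len(self.sinks) < self.num_sinks:
            self.sinks.append((position, kv_entry))
        else:
            self.recent.append((position, kv_entry))  # deque evicts the oldest entry

    def cached_positions(self) -> list:
        return [p for p, _ in self.sinks] + [p for p, _ in self.recent]

cache = SinkAndWindowCache(num_sinks=4, window=8)
for t in range(100):
    cache.append(t, kv_entry=None)    # stand-in for the real key/value tensors
print(cache.cached_positions())       # [0, 1, 2, 3, 92, 93, 94, 95, 96, 97, 98, 99]
```

One detail omitted here: the released StreamingLLM method assigns positional indices relative to positions within the cache rather than positions in the original stream, so the model never sees position indices beyond its training range.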

Main Results

Proposition

Attention Sink Concentration

Statement

Consider a query $q$ attending to keys $k_1, \ldots, k_n$ under causal masking. If $q^T k_j \approx c$ for all $j$ (no key is strongly preferred), then the attention distribution is approximately uniform. However, during training, the model learns biases such that $q^T k_1$ is consistently larger than $q^T k_j$ for $j > 1$ when the query lacks a clear semantic match. This concentrates attention on position 1.

Empirically, on evaluated decoder-only families such as Llama-2, MPT, Falcon, and Pythia, the first token receives several times the average attention weight across most layers beyond the input layer (Xiao et al., arXiv 2023 / ICLR 2024).

Intuition

Softmax forces a valid distribution. The model needs a "default" key to attend to when nothing is relevant. The first token is visible to every query (it is never masked), so it becomes the universal default during training. The model learns key/query biases that make the first position attractive.

Proof Sketch

This is primarily an empirical observation. The mechanism can be understood through the sensitivity of softmax to logit differences: when all logits are similar, small additive biases in $q^T k_1$ create large effects on the probability of position 1. The model parameters encode these biases through training.
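A toy numerical illustration of that sensitivity (not an experiment from the paper): with 64 equal logits, a modest additive bias on the first key is enough to pull roughly a quarter of the attention mass onto position 1.

```python
# Sketch: a small additive bias on one logit dominates softmax when the
# remaining logits are all roughly equal.
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

n = 64
flat = np.zeros(n)          # q^T k_j ≈ c for every key (take c = 0)
biased = flat.copy()
biased[0] += 3.0            # a modest learned bias toward the first key

print(softmax(flat)[0])     # ~0.016  (1/64: no preference for position 1)
print(softmax(biased)[0])   # ~0.24   (e^3 / (e^3 + 63): a quarter of the mass)
```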

Why It Matters

Attention sinks explain why removing the first token from the KV cache during streaming inference causes perplexity to blow up. They also suggest that attention weight magnitude does not always indicate semantic relevance: high attention to the first token is structural, not semantic.

Failure Mode

Models trained with explicit "sink tokens" (e.g., a dedicated padding token at position 0) can mitigate this effect. Architectures using linear attention or non-softmax normalization may not exhibit attention sinks because they do not force a probability distribution.

Empirical Finding: StreamingLLM Perplexity Stability

Let $\text{PPL}_{\text{full}}(t)$ be the perplexity at position $t$ using the full KV cache (all prior tokens). Let $\text{PPL}_{\text{stream}}(t)$ be the perplexity using only the sink tokens and the most recent $w$ tokens. Xiao et al. (arXiv 2023 / ICLR 2024) report qualitatively, across Llama-2, MPT, Falcon, and Pythia model families and sequences of up to 4 million tokens, that the streaming perplexity tracks the full-cache perplexity:

$$\log \text{PPL}_{\text{stream}}(t) \approx \log \text{PPL}_{\text{full}}(t) \qquad \text{(heuristic; no formal bound)},$$

provided the retained cache contains a small set of initial sink tokens plus a recent window. This is a qualitative observation from the paper's perplexity figures, not a theorem or a universal threshold on $k$ and $w$. Without the sink tokens, perplexity diverges as $t$ exceeds the window size.

The sink tokens stabilize the attention distribution, and the recent tokens supply local context for prediction. Together they approximate the full cache well enough for language modeling, because most next-token prediction depends on recent context plus parametric knowledge, not on tokens thousands of positions back.

This is an empirical observation, not a theorem: there is no known bound proving that the NLL gap is small for all inputs. It holds across the tested models and corpora, and can be violated on tasks that require retrieval from evicted positions.

What it buys. StreamingLLM enables deploying LLMs on arbitrarily long streams (ongoing conversations, log monitoring) with bounded memory: the KV cache stays at $O(k + w)$ instead of growing linearly.

What it does not buy. StreamingLLM does not use information in evicted tokens. Tasks requiring retrieval from the distant past (for example, "what was the third sentence of this conversation?") will fail. This is stable inference, not genuine long-range understanding.
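To make the memory bound concrete, here is a back-of-envelope sketch assuming a Llama-2-7B-like configuration (32 layers, 32 attention heads, head dimension 128, fp16 keys and values). The numbers are illustrative, not measurements from the paper.

```python
# Sketch: KV-cache memory for a full cache vs. a StreamingLLM cache,
# under assumed Llama-2-7B-like shapes.
layers, heads, head_dim, bytes_per_value = 32, 32, 128, 2
kv_bytes_per_token = 2 * layers * heads * head_dim * bytes_per_value   # keys + values

def cache_gib(num_tokens: int) -> float:
    return num_tokens * kv_bytes_per_token / 2**30

stream_len, k, w = 1_000_000, 4, 512
print(f"full cache at 1M tokens:          {cache_gib(stream_len):7.1f} GiB")  # ~488 GiB
print(f"StreamingLLM cache (k + w = 516): {cache_gib(k + w):7.2f} GiB")       # ~0.25 GiB
```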

Lost-in-the-Middle Effect

Liu et al. (2023) demonstrated a consistent pattern: when relevant information is placed at different positions in a long context, retrieval accuracy follows a U-shaped curve. Models retrieve best from the beginning and end of the context, and worst from the middle.

For a context of $n$ documents where one contains the answer:

  • Placing the answer in position 1: highest accuracy
  • Placing the answer in position $n$: second-highest accuracy
  • Placing the answer near position $n/2$: lowest accuracy

Liu et al. measured this on multi-document QA and key-value retrieval across specific GPT-3.5, Claude-1.3, LongChat, MPT-30B-Instruct, and Llama-2 checkpoints at the time of writing (4B to 70B parameter range, 4K to 32K evaluated context lengths). The U-shaped positional effect was reproduced on those setups, with accuracy drops of roughly 20 to 40 percentage points between optimal and worst positions. Whether the same quantitative profile holds at 128K or longer or on other task families (reasoning, code) is an open empirical question, and newer models with different long-context training (YaRN, RULER-tuned, sparse attention architectures) should be measured case by case.
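A schematic sketch of how such a position sweep is constructed follows. This is not Liu et al.'s evaluation code; `ask_model`, `answer_is_correct`, and the example format are hypothetical stand-ins for whatever model call and grader your own harness provides.

```python
# Sketch: place the gold document at each position among distractors and
# measure retrieval accuracy per position.
def build_context(gold_doc: str, distractors: list, gold_position: int) -> str:
    docs = list(distractors)
    docs.insert(gold_position, gold_doc)
    return "\n\n".join(f"Document {i + 1}: {d}" for i, d in enumerate(docs))

def position_sweep(examples, ask_model, answer_is_correct, n_docs: int = 20) -> dict:
    """examples: (question, gold_doc, distractors, gold_answer) tuples.
    answer_is_correct(prediction, gold_answer) should return a bool."""
    examples = list(examples)          # iterated once per candidate position
    accuracy = {}
    for pos in range(n_docs):
        hits, total = 0, 0
        for question, gold_doc, distractors, gold_answer in examples:
            context = build_context(gold_doc, distractors[: n_docs - 1], pos)
            prediction = ask_model(context=context, question=question)
            hits += answer_is_correct(prediction, gold_answer)
            total += 1
        accuracy[pos] = hits / total
    return accuracy   # plotted against pos, this traces the U-shaped curve
```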

What Has Actually Been Measured

| Phenomenon | Primary source | Evaluated setups | Safe takeaway |
| --- | --- | --- | --- |
| Attention sinks in streaming inference | Xiao et al. (StreamingLLM, 2024) | Llama-2, MPT, Falcon, and Pythia families under streaming language-model evaluation | Retaining a few initial sink tokens stabilizes streaming language modeling on those families; this is not the same as long-range retrieval. |
| Lost-in-the-middle retrieval decay | Liu et al. (TACL 2024) | GPT-3.5-Turbo-0613, Claude-1.3, LongChat-13B-16K, MPT-30B-Instruct, and Llama-2 variants on multi-document QA and key-value retrieval | Retrieval accuracy often falls for evidence placed in middle positions on these tasks and models. |
| Useful long-context size | LongBench, RULER, BABILong | Task suites spanning retrieval, QA, and long-context reasoning on specific released checkpoints | Advertised window length should be checked against benchmarked retrieval and reasoning behavior, not treated as self-validating. |

Common Confusions

Watch Out

Attention sinks are not a training bug

Attention sinks emerge from the interaction between softmax normalization and autoregressive masking. They have been measured across several standard softmax-decoder model families. Training longer or on more data is not known to remove them in those architectures. They are best treated as a recurring structural feature of softmax attention with causal masks, not as a one-off training accident.

Watch Out

Long context window does not mean long context usage

A model with a 128K token context window can process 128K tokens. That does not mean it uses all 128K tokens effectively. The lost-in-the-middle effect shows that information in the middle of the context is partially ignored. Effective context length is shorter than the maximum context length.

Watch Out

Attention sinks and lost-in-the-middle are not the same claim

Attention sinks are about where some heads send mass when no later token is a clear match. Lost-in-the-middle is a retrieval-performance result measured by placing relevant evidence at different positions. The first can help explain the second, but they are not interchangeable definitions.

Watch Out

StreamingLLM does not extend context understanding

StreamingLLM enables stable inference on arbitrarily long streams, but it only uses $k + w$ tokens of context at any point. It solves the inference stability problem, not the long-range retrieval problem.

Exercises

ExerciseCore

Problem

A transformer uses a StreamingLLM cache with $k = 4$ sink tokens and $w = 512$ recent tokens. How many token positions are cached at any step, regardless of stream length? If the model processes a 1-million-token stream, what fraction of the positions a full cache would store does the StreamingLLM cache retain?

ExerciseAdvanced

Problem

You are building a retrieval-augmented generation system. You retrieve 20 relevant documents and concatenate them into the context. Based on the lost-in-the-middle effect, how should you order these documents to maximize the probability that the model uses the most relevant ones?

References

Canonical:

  • Xiao et al., Efficient Streaming Language Models with Attention Sinks, ICLR 2024 (arXiv:2309.17453).
  • Liu et al., Lost in the Middle: How Language Models Use Long Contexts, TACL 2024 (arXiv:2307.03172).
  • Han et al., LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models, NAACL 2024 (arXiv:2308.16137).
  • Gu et al., When Attention Sink Emerges in Language Models: An Empirical View (2024, arXiv:2410.10781).

Mechanism / sink theory:

  • Darcet et al., Vision Transformers Need Registers, ICLR 2024 (arXiv:2309.16588). Vision-side companion showing register tokens remove sink artifacts.
  • Sun et al., Massive Activations in Large Language Models (2024, arXiv:2402.17762). Sinks align with a small number of outlier activations that carry mass through residual streams.

Position-extension methods:

  • Press et al., Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, ICLR 2022 (arXiv:2108.12409).
  • Sun et al., A Length-Extrapolatable Transformer, ACL 2023 (arXiv:2212.10554).
  • Peng et al., YaRN: Efficient Context Window Extension of Large Language Models (2023, arXiv:2309.00071).
  • Ding et al., LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens, ICML 2024 (arXiv:2402.13753).

Long-context evaluation:

  • Bai et al., LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (2023, arXiv:2308.14508).
  • Hsieh et al., RULER: What's the Real Context Size of Your Long-Context Language Models? (2024, arXiv:2404.06654).
  • Kuratov et al., BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack (2024, arXiv:2406.10149).

Next Topics

  • Context engineering: practical strategies for structuring prompts given these attention biases

Last reviewed: April 23, 2026
