LLM Construction
Attention Sinks and Retrieval Decay
Why transformers disproportionately attend to initial tokens (attention sinks), how StreamingLLM exploits this for infinite-length inference, and how retrieval accuracy degrades with distance and position within the context window.
Why This Matters
Large language models have context windows of 100K+ tokens, but they do not use all positions equally. Two empirical phenomena constrain what long-context performance actually means:
- Attention sinks: models allocate disproportionate attention to the first few tokens regardless of their semantic content.
- Retrieval decay: the ability to find and use information degrades based on where that information sits in the context.
These are related but distinct observations. Attention sinks were documented in StreamingLLM on open softmax-decoder families such as Llama-2, MPT, Falcon, and Pythia. Lost-in-the-middle was measured on specific long-context setups including GPT-3.5-Turbo-0613, Claude-1.3, LongChat-13B-16K, MPT-30B-Instruct, and Llama-2 checkpoints in Liu et al. The safe claim is not "all LLMs always behave this way," but that these positional effects are strong enough on evaluated model families to matter for prompt design, retrieval systems, and long-context evaluation.
Mental Model
Picture a person reading a very long document. They remember the beginning well (primacy), remember the end well (recency), and forget things in the middle. Transformers exhibit a similar pattern, but for a different reason: the attention mechanism, not memory decay, creates the bias.
The first token becomes a "sink" for attention mass that has nowhere else to go. Middle positions get less attention than beginning or end positions. The result is a U-shaped retrieval curve across position.
Attention Sinks
Attention Sink
An attention sink is a token position (typically the first token or the BOS token) that receives a disproportionately large share of attention weight across many heads and layers, regardless of the token's semantic content.
For a sequence of length $n$ under causal masking, query $t$ attends only to keys $j \le t$. Let $\alpha_{t,j}$ denote the attention weight from query position $t$ to key position $j$. The number of non-sink (query, key) pairs with $j \le t$ and $j > 1$ is $n(n-1)/2$. Position 1 is a sink if its average incoming attention far exceeds the average over non-sink pairs:

$$\frac{1}{n}\sum_{t=1}^{n} \alpha_{t,1} \;\gg\; \frac{2}{n(n-1)} \sum_{t=2}^{n} \sum_{j=2}^{t} \alpha_{t,j}.$$
Why does this happen? Softmax attention must produce a valid probability distribution over keys. When a query has no strong match among the keys, the attention mass must go somewhere. The initial tokens, having been seen by every subsequent query during autoregressive training, become a default target for unused attention. The model learns this pattern during pretraining: the first token serves as a no-op attention target visible to every query.
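The redistribution of leftover softmax mass can be illustrated with a toy NumPy sketch. This is illustrative only: the hard-coded +3 logit bias stands in for whatever key/query biases a trained model actually learns during pretraining.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

n = 16  # number of visible keys for this query

# Case 1: flat logits -- no key is strongly preferred,
# so the mass spreads (approximately) uniformly.
flat_logits = np.zeros(n)
uniform = softmax(flat_logits)

# Case 2: a small additive bias on the first key (a toy stand-in
# for the learned "sink" bias).
sink_logits = flat_logits.copy()
sink_logits[0] += 3.0
sink = softmax(sink_logits)

print(f"uniform attention to token 1: {uniform[0]:.3f}")  # 1/16 = 0.0625
print(f"with +3 logit bias:           {sink[0]:.3f}")     # ~0.572
```

A +3 shift in one logit moves the first key from 1/16 of the mass to more than half of it, which is the sense in which small learned biases can dominate the attention pattern when no key is a strong match.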
StreamingLLM
StreamingLLM
StreamingLLM is an inference strategy for processing sequences longer than the training context window. Instead of using a sliding window that evicts the oldest KV entries, StreamingLLM keeps the KV entries for a small number of initial "sink" tokens (typically 1 to 4) plus the $w$ most recent tokens.
The KV cache at step $t$ contains entries for positions $\{1, \dots, s\} \cup \{t - w + 1, \dots, t\}$, where $s$ is the number of sink tokens retained and $w$ is the size of the recent window.
A standard sliding-window KV cache (evicting the oldest tokens) causes perplexity to spike when the initial tokens leave the window. StreamingLLM avoids this by always retaining the sink tokens.
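A minimal sketch of this eviction policy, tracking only position indices. A real implementation stores key/value tensors per layer and head; the class name `StreamingKVCache` and its parameters are made up here for illustration.

```python
from collections import deque

class StreamingKVCache:
    """Toy index of which positions a StreamingLLM-style cache retains."""

    def __init__(self, num_sinks=4, window=8):
        self.num_sinks = num_sinks
        self.sinks = []                      # first num_sinks positions, never evicted
        self.recent = deque(maxlen=window)   # rolling window of recent positions

    def append(self, pos):
        if len(self.sinks) < self.num_sinks:
            self.sinks.append(pos)
        else:
            self.recent.append(pos)          # deque evicts the oldest automatically

    def cached_positions(self):
        return self.sinks + list(self.recent)

cache = StreamingKVCache(num_sinks=4, window=8)
for pos in range(1, 101):                    # stream 100 tokens
    cache.append(pos)

print(cache.cached_positions())
# [1, 2, 3, 4, 93, 94, 95, 96, 97, 98, 99, 100]
```

The cache size is bounded by `num_sinks + window` no matter how long the stream runs, which is the memory property the section describes.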
Main Results
Attention Sink Concentration
Statement
Consider a query $q$ at position $t$ attending to keys $k_1, \dots, k_t$ under causal masking. If $q^\top k_j$ is approximately constant across $j$ (no key is strongly preferred), then the attention distribution is approximately uniform. However, during training, the model learns biases such that the logit for $k_1$ is consistently larger than the logits for $k_j$, $j > 1$, when the query lacks a clear semantic match. This concentrates attention on position 1.
Empirically, on evaluated decoder-only families such as Llama-2, MPT, Falcon, and Pythia, the first token receives several times the average attention weight across most layers beyond the input layer (Xiao et al., arXiv 2023 / ICLR 2024).
Intuition
Softmax forces a valid distribution. The model needs a "default" key to attend to when nothing is relevant. The first token is visible to every query (it is never masked), so it becomes the universal default during training. The model learns key/query biases that make the first position attractive.
Proof Sketch
This is primarily an empirical observation. The mechanism can be understood through softmax sensitivity: when all logits are similar, small additive biases on the position-1 logit create large effects on the probability of position 1. The model parameters encode these biases through training.
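This sensitivity has a closed form in the idealized flat-logit case: if all $t$ logits are equal except a $+b$ bias on position 1, softmax assigns position 1 probability $e^b / (e^b + t - 1)$. A short script under that assumption (the bias values are arbitrary examples, not measured quantities):

```python
import math

def sink_probability(bias, n):
    """P(attend to position 1) when all n logits are equal except
    a +bias on position 1: softmax gives e^bias / (e^bias + n - 1)."""
    return math.exp(bias) / (math.exp(bias) + (n - 1))

# Even modest biases yield many multiples of the uniform 1/n baseline.
for bias in [0.0, 1.0, 2.0, 4.0]:
    p = sink_probability(bias, n=1024)
    print(f"bias={bias:.1f}: P(pos 1) = {p:.5f}  ({p * 1024:.0f}x uniform)")
```

At `bias=0.0` the probability is exactly the uniform $1/n$; each added unit of bias multiplies the odds of position 1 by $e$, which matches the "several times the average attention weight" scale reported empirically.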
Why It Matters
Attention sinks explain why removing the first token from the KV cache during streaming inference causes perplexity to blow up. They also suggest that attention weight magnitude does not always indicate semantic relevance: high attention to the first token is structural, not semantic.
Failure Mode
Models trained with explicit "sink tokens" (e.g., a dedicated padding token at position 0) can mitigate this effect. Architectures using linear attention or non-softmax normalization may not exhibit attention sinks because they do not force a probability distribution.
Empirical Finding: StreamingLLM Perplexity Stability
Let $\mathrm{PPL}_{\mathrm{full}}(t)$ be the perplexity at position $t$ using the full KV cache (all prior tokens). Let $\mathrm{PPL}_{\mathrm{stream}}(t)$ be the perplexity using only the $s$ sink tokens and the $w$ most recent tokens. Xiao et al. (arXiv 2023 / ICLR 2024) report qualitatively, across Llama-2, MPT, Falcon, and Pythia model families and sequences of up to 4 million tokens, that the streaming perplexity tracks the full-cache perplexity:

$$\mathrm{PPL}_{\mathrm{stream}}(t) \approx \mathrm{PPL}_{\mathrm{full}}(t),$$

provided the retained cache contains a small set of initial sink tokens plus a recent window. This is a qualitative observation from the paper's perplexity figures, not a theorem or a universal threshold on $s$ and $w$. Without the sink tokens, perplexity diverges as $t$ exceeds the window size.
The sink tokens stabilize the attention distribution, and the recent tokens supply local context for prediction. Together they approximate the full cache well enough for language modeling, because most next-token prediction depends on recent context plus parametric knowledge, not on tokens thousands of positions back.
This is an empirical observation, not a theorem: there is no known bound proving that the NLL gap is small for all inputs. It holds across the tested models and corpora, and can be violated on tasks that require retrieval from evicted positions.
What it buys. StreamingLLM enables deploying LLMs on arbitrarily long streams (ongoing conversations, log monitoring) with bounded memory: the KV cache stays at $O(s + w)$ entries (sink plus recent-window tokens) instead of growing linearly with stream length.
What it does not buy. StreamingLLM does not use information in evicted tokens. Tasks requiring retrieval from the distant past (for example, "what was the third sentence of this conversation?") will fail. This is stable inference, not genuine long-range understanding.
Lost-in-the-Middle Effect
Liu et al. (2023) demonstrated a consistent pattern: when relevant information is placed at different positions in a long context, retrieval accuracy follows a U-shaped curve. Models retrieve best from the beginning and end of the context, and worst from the middle.
For a context of $k$ documents where one contains the answer:
- Placing the answer in position 1: highest accuracy
- Placing the answer in position $k$: second-highest accuracy
- Placing the answer near position $k/2$: lowest accuracy
Liu et al. measured this on multi-document QA and key-value retrieval across specific GPT-3.5, Claude-1.3, LongChat, MPT-30B-Instruct, and Llama-2 checkpoints at the time of writing (4B to 70B parameter range, 4K to 32K evaluated context lengths). The U-shaped positional effect was reproduced on those setups, with accuracy drops of roughly 20 to 40 percentage points between optimal and worst positions. Whether the same quantitative profile holds at 128K or longer or on other task families (reasoning, code) is an open empirical question, and newer models with different long-context training (YaRN, RULER-tuned, sparse attention architectures) should be measured case by case.
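One practical response in retrieval pipelines is to reorder retrieved documents so the highest-ranked ones sit at the edges of the context, pushing the least relevant into the middle. A hypothetical helper sketching that heuristic, which is motivated by the U-shaped curve but not guaranteed to help on any particular model:

```python
def order_for_lost_in_the_middle(docs_by_relevance):
    """Place documents so relevance decreases toward the middle of the
    context. Input is sorted most-relevant-first."""
    front, back = [], []
    for i, doc in enumerate(docs_by_relevance):
        if i % 2 == 0:
            front.append(doc)   # ranks 1, 3, 5, ... fill from the start
        else:
            back.append(doc)    # ranks 2, 4, 6, ... fill from the end
    return front + back[::-1]

docs = [f"doc{r}" for r in range(1, 7)]   # doc1 = most relevant
print(order_for_lost_in_the_middle(docs))
# ['doc1', 'doc3', 'doc5', 'doc6', 'doc4', 'doc2']
```

The most relevant document lands first, the second most relevant lands last, and the weakest retrievals occupy the middle positions where, on the evaluated setups, accuracy was lowest.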
What Has Actually Been Measured
| Phenomenon | Primary source | Evaluated setups | Safe takeaway |
|---|---|---|---|
| Attention sinks in streaming inference | Xiao et al. (StreamingLLM, 2024) | Llama-2, MPT, Falcon, and Pythia families under streaming language-model evaluation | Retaining a few initial sink tokens stabilizes streaming language modeling on those families; this is not the same as long-range retrieval. |
| Lost-in-the-middle retrieval decay | Liu et al. (TACL 2024) | GPT-3.5-Turbo-0613, Claude-1.3, LongChat-13B-16K, MPT-30B-Instruct, and Llama-2 variants on multi-document QA and key-value retrieval | Retrieval accuracy often falls for evidence placed in middle positions on these tasks and models. |
| Useful long-context size | LongBench, RULER, BABILong | Task suites spanning retrieval, QA, and long-context reasoning on specific released checkpoints | Advertised window length should be checked against benchmarked retrieval and reasoning behavior, not treated as self-validating. |
Common Confusions
Attention sinks are not a training bug
Attention sinks emerge from the interaction between softmax normalization and autoregressive masking. They have been measured across several standard softmax-decoder model families. Training longer or on more data is not known to remove them in those architectures. They are best treated as a recurring structural feature of softmax attention with causal masks, not as a one-off training accident.
Long context window does not mean long context usage
A model with a 128K token context window can process 128K tokens. That does not mean it uses all 128K tokens effectively. The lost-in-the-middle effect shows that information in the middle of the context is partially ignored. Effective context length is shorter than the maximum context length.
Attention sinks and lost-in-the-middle are not the same claim
Attention sinks are about where some heads send mass when no later token is a clear match. Lost-in-the-middle is a retrieval-performance result measured by placing relevant evidence at different positions. The first can help explain the second, but they are not interchangeable definitions.
StreamingLLM does not extend context understanding
StreamingLLM enables stable inference on arbitrarily long streams, but it only uses the $s$ sink tokens plus the $w$ most recent tokens of context at any point. It solves the inference stability problem, not the long-range retrieval problem.
Exercises
Problem
A transformer uses a StreamingLLM cache with $s$ sink tokens and $w$ recent tokens. How many token positions are cached at any step, regardless of stream length? If the model processes a 1 million token stream, how many times larger would a full cache be than the StreamingLLM cache? Express both answers in terms of $s$ and $w$.
Problem
You are building a retrieval-augmented generation system. You retrieve 20 relevant documents and concatenate them into the context. Based on the lost-in-the-middle effect, how should you order these documents to maximize the probability that the model uses the most relevant ones?
References
Canonical:
- Xiao et al., Efficient Streaming Language Models with Attention Sinks, ICLR 2024 (arXiv:2309.17453).
- Liu et al., Lost in the Middle: How Language Models Use Long Contexts, TACL 2024 (arXiv:2307.03172).
- Han et al., LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models, NAACL 2024 (arXiv:2308.16137).
- Gu et al., When Attention Sink Emerges in Language Models: An Empirical View (2024, arXiv:2410.10781).
Mechanism / sink theory:
- Darcet et al., Vision Transformers Need Registers, ICLR 2024 (arXiv:2309.16588). Vision-side companion showing register tokens remove sink artifacts.
- Sun et al., Massive Activations in Large Language Models (2024, arXiv:2402.17762). Sinks align with a small number of outlier activations that carry mass through residual streams.
Position-extension methods:
- Press et al., Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation, ICLR 2022 (arXiv:2108.12409).
- Sun et al., A Length-Extrapolatable Transformer, ACL 2023 (arXiv:2212.10554).
- Peng et al., YaRN: Efficient Context Window Extension of Large Language Models (2023, arXiv:2309.00071).
- Ding et al., LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens, ICML 2024 (arXiv:2402.13753).
Long-context evaluation:
- Bai et al., LongBench: A Bilingual, Multitask Benchmark for Long Context Understanding (2023, arXiv:2308.14508).
- Hsieh et al., RULER: What's the Real Context Size of Your Long-Context Language Models? (2024, arXiv:2404.06654).
- Kuratov et al., BABILong: Testing the Limits of LLMs with Long Context Reasoning-in-a-Haystack (2024, arXiv:2406.10149).
Next Topics
- Context engineering: practical strategies for structuring prompts given these attention biases
Last reviewed: April 23, 2026