When Attention Closes: How LLMs Lose the Thread in Multi-Turn Interaction
Quick Take
LLMs lose access to goal-defining instructions over long multi-turn interactions as attention to those tokens closes, producing qualitatively distinct failure modes across architectures.
Key Points
- Introduces the Goal Accessibility Ratio (GAR), measuring attention from generated tokens to task-defining goal tokens.
- Identifies qualitatively distinct failure modes across architectures when the attention channel closes.
- Demonstrates a causal recall collapse (near-perfect to 11%) when attention to instructions is force-closed.
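The GAR diagnostic from the first key point can be sketched in a few lines. This is a minimal illustrative implementation, not the paper's released code: it assumes an attention matrix already averaged over heads (and optionally layers), with rows normalized by softmax, and index lists marking the goal-defining and generated token positions.

```python
import numpy as np

def goal_accessibility_ratio(attn, goal_idx, gen_idx):
    """Illustrative sketch of the Goal Accessibility Ratio (GAR):
    the share of attention mass that generated tokens place on
    goal-defining tokens.

    attn:     (num_queries, num_keys) attention matrix, rows softmax-normalized
    goal_idx: key positions of the task-defining goal tokens
    gen_idx:  query positions of the generated tokens
    """
    # Attention mass each generated token sends to the goal tokens.
    mass_to_goal = attn[np.ix_(gen_idx, goal_idx)].sum(axis=1)
    # Total attention mass per generated token (1.0 if rows are normalized).
    total_mass = attn[gen_idx].sum(axis=1)
    return float((mass_to_goal / total_mass).mean())
```

A GAR near 1 means generated tokens still attend to the instructions; a GAR near 0 indicates the attention channel has closed.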
Abstract: Large language models can follow complex instructions in a single turn, yet over long multi-turn interactions they often lose the thread of instructions, persona, and rules. This degradation has been measured behaviorally but not mechanistically explained. We propose a channel-transition account: goal-defining tokens become less accessible through attention, while goal-related information may persist in residual representations. We introduce the Goal Accessibility Ratio (GAR), measuring attention from generated tokens to task-defining goal tokens, and combine it with sliding-window ablations and residual-stream probes. When attention to instructions closes, what survives reveals architecture. Across architectures, the transition yields qualitatively distinct failure modes: some models preserve goal-conditioned behavior at vanishing attention, others fail despite decodable residual goal information, and the layer at which this encoding emerges varies from 2 to 27. A within-model causal ablation that force-closes the attention channel in Mistral collapses recall from near-perfect to 11% on a 20-fact retention task and raises persona-constraint violations above an adversarial-pressure baseline without user pressure, with both effects emerging at the predictable crossover turn. Linear probes recover per-episode recall outcomes from residual representations with AUC up to 0.99 across all four primary architectures, while input embeddings remain at chance. Across architectures and model scales, the gap between attention loss and residual decodability predicts whether goal-conditioned behavior survives channel closure. We contribute GAR as a diagnostic, the channel-transition framework as a controlled mechanistic account, and a parametric prediction of failure timing under windowed attention closure.
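The linear-probe result in the abstract (recall outcomes decoded from residual representations with high AUC) can be sketched with a standard logistic-regression probe. The data below is synthetic and the shapes are illustrative assumptions, not the paper's setup: real residual-stream features would come from the model's hidden states per episode.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, d = 200, 64

# Synthetic stand-in: binary recall outcome per episode, and residual-stream
# features in which that outcome is linearly embedded plus noise.
y = rng.integers(0, 2, size=n)
direction = rng.normal(size=d)
resid = np.outer(y, direction) + 0.5 * rng.normal(size=(n, d))

# Train a linear probe on the first 150 episodes, evaluate on the rest.
probe = LogisticRegression(max_iter=1000).fit(resid[:150], y[:150])
auc = roc_auc_score(y[150:], probe.predict_proba(resid[150:])[:, 1])
```

If the same probe trained on input embeddings instead of residual states scores near 0.5, the goal information was added by the network rather than carried in from the prompt, matching the abstract's contrast.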
| Subjects: | Artificial Intelligence (cs.AI); Computation and Language (cs.CL) |
| Cite as: | arXiv:2605.12922 [cs.AI] (or arXiv:2605.12922v1 [cs.AI] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2605.12922 (arXiv-issued DOI via DataCite; pending registration) |
Submission history
From: Vardhan Dongre
[v1] Wed, 13 May 2026 02:58:18 UTC (12,450 KB)
— Originally published at arxiv.org