Useful Memories Become Faulty When Continuously Updated by LLMs
Quick Take
As LLMs continuously consolidate past experience into textual memory, memory utility first rises, then degrades, and can end up worse than using no memory at all.
Key Points
- Continued consolidation can drive memory utility below the no-memory baseline.
- Even when consolidating from ground-truth solutions, GPT-5.4 fails on 54% of ARC-AGI problems it had previously solved without memory.
- Robust agent memory should treat raw episodes as first-class evidence and gate consolidation explicitly.
Abstract: Learning from past experience benefits from two complementary forms of memory: episodic traces -- raw trajectories of what happened -- and consolidated abstractions distilled across many episodes into reusable, schema-like lessons. Recent agentic-memory systems pursue the consolidated form: an LLM rewrites past trajectories into a textual memory bank that it continuously updates with new interactions, promising self-improving agents without parameter updates. Yet we find that such consolidated memories produced by today's LLMs are often faulty even when derived from useful experiences. As consolidation proceeds, memory utility first rises, then degrades, and can fall below the no-memory baseline. More surprisingly, even when consolidating from ground-truth solutions, GPT-5.4 fails on 54% of a set of ARC-AGI problems it had previously solved without memory. We trace the regression to the consolidation step rather than the underlying experience: the same trajectories yield qualitatively different memories under different update schedules, and an episodic-only control that simply retains those trajectories remains competitive with the consolidators we test. In a controlled ARC-AGI Stream environment that exposes Retain, Delete, and Consolidate actions, agents preserve raw episodes by default and double the accuracy of their forced-consolidation counterparts; disabling consolidation entirely (episodic management only) matches this auto regime. Practically, robust agent memory should treat raw episodes as first-class evidence and gate consolidation explicitly rather than firing it after every interaction. Looking forward, reliable agentic memory will require LLMs that can consolidate without overwriting the evidence they depend on.
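To make the recommended design concrete, here is a minimal Python sketch of a memory bank exposing the Retain, Delete, and Consolidate actions the abstract describes, with consolidation behind an explicit gate instead of firing after every interaction. All names, the `summarize` callable, and the `every_n` schedule are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBank:
    """Toy agent memory holding raw episodes alongside consolidated lessons.

    Hypothetical sketch of the Retain/Delete/Consolidate interface from the
    abstract; the structure and gating heuristic are illustrative only.
    """
    episodes: list[str] = field(default_factory=list)  # raw trajectories
    lessons: list[str] = field(default_factory=list)   # consolidated abstractions

    def retain(self, trajectory: str) -> None:
        """Keep the raw episode as first-class evidence (the safe default)."""
        self.episodes.append(trajectory)

    def delete(self, index: int) -> None:
        """Drop a stale or misleading episode."""
        self.episodes.pop(index)

    def consolidate(self, summarize) -> None:
        """Distill the retained episodes into one reusable lesson.

        `summarize` is any callable mapping a list of episodes to a lesson
        string (e.g., a prompted LLM). The raw episodes are kept; replacing
        them with the summary is what the paper finds degrades recall.
        """
        if self.episodes:
            self.lessons.append(summarize(self.episodes))

def step(bank: MemoryBank, trajectory: str, summarize, every_n: int = 20) -> None:
    """Gated update: retain every episode, consolidate only periodically
    rather than after every interaction (an assumed gating schedule)."""
    bank.retain(trajectory)
    if len(bank.episodes) % every_n == 0:  # explicit gate, not per-turn
        bank.consolidate(summarize)
```

The deliberate choice here, in line with the abstract's conclusion, is that `consolidate` appends a lesson without discarding the episodes it was distilled from, so the raw evidence is never overwritten.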
| Subjects: | Artificial Intelligence (cs.AI) |
| Cite as: | arXiv:2605.12978 [cs.AI] (or arXiv:2605.12978v1 [cs.AI] for this version) |
| DOI: | https://doi.org/10.48550/arXiv.2605.12978 (arXiv-issued DOI via DataCite, pending registration) |
Submission history
From: Dylan Zhang
[v1] Wed, 13 May 2026 04:15:50 UTC (455 KB)