Claude Managed Agents Get Persistent Memory in Public Beta
Anthropic has launched persistent memory for Claude Managed Agents in public beta, enabling AI agents to learn across sessions. Early adopters like Rakuten report 97% fewer errors and 27% lower costs. Here's how the filesystem‑based memory layer works and what it means for builders.
What Persistent Memory for Claude Agents Does
Anthropic has launched persistent memory for Claude Managed Agents in public beta, giving AI agents the ability to retain and apply knowledge across sessions. The feature addresses a fundamental limitation: until now, every new agent session started from zero, with no memory of past decisions, mistakes, or learned context.
As Angela Jiang, Head of Product for the Claude Platform, stated on LinkedIn: "Managed agents can now self learn across sessions with memory — developers can view, edit, add and delete those memories directly in the console or API."
How the Filesystem‑Based Memory Layer Works
Unlike typical key‑value memory stores, Anthropic chose a filesystem‑based approach. Memories mount directly onto a filesystem, so Claude can use the same bash and code execution tools it already knows for agentic tasks. According to SD Times, this design decision means "Claude can rely on the same bash and code execution capabilities that make it effective at agentic tasks."
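To make the design concrete, here is a minimal sketch of what a filesystem-backed memory could look like, using a local `memories/` directory as a stand-in for the mounted store. The path, file layout, and helper names are illustrative assumptions, not Anthropic's actual implementation:

```python
from pathlib import Path

# Stand-in for the mounted memory store; the real mount path is not documented here.
MEMORY_ROOT = Path("memories")

def write_memory(topic: str, note: str) -> None:
    """Append a note to a per-topic memory file, creating the store if needed."""
    MEMORY_ROOT.mkdir(parents=True, exist_ok=True)
    path = MEMORY_ROOT / f"{topic}.md"
    with path.open("a") as f:
        f.write(f"- {note}\n")

def recall(keyword: str) -> list[str]:
    """Scan every memory file for lines containing a keyword, grep-style."""
    hits = []
    for path in sorted(MEMORY_ROOT.glob("*.md")):
        for line in path.read_text().splitlines():
            if keyword.lower() in line.lower():
                hits.append(f"{path.name}: {line}")
    return hits
```

Because memories are plain files, an agent can search, diff, and reorganize them with the same shell and code tools it already uses for other work, which is the heart of the design decision quoted above.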
The memory system includes several developer‑facing features:
- Scoping: Stores can be scoped at organization or user level with different read/write permissions. Enterprise‑wide stores can be set as read‑only while per‑user stores allow read/write.
- Concurrency: Multiple agents can work against the same memory store simultaneously without overwriting each other.
- Audit trail: Every write becomes a session event in the Claude Console, recording which agent and session each memory came from.
- Rollback and redaction: Developers can roll back to earlier versions or redact specific content from the history.
- Console and API control: Full programmatic access to what's stored, shared, and rolled back — plus direct editing in the Claude Console.
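The scoping, audit, and rollback behaviors listed above can be modeled with a small toy class. This is purely illustrative, with hypothetical names throughout and no relation to Anthropic's actual API; it just shows how snapshot-on-write versioning makes rollback trivial and how each write can carry agent/session provenance:

```python
from dataclasses import dataclass, field

@dataclass
class AuditedStore:
    """Toy model of a scoped, audited, versioned memory store (illustrative only)."""
    scope: str = "user"          # an "organization" store might be shared read-only
    read_only: bool = False
    versions: list[dict] = field(default_factory=lambda: [{}])
    audit: list[tuple] = field(default_factory=list)

    def write(self, key: str, value: str, agent: str, session: str) -> None:
        if self.read_only:
            raise PermissionError(f"{self.scope} store is read-only")
        # Snapshot-on-write: every version is kept, so rollback comes for free.
        snapshot = dict(self.versions[-1])
        snapshot[key] = value
        self.versions.append(snapshot)
        # Audit trail: record which agent and session each memory came from.
        self.audit.append((agent, session, key))

    def rollback(self, steps: int = 1) -> None:
        """Revert to an earlier version (the initial empty version is always kept)."""
        keep = max(1, len(self.versions) - steps)
        del self.versions[keep:]

    def current(self) -> dict:
        return self.versions[-1]
```

In this sketch, a read-only organization-wide store simply rejects writes, while per-user stores accept them and log provenance for later review or redaction.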
The Opus 4.7 Advantage
Anthropic notes that its Opus 4.7 model is specifically optimized for filesystem‑based memory. According to SD Times, Opus 4.7 "remembers important notes across long, multi‑session work and uses them to move on to new tasks that need less up‑front context." The model is more selective about what to retain and saves more thorough, organized memories compared to earlier versions.
This matters for builders because it means the memory layer isn't just a dumb key‑value store — the model itself is trained to write better memories and read them more effectively. The combination of filesystem structure and model‑level optimization is what makes this different from community‑built memory plugins that existed before.
Early Adopter Results: Netflix, Rakuten, Wisedocs, and Ando
Anthropic brought four companies into the early access program, and the results are striking — especially Rakuten's:
| Company | Use Case | Results |
|---|---|---|
| Netflix | Carrying context across sessions; mid‑conversation human corrections | Replaces need to manually update prompts |
| Rakuten | Long‑running task agents avoiding past mistakes | 97% fewer first‑pass errors, 27% cost reduction, 34% lower latency |
| Wisedocs | Document verification pipeline with cross‑session memory | 30% faster verification |
| Ando | Making sense of messy team‑agent conversations | Eliminated need to build custom memory infrastructure |
Rakuten's numbers are the most compelling. Yusuke Kaji, General Manager of AI for Business at Rakuten, stated via EdTech Innovation Hub: "Memory in Claude Managed Agents lets us put continuous learning into production at scale. Our agents distill lessons from every session, delivering 97% fewer first‑pass errors at 27% lower cost and 34% lower latency."
What It Costs and How to Get Started
Persistent memory runs on top of Claude Managed Agents, which itself costs standard Claude API token rates plus $0.08 per session‑hour. According to a detailed analysis by Sathish Raju on Medium, this session cost is cheap at small scale but becomes significant at enterprise volumes. If you have the engineering capacity to run your own agent infrastructure, the session costs of Managed Agents may exceed the cost of doing it yourself.
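The scaling concern is easy to quantify with back-of-the-envelope arithmetic. The sketch below covers session-hour charges only; token costs are billed separately on top, and the agent counts are hypothetical:

```python
def monthly_session_cost(concurrent_agents: int, hours_per_day: float,
                         rate_per_session_hour: float = 0.08,
                         days: int = 30) -> float:
    """Session-hour cost only; token usage is billed separately on top."""
    return concurrent_agents * hours_per_day * rate_per_session_hour * days

# A small pilot: 5 agents running 8 hours a day.
small = monthly_session_cost(5, 8)       # 5 * 8 * 0.08 * 30 ≈ $96/month
# Enterprise scale: 1,000 agents running around the clock.
large = monthly_session_cost(1000, 24)   # 1000 * 24 * 0.08 * 30 ≈ $57,600/month
```

A five-agent pilot costs under $100 a month in session time, but a thousand always-on agents run to roughly $57,600 a month before a single token is billed, which is the scale at which self-hosted agent infrastructure starts to look attractive.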
To start using persistent memory:
- Deploy via the Claude Console or the new CLI
- Documentation: platform.claude.com/docs/en/managed-agents/memory
- Blog announcement: claude.com/blog/claude-managed-agents-memory
The memory feature is included in the Managed Agents public beta — no additional access request required beyond the base product.
The Trade‑Offs Builders Should Know
As Raju's analysis points out, there are honest trade‑offs to consider:
- Vendor lock‑in is real: Claude Managed Agents is Claude‑only. No GPT‑5, Gemini, or DeepSeek. If you're building a model‑agnostic architecture, this is a commitment.
- Most powerful features still gated: Multi‑agent coordination and self‑evaluation — two of the most powerful capabilities from the Managed Agents announcement — remain in research preview and require a separate access request.
- All data flows through Anthropic: Every memory write, every session event, every audit log passes through Anthropic's infrastructure. For regulated industries, this is either a feature (auditability) or a concern (data residency).
- Session costs compound: $0.08/session‑hour sounds cheap until you're running thousands of concurrent agents. The memory feature likely increases session duration since agents do more work per session.
- Migration is non‑trivial: Once your agents are building memories in the Claude filesystem, moving to another platform means starting from zero again.
That said, for teams that have been building custom memory infrastructure from scratch — as Ando founder Sara Du stated: "Memory lets us stop building memory infra and focus on the product itself" — the productivity gain is immediate.
The Bigger Picture: Agents That Actually Learn
Persistent memory is the missing piece that makes AI agents genuinely useful for production work. Without it, an agent starts every session as an amnesiac — repeating the same mistakes, relearning the same context, wasting tokens on re‑establishing what was already known. With it, agents can build institutional knowledge the way human employees do: by remembering what worked, what failed, and what preferences the user has expressed.
The competitive landscape is shifting fast. OpenAI's Codex and workspace agents are pushing in the same direction. Google's Project Mariner is building agent infrastructure with its own memory layer. But Anthropic is the first to ship persistent memory as a first‑class, filesystem‑based product with real enterprise adoption numbers behind it.
For builders deciding between agent platforms, the question is no longer just which model is smartest — it's which infrastructure lets agents actually get better over time. Right now, the answer is leaning toward Anthropic.