# 🧠 Memoria — Multi-Layer Persistent Memory for OpenClaw
The most complete memory system for OpenClaw. 20 layers of memory that work together, powered by YOUR choice of LLM.
## Why Memoria?
### 🏗️ 20 Memory Layers (not just a fact store)
- Facts — Durable knowledge extracted from every conversation
- Procedures — HOW to do things, improves with repetition, learns from failures
- Knowledge Graph — Entities + relations connecting your facts
- Topics & Expertise — Tracks what you talk about most, specializes over time
- Observations — Short-term working memory for active context
- Error Detection 🔥 — Touch fire once, remember forever. Dangers captured on first occurrence
- Lifecycle — Fresh → Settled → Dormant. Nothing deleted, priority shifts naturally
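The Fresh → Settled → Dormant lifecycle can be sketched roughly like this. This is a hypothetical illustration: the stage names come from the list above, but the thresholds and priority weights are invented for the example and are not Memoria's actual values.

```typescript
// Hypothetical sketch of a Fresh -> Settled -> Dormant lifecycle.
// Thresholds and weights are illustrative, not Memoria's real internals.
type Stage = "fresh" | "settled" | "dormant";

function stageFor(daysSinceLastRecall: number): Stage {
  if (daysSinceLastRecall < 7) return "fresh";
  if (daysSinceLastRecall < 60) return "settled";
  return "dormant"; // never deleted, only ranked lower
}

// Priority multiplier applied at recall time; dormant memories stay retrievable.
const priorityWeight: Record<Stage, number> = {
  fresh: 1.0,
  settled: 0.6,
  dormant: 0.2,
};
```

The key design point from the list above: nothing is ever deleted, a dormant memory just scores lower at recall time.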
### 🔌 Bring Your Own LLM
Configure each layer independently. Mix and match:
- Ollama — Run gemma3, qwen3.5, llama, or any model locally (recommended)
- LM Studio — Use any GGUF model from your local server
- Remote APIs — OpenAI, Anthropic, OpenRouter as primary or fallback
- Fallback chains — Ollama → LM Studio → API. If one fails, the next takes over automatically
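A fallback chain might look something like this in config. This is a hypothetical sketch: the `fallbacks` key is an assumption made for illustration and may not match Memoria's actual schema; check `openclaw.plugin.json` for the real shape.

```json5
"memoria": {
  "enabled": true,
  "config": {
    "llm": {
      // Tried in order: local Ollama first, then LM Studio, then a remote API.
      "default": { "provider": "ollama", "model": "gemma3:4b" },
      "fallbacks": [
        { "provider": "lmstudio", "model": "your-model" },
        { "provider": "openai", "model": "gpt-4o-mini" }
      ]
    }
  }
}
```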
### 🏠 100% Local-First
- SQLite + FTS5 — No external database needed
- Local embeddings — nomic-embed-text via Ollama (zero API cost)
- Zero cloud dependency — Works offline, your data stays on your machine
- Fallback chain — Degrades gracefully if a provider goes down
## 🧬 What Makes Memoria Different
| Feature | Memoria | Basic memory plugins |
|---|---|---|
| Memory layers | 20 specialized layers | Single fact store |
| LLM choice | Any local or remote model | Usually hardcoded |
| Per-layer LLM config | ✅ Different model per layer | ❌ |
| Procedural learning | ✅ Learns HOW, not just WHAT | ❌ |
| Error detection | ✅ Auto-captures dangers | ❌ |
| Knowledge graph | ✅ Entities + relations | ❌ |
| Lifecycle management | ✅ Smart aging, never forgets | ❌ or simple TTL |
| Cost | $0 with local models | Varies |
## Installation
### As Plugin (recommended — one command)
```shell
openclaw plugins install clawhub:memoria-plugin
```
This installs Memoria from the ClawHub registry. No manual steps needed.
### From source (for contributors / advanced users)
If you prefer to inspect the code first:
- Browse the repository: github.com/Primo-Studio/openclaw-memoria
- Review the source code, especially `index.ts` (main entrypoint) and `openclaw.plugin.json` (config schema)
- Clone and install:
```shell
cd ~/.openclaw/extensions
git clone https://github.com/Primo-Studio/openclaw-memoria.git memoria
cd memoria && npm install
```
Then add to your `openclaw.json` under `plugins.entries`:

```json
{
  "memoria": { "enabled": true },
  "memory-convex": { "enabled": false }
}
```
## Configuration
### Minimal (works out of the box with Ollama)
Just install and restart. Defaults: Ollama with `gemma3:4b` for extraction and `nomic-embed-text` for embeddings.
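If you prefer to pin these defaults explicitly rather than rely on them, the equivalent config would look roughly like this. This is a sketch that assumes the same shape as the per-layer example below; the `embeddings` key in particular is an assumption, not a documented field.

```json5
"memoria": {
  "enabled": true,
  "config": {
    "llm": {
      "default": { "provider": "ollama", "model": "gemma3:4b" }
    },
    // Assumed key: local embeddings via Ollama, as described above.
    "embeddings": { "provider": "ollama", "model": "nomic-embed-text" }
  }
}
```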
### Custom LLM per layer
```json5
"memoria": {
  "enabled": true,
  "config": {
    "llm": {
      "default": { "provider": "ollama", "model": "qwen3.5:4b" },
      "procedural": { "provider": "lmstudio", "model": "your-model" },
      "graph": { "provider": "openai", "model": "gpt-4o-mini" }
    }
  }
}
```
## Source Code
The full source is available on GitHub: Primo-Studio/openclaw-memoria
Key files:
- `index.ts` — Main plugin entrypoint (hooks, extraction, recall pipeline)
- `procedural.ts` — Procedural memory (how-to learning)
- `lifecycle.ts` — Lifecycle management (fresh/settled/dormant)
- `scoring.ts` — Temporal scoring and relevance ranking
- `selective.ts` — Dedup, contradiction detection, fact quality
- `openclaw.plugin.json` — Configuration schema
## Feedback & Community
We'd love your feedback! Tell us how Memoria works for you:
- 🐦 Tweet us @Nitix_ — share your setup, results, or ideas
- ⭐ Star the repo: github.com/Primo-Studio/openclaw-memoria
- 🐛 Issues: GitHub Issues
Built with ❤️ by Primo Studio 🇬🇫 — AI tooling from French Guiana.