Memoria for OpenClaw

The most advanced memory system for AI agents. 20 cognitive layers, knowledge graph, procedural learning, dialectic queries, AI self-observation, auto skill creation, crash-safe WAL, async prefetch. Works with Claude, Cursor, Copilot, ChatGPT & any OpenClaw agent. 100% local-first (SQLite + Ollama), zero cloud cost, zero API keys required.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the following and send it to your AI assistant so it can install the skill

Install skill "Memoria for OpenClaw" with this command: npx skills add nieto42/openclaw-memoria

🧠 Memoria — Multi-Layer Persistent Memory for OpenClaw

The most complete memory system for OpenClaw. 20 layers of memory that work together, powered by YOUR choice of LLM.

Why Memoria?

🏗️ 20 Memory Layers (not just a fact store)

  1. Facts — Durable knowledge extracted from every conversation
  2. Procedures — HOW to do things, improves with repetition, learns from failures
  3. Knowledge Graph — Entities + relations connecting your facts
  4. Topics & Expertise — Tracks what you talk about most, specializes over time
  5. Observations — Short-term working memory for active context
  6. Error Detection 🔥 — Touch fire once, remember forever. Dangers captured on first occurrence
  7. Lifecycle — Fresh → Settled → Dormant. Nothing deleted, priority shifts naturally
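
The Fresh → Settled → Dormant lifecycle above can be sketched as a small state machine. This is an illustrative model only: the stage names come from the list, but the interface and age thresholds are invented for demonstration and are not Memoria's actual code.

```typescript
// Illustrative sketch of the Fresh → Settled → Dormant lifecycle.
// Stage names come from the layer list above; the age thresholds
// are assumptions for demonstration, not Memoria's real values.
type Stage = "fresh" | "settled" | "dormant";

interface MemoryEntry {
  fact: string;
  ageInDays: number;
  stage: Stage;
}

// Nothing is deleted: entries only move between stages,
// which in turn lowers their recall priority.
function advanceStage(entry: MemoryEntry): MemoryEntry {
  const stage: Stage =
    entry.ageInDays < 7 ? "fresh" :
    entry.ageInDays < 90 ? "settled" :
    "dormant";
  return { ...entry, stage };
}

const aged = advanceStage({ fact: "prefers tabs", ageInDays: 120, stage: "fresh" });
console.log(aged.stage); // "dormant"
```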

🔌 Bring Your Own LLM

Configure each layer independently. Mix and match:

  • Ollama — Run gemma3, qwen3.5, llama, or any model locally (recommended)
  • LM Studio — Use any GGUF model from your local server
  • Remote APIs — OpenAI, Anthropic, OpenRouter as primary or fallback
  • Fallback chains — Ollama → LM Studio → API. If one fails, the next takes over automatically
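
The fallback behavior described above amounts to trying each provider in order and moving on when one fails. A minimal sketch, assuming a simplified provider interface (not the plugin's actual API):

```typescript
// Sketch of a provider fallback chain (e.g. Ollama → LM Studio → remote API).
// The LlmProvider interface is a simplification for illustration.
interface LlmProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Try each provider in order; if one throws, the next takes over.
async function completeWithFallback(
  providers: LlmProvider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const p of providers) {
    try {
      return await p.complete(prompt);
    } catch (err) {
      lastError = err; // provider down or erroring — try the next one
    }
  }
  throw new Error(`All providers failed: ${String(lastError)}`);
}
```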

🏠 100% Local-First

  • SQLite + FTS5 — No external database needed
  • Local embeddings — nomic-embed-text via Ollama (zero API cost)
  • Zero cloud dependency — Works offline, your data stays on your machine
  • Fallback chain — Degrades gracefully if a provider goes down
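
Recall over local embeddings typically reduces to ranking stored vectors by cosine similarity against a query vector. A generic sketch of that idea (not Memoria's actual code; in Memoria the vectors would come from nomic-embed-text via Ollama, here they are plain arrays):

```typescript
// Generic sketch of embedding-based recall: rank stored memories by
// cosine similarity to a query embedding, then keep the top K.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

interface StoredMemory {
  text: string;
  embedding: number[];
}

function recall(query: number[], store: StoredMemory[], topK: number): StoredMemory[] {
  return [...store]
    .sort((x, y) =>
      cosineSimilarity(query, y.embedding) - cosineSimilarity(query, x.embedding))
    .slice(0, topK);
}
```

In practice a system like this would combine vector similarity with the FTS5 keyword index mentioned above, but the ranking core looks the same.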

🧬 What Makes Memoria Different

| Feature | Memoria | Basic memory plugins |
|---|---|---|
| Memory layers | 20 specialized layers | Single fact store |
| LLM choice | Any local or remote model | Usually hardcoded |
| Per-layer LLM config | ✅ Different model per layer | ❌ |
| Procedural learning | ✅ Learns HOW, not just WHAT | ❌ |
| Error detection | ✅ Auto-captures dangers | ❌ |
| Knowledge graph | ✅ Entities + relations | ❌ |
| Lifecycle management | ✅ Smart aging, never forgets | ❌ or simple TTL |
| Cost | $0 with local models | Varies |

Installation

As Plugin (recommended — one command)

openclaw plugins install clawhub:memoria-plugin

This installs Memoria from the ClawHub registry. No manual steps needed.

From source (for contributors / advanced users)

If you prefer to inspect the code first:

  1. Browse the repository: github.com/Primo-Studio/openclaw-memoria
  2. Review the source code, especially index.ts (main entrypoint) and openclaw.plugin.json (config schema)
  3. Clone and install:
cd ~/.openclaw/extensions
git clone https://github.com/Primo-Studio/openclaw-memoria.git memoria
cd memoria && npm install

Then add to your openclaw.json under plugins.entries:

{
  "memoria": { "enabled": true },
  "memory-convex": { "enabled": false }
}

Configuration

Minimal (works out of the box with Ollama)

Just install and restart. Defaults: Ollama with gemma3:4b for extraction and nomic-embed-text for embeddings.

Custom LLM per layer

"memoria": {
  "enabled": true,
  "config": {
    "llm": {
      "default": { "provider": "ollama", "model": "qwen3.5:4b" },
      "procedural": { "provider": "lmstudio", "model": "your-model" },
      "graph": { "provider": "openai", "model": "gpt-4o-mini" }
    }
  }
}
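
A fallback chain (Ollama → LM Studio → remote API, as described earlier) might be expressed like this. Note that the "fallback" key shown here is hypothetical, used only to illustrate the idea; consult openclaw.plugin.json for the actual schema.

```json
"memoria": {
  "enabled": true,
  "config": {
    "llm": {
      "default": {
        "provider": "ollama",
        "model": "qwen3.5:4b",
        "fallback": [
          { "provider": "lmstudio", "model": "your-model" },
          { "provider": "openai", "model": "gpt-4o-mini" }
        ]
      }
    }
  }
}
```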

Source Code

The full source is available on GitHub: Primo-Studio/openclaw-memoria

Key files:

  • index.ts — Main plugin entrypoint (hooks, extraction, recall pipeline)
  • procedural.ts — Procedural memory (how-to learning)
  • lifecycle.ts — Lifecycle management (fresh/settled/dormant)
  • scoring.ts — Temporal scoring and relevance ranking
  • selective.ts — Dedup, contradiction detection, fact quality
  • openclaw.plugin.json — Configuration schema
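
Temporal scoring of the kind scoring.ts is described as doing is commonly built from exponential decay on age multiplied by a relevance score. The sketch below is an assumption about the general approach, not the file's actual contents; the half-life is invented for demonstration.

```typescript
// Illustrative temporal scoring: exponential decay on age, weighted
// by a relevance score. The half-life value is an assumption for
// demonstration, not a value taken from scoring.ts.
const HALF_LIFE_DAYS = 30;

function temporalScore(relevance: number, ageInDays: number): number {
  const decay = Math.pow(0.5, ageInDays / HALF_LIFE_DAYS);
  return relevance * decay;
}

console.log(temporalScore(1.0, 0));  // 1 — a brand-new memory keeps its full score
console.log(temporalScore(1.0, 30)); // 0.5 — halved after one half-life
```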

Feedback & Community

We'd love your feedback! Tell us how Memoria works for you.

Built with ❤️ by Primo Studio 🇬🇫 — AI tooling from French Guiana.

