# OpenClaw Advanced Memory

## Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

## Install

Copy this command and send it to your AI assistant, or run it yourself:

```bash
npx skills add jtil4201/openclaw-advanced-memory
```

Three-tier AI agent memory system — real-time capture, vector search, and LLM-curated long-term recall.

## What It Does

Gives your OpenClaw agent persistent, searchable memory that survives across sessions:

- HOT tier — Redis buffer captures conversation turns in real time (every 30s)
- WARM tier — Qdrant vector store with chunked, embedded conversations (searchable, 7-day retention)
- COLD tier — LLM-curated "gems" extracted nightly (decisions, lessons, milestones — stored forever)
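
Text entering the WARM tier is chunked before it is embedded. A minimal sketch of such a chunker with overlapping windows (the 800-character window and 100-character overlap are illustrative assumptions, not the skill's actual parameters):

```python
def chunk(text: str, max_chars: int = 800, overlap: int = 100) -> list[str]:
    """Split text into overlapping windows for embedding.

    Window and overlap sizes here are assumed, not taken from the skill.
    """
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap  # step forward, keeping some context
    return chunks
```

Overlap keeps a sentence that straddles a chunk boundary retrievable from either side.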

## Requirements

- Qdrant — vector database (Docker recommended)
- Redis — buffer queue (Docker recommended)
- Ollama — local embeddings (snowflake-arctic-embed2) + curation LLM (qwen2.5:7b)
- Python 3.10+ with qdrant-client, redis, requests

No cloud APIs. No subscriptions. Runs entirely on your own hardware.
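
For the Docker pieces, a minimal compose file covering the two services might look like the sketch below (image tags, ports, and the volume name are assumptions; prefer the compose file shipped with the skill):

```yaml
services:
  qdrant:
    image: qdrant/qdrant
    ports:
      - "6333:6333"          # Qdrant HTTP API (default port)
    volumes:
      - qdrant_data:/qdrant/storage
  redis:
    image: redis:7
    ports:
      - "6379:6379"          # Redis default port

volumes:
  qdrant_data:
```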

## Setup

```bash
# 1. Start Qdrant + Redis (Docker)
docker compose up -d

# 2. Pull Ollama models
ollama pull snowflake-arctic-embed2
ollama pull qwen2.5:7b

# 3. Run the installer
bash scripts/install.sh
```

The installer sets up Qdrant collections, installs a systemd capture service, and configures cron jobs.

Edit connection hosts at the top of each script if your infra isn't on localhost.

## Usage

```bash
# Search your memory
./recall "what did we decide about pricing"
./recall "deployment" --project myproject --tier cold -v

# Check system status
./mem-status

# Force a warm flush or curation run
./warm-now
./curate-now 2026-03-01
```
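
Under the hood, a recall presumably embeds the query with the local Ollama model and ranks stored chunks by similarity. A sketch of the embedding call (Ollama's documented `/api/embeddings` endpoint, which requires Ollama to be running) plus a cosine-similarity helper; the function names and host are assumptions, not the skill's actual code:

```python
import math

import requests

OLLAMA = "http://localhost:11434"  # assumed default Ollama address


def embed(text: str, model: str = "snowflake-arctic-embed2") -> list[float]:
    """Embed text with a local Ollama model (requires Ollama running)."""
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": model, "prompt": text}, timeout=60)
    r.raise_for_status()
    return r.json()["embedding"]


def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity, the ranking metric the Qdrant search would use."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)
```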

## Schedules

| Component | Schedule | What It Does |
| --- | --- | --- |
| mem-capture | Always running (systemd) | Watches transcripts → Redis |
| mem-warm | Every 30 min (cron) | Redis → Qdrant warm |
| mem-curate | Nightly 2 AM (cron) | Warm → LLM curation → Qdrant cold |
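
The mem-warm and mem-curate schedules translate to crontab entries along these lines (the script paths are illustrative assumptions; the installer writes the real entries):

```
*/30 * * * * /opt/openclaw-memory/scripts/mem-warm.sh    # Redis -> Qdrant warm
0 2 * * *    /opt/openclaw-memory/scripts/mem-curate.sh  # nightly curation run
```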

## How Curation Works

Every night, a local LLM (qwen2.5:7b via Ollama) reads the day's conversations and extracts structured gems:

```json
{
  "gem": "Chose DistilBERT over TinyBERT — 99.69% F1, zero false positives",
  "context": "A/B tested both architectures on red team suite",
  "categories": ["decision", "technical"],
  "project": "guardian",
  "importance": "high"
}
```

Only decisions, milestones, lessons, and people info make the cut. Casual banter and debugging noise get filtered out.
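
That keep/drop decision can be approximated as a category filter over the LLM's structured output. The category set below mirrors the list above; the function itself is an illustrative assumption, not the skill's actual code:

```python
KEEP = {"decision", "milestone", "lesson", "people"}


def is_gem(candidate: dict) -> bool:
    """Keep a candidate only if it carries at least one curated category.

    Banter and debugging noise carry none of them, so they are dropped.
    """
    return bool(KEEP.intersection(candidate.get("categories", [])))
```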

## Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

## Related Skills

Related by shared tags or category signals.

- Tinmem Memory System (Automation) — Provides persistent memory management for storing, retrieving, updating, and deleting user-related information across conversations in OpenClaw AI.
- 一步完成进化 ("Evolve in One Step", Automation) — Use when you need to stand up or standardize a fresh OpenClaw setup as the Fire Dragon Fruit Architecture: one strong main, one isolated rescue, layered file...
- Chanai Search Workflow (Automation) — External web search workflow for Chinese and global public-web information. Use when searching news, flights, prices, products, docs, tutorials, community po...