# Honcho + Memory Mux Setup

This skill installs and enables:

- `@honcho-ai/openclaw-honcho` (Honcho tools + integration)
- `@alloralabs/honcho-memory-mux` (multiplexed memory behavior)
Then it migrates legacy memory files, updates agent docs, and validates runtime behavior.
## What “memory mux” means in practice

After setup, memory behavior should be:

- Primary retrieval: Honcho tools (`honcho_context`, `honcho_search`, etc.)
- Secondary retrieval: local memory files (`MEMORY.md`, `memory/*.md`) via `memory_search` / `memory_get`
- Durability: pre-compaction flush writes significant session learnings to `memory/YYYY-MM-DD.md`
- Fallback safety: if Honcho is unavailable, file-based memory still works
This is intentionally complementary, not a hard cutover.
## Step 0: Preflight and safety checks

Before doing anything destructive:

- Resolve the workspace root, in order, from:
  - `WORKSPACE_ROOT`
  - `~/.openclaw/openclaw.json` (`agent.workspace` or `agents.defaults.workspace`)
  - `~/.openclaw/workspace`
- Confirm the directory exists
- Inform the user that this process can upload memory files to Honcho
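The resolution order above can be sketched as a small helper. This is a minimal illustration, not OpenClaw's actual implementation; the config keys (`agent.workspace`, `agents.defaults.workspace`) are taken from the checklist above, and the function signature is hypothetical:

```python
import json
import os
from pathlib import Path

def resolve_workspace_root(env=None, config_path="~/.openclaw/openclaw.json"):
    """Resolve the workspace root using the preflight order above (sketch)."""
    env = os.environ if env is None else env
    # 1. Explicit environment override
    if env.get("WORKSPACE_ROOT"):
        return Path(env["WORKSPACE_ROOT"]).expanduser()
    # 2. Keys in ~/.openclaw/openclaw.json
    cfg_file = Path(config_path).expanduser()
    if cfg_file.is_file():
        cfg = json.loads(cfg_file.read_text())
        for getter in (
            lambda c: c.get("agent", {}).get("workspace"),
            lambda c: c.get("agents", {}).get("defaults", {}).get("workspace"),
        ):
            value = getter(cfg)
            if value:
                return Path(value).expanduser()
    # 3. Default location
    return Path("~/.openclaw/workspace").expanduser()
```

The point is the precedence: an explicit environment variable always wins, config keys are consulted next, and the default path is a guaranteed fallback.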
## Step 1: Install + enable Honcho plugin

Use the OpenClaw plugin manager (not a workspace `npm install`):

```bash
openclaw plugins install @honcho-ai/openclaw-honcho
openclaw plugins enable openclaw-honcho
```

If gateway logs show missing SDK deps:

```bash
cd ~/.openclaw/extensions/openclaw-honcho
npm install
```

Restart the gateway if required.
## Step 2: Install + enable memory mux extension

Install and enable the memory-mux extension:

```bash
openclaw plugins install @alloralabs/honcho-memory-mux
openclaw plugins enable @alloralabs/honcho-memory-mux
```

If the extension is local/unpublished, install from a path or git URL per your release flow.

After enabling:

- Verify the plugin appears in `openclaw plugins list`
- Verify the gateway boots cleanly (no extension load errors)
- Verify both the Honcho tools and `memory_search` / `memory_get` are present at runtime
## Step 3: Verify Honcho connectivity (self-hosted)

- Set `HONCHO_BASE_URL` (example: `http://localhost:8000`)
- `HONCHO_API_KEY` may be optional depending on auth config

If the user needs a local Honcho instance quickly:

```bash
git clone https://github.com/plastic-labs/honcho
cd honcho
cp .env.template .env
cp docker-compose.yml.example docker-compose.yml
docker compose up -d
```
Do not migrate until connection is verified.
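A simple reachability probe can gate the migration. This sketch assumes nothing about Honcho's API surface (no particular health endpoint); it only checks that the base URL answers HTTP at all, which is enough to distinguish "server up" from "connection refused":

```python
import os
import urllib.request
import urllib.error

def honcho_reachable(base_url=None, timeout=3.0):
    """Return True if the Honcho base URL answers HTTP at all (sketch).

    Any HTTP status counts as reachable; only network-level failures
    (connection refused, DNS error, timeout) count as unreachable.
    """
    base_url = base_url or os.environ.get("HONCHO_BASE_URL", "http://localhost:8000")
    try:
        urllib.request.urlopen(base_url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # server responded, even if with an error status
    except (urllib.error.URLError, OSError):
        return False
```

If this returns `False`, stop before Step 4: do not scan or upload anything.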
## Step 4: Detect and classify legacy memory files

Scan the workspace for:

### User/owner memory inputs

- `USER.md`
- `IDENTITY.md`
- `MEMORY.md`
- `memory/**` (recursive)
- `canvas/**` (recursive)

### Agent/self memory inputs

- `SOUL.md`
- `AGENTS.md` (and `AGENT.md` if present in this workspace)
- `TOOLS.md`
- `BOOTSTRAP.md`
- `HEARTBEAT.md`
Present a clear preview before upload:
- count files per class
- total bytes per class
- any unreadable/skipped files
Ask for explicit confirmation.
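The classification rule is mechanical, so it is worth pinning down. A minimal sketch, using only the file lists from this step (the `classify` helper itself is illustrative, not part of any shipped tool):

```python
from pathlib import Path

# File classes from the scan lists above
OWNER_FILES = {"USER.md", "IDENTITY.md", "MEMORY.md"}
OWNER_DIRS = ("memory", "canvas")  # scanned recursively
AGENT_FILES = {"SOUL.md", "AGENTS.md", "AGENT.md", "TOOLS.md",
               "BOOTSTRAP.md", "HEARTBEAT.md"}

def classify(relpath):
    """Return 'owner', 'agent', or None for a workspace-relative path."""
    p = Path(relpath)
    if p.name in OWNER_FILES and len(p.parts) == 1:
        return "owner"
    if p.parts and p.parts[0] in OWNER_DIRS:
        return "owner"
    if p.name in AGENT_FILES and len(p.parts) == 1:
        return "agent"
    return None  # not a memory input; skip and report
```

Files that classify to `None` should appear in the "skipped" count of the preview so nothing is silently dropped.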
## Step 5: Upload to Honcho using peer-separated sessions

Create/ensure:

- workspace (`HONCHO_WORKSPACE_ID` or `openclaw`)
- peer `owner`
- peer `openclaw`
- migration session (e.g. `migration-upload-<timestamp>`)

Add both peers to the session, then upload files:

- user/owner files → `owner`
- agent/self files → `openclaw`
If any upload fails:
- stop
- report exact file and error
- keep local files unchanged
Report summary:
- uploaded user file count
- uploaded agent file count
- failed/skipped files
## Step 6: Update workspace docs (including AGENT.md compatibility)

Update/create memory instructions in:

- `AGENTS.md`
- `SOUL.md`
- `BOOTSTRAP.md`
- `AGENT.md` (if this workspace uses it)
Preserve custom content; only replace memory-specific sections.
### Required policy text to add

- **Memory retrieval order**
  - First try `honcho_context` / `honcho_search` / `honcho_recall`
  - If needed, use `memory_search` + `memory_get` for local file recall
  - Use `honcho_session` for same-session recall
- **Memory write behavior**
  - Important durable facts can be written via `honcho_write`
  - Pre-compaction flush writes concise durable notes to `memory/YYYY-MM-DD.md`
- **Failure handling**
  - If Honcho is unavailable: continue with local memory tools and state degraded mode
  - Never claim memory certainty without retrieval evidence
- **Transparency**
  - When memory influences an answer, be able to explain the source class (Honcho recall vs local file recall vs current-session context)
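The retrieval-order and failure-handling policies combine into one small control flow. A sketch with the two tools injected as plain callables (stand-ins for the real `honcho_search` and `memory_search` tools, whose actual signatures this document does not specify):

```python
def retrieve(query, honcho_search, memory_search):
    """Honcho first, local files second; report which source answered.

    honcho_search / memory_search: callables returning a list of hits;
    honcho_search may raise if the backend is unavailable.
    Returns (hits, source_class) so the agent can satisfy the
    transparency policy above.
    """
    try:
        hits = honcho_search(query)
        if hits:
            return hits, "honcho"
    except Exception:
        pass  # degraded mode: Honcho unreachable, fall through to files
    hits = memory_search(query)
    return hits, ("local-files" if hits else "none")
```

Returning the source class alongside the hits is what makes the transparency requirement cheap to satisfy: the agent never has to reconstruct where an answer came from.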
## Step 7: Add/refresh practical memory operating conventions
Ensure docs reflect these conventions the agent has been using:
- Do not preload everything into active reasoning if not needed
- Prefer targeted retrieval over huge static context injection
- Treat disagreement as signal: preserve conflicting observations; do not silently average
- Use pre-compaction flushes to avoid losing high-value context during long sessions
- Separate memory types conceptually:
- durable facts
- episodic notes
- scratch/working notes
Even if physical storage is mixed, behavior should preserve this distinction.
## Step 8: Optional (recommended) no-fork observability add-on
Without modifying OpenClaw gateway source, the memory-mux extension can emit:
- a small memory load manifest per session start (what memory sources were pulled)
- a pre-compaction write report (what was persisted)
Store under `memory/manifests/YYYY-MM-DD/<session>.json`.
This makes memory behavior debuggable while keeping upstream updates painless.
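A minimal manifest writer, following the directory layout from this step. The manifest fields shown are an assumed shape for illustration, not a schema the extension defines:

```python
import json
from datetime import date
from pathlib import Path

def write_manifest(workspace, session_id, sources, persisted=None):
    """Write a per-session memory manifest (sketch) under
    memory/manifests/YYYY-MM-DD/<session>.json."""
    day = date.today().isoformat()
    path = Path(workspace) / "memory" / "manifests" / day / f"{session_id}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    manifest = {
        "session": session_id,
        "date": day,
        "memory_sources_loaded": sources,          # e.g. ["honcho", "MEMORY.md"]
        "pre_compaction_writes": persisted or [],  # files written at flush time
    }
    path.write_text(json.dumps(manifest, indent=2))
    return path
```

Keeping manifests as plain JSON files under `memory/` means they survive gateway upgrades and can be inspected with any tool.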
## Step 9: Validate end-to-end behavior

Run a quick live test:

- Ask the agent about a known historical fact present only in migrated memory
- Confirm the retrieval path works (Honcho first, fallback available)
- Trigger/perform a pre-compaction flush
- Confirm the new note appears in `memory/YYYY-MM-DD.md`
- Re-ask the fact in a new turn/session to validate durability
## Wrap-up report to user

Provide:

- installed/enabled plugin list
- Honcho connectivity mode (managed vs self-hosted)
- files discovered, uploaded, skipped
- docs updated (`AGENTS.md`, `SOUL.md`, `BOOTSTRAP.md`, `AGENT.md` if present)
- any follow-up actions
## Guardrails
- Never delete user memory files automatically unless user explicitly asks
- Never proceed with migration on failed Honcho connectivity
- Never silently ignore upload errors
- Prefer reversible edits and clear summaries