Six-Layer Memory
Use this skill when the user wants a workspace that keeps memory proactively instead of relying on chat context alone.
What this skill sets up
- HOT: memory/SESSION-STATE.md
- WARM: local vector/index refresh when source material changes
- COLD: memory/decisions/
- CURATED: MEMORY.md plus daily logs
- CLOUD: optional SuperMemory sync
- AUTO: optional Mem0-backed fact extraction
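The layers above map onto a small per-workspace file tree. A minimal sketch of that layout, assuming a demo workspace path (only the paths named in this document are taken from the skill; everything else here is illustrative):

```shell
# Sketch of the on-disk layout the six layers map onto.
WS=./demo-workspace

mkdir -p "$WS/memory/decisions"      # COLD: durable decision records
mkdir -p "$WS/memory/auto-extract"   # AUTO/CLOUD: keys and extracted facts
touch "$WS/memory/SESSION-STATE.md"  # HOT: current session state
touch "$WS/MEMORY.md"                # CURATED: durable facts only

find "$WS" -type f
```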
Workflow
- Pick the target workspace.
- Run scripts/install_workspace.sh <workspace>.
- Configure AUTO: use LM Studio if the machine already has a local chat model and embedding model; use Ollama only if both the chat model and embedding model are actually available.
- Configure CLOUD if the user wants cross-device recall: put a SuperMemory key in memory/auto-extract/.supermemory_key.
- Add periodic execution (cron):
  59 23 * * * /usr/bin/python3 <workspace>/memory/auto_memory_6layer.py --workspace <workspace> --daily --source wal-daily
  0 6 * * * /usr/bin/python3 <workspace>/memory/auto_memory_6layer.py --workspace <workspace> --daily --source memory-daily-sync
  */30 * * * * /usr/bin/python3 <workspace>/memory/auto_memory_6layer.py --workspace <workspace> --source memory-sync
- Validate with memory/check_memory_layers.sh.
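The cron entries in the workflow can be installed without clobbering an existing crontab by appending them to the current list. A hedged sketch (the workspace path is a placeholder; review the generated file before installing it):

```shell
# Build a candidate crontab: existing entries plus the three periodic jobs.
# WS is a placeholder for the real workspace path.
WS=/path/to/workspace
CRON_FILE=$(mktemp)
{
  crontab -l 2>/dev/null || true   # keep whatever is already scheduled
  echo "59 23 * * * /usr/bin/python3 $WS/memory/auto_memory_6layer.py --workspace $WS --daily --source wal-daily"
  echo "0 6 * * * /usr/bin/python3 $WS/memory/auto_memory_6layer.py --workspace $WS --daily --source memory-daily-sync"
  echo "*/30 * * * * /usr/bin/python3 $WS/memory/auto_memory_6layer.py --workspace $WS --source memory-sync"
} > "$CRON_FILE"
cat "$CRON_FILE"
# After reviewing, install with: crontab "$CRON_FILE"
```

Appending rather than overwriting matters because `crontab -` replaces the entire crontab for the user.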
Bundled scripts
- scripts/install_workspace.sh
- scripts/auto_memory_6layer.py
- scripts/supermemory_sync.py
- scripts/mem0_sync.py
- scripts/memory_writer.py
- scripts/check_memory_layers.sh
Notes
- This skill is designed per workspace. Repeat installation for each agent workspace.
- Do not overwrite HOT state with synthetic “all good” status text.
- Prefer one canonical HOT file path: memory/SESSION-STATE.md.
- Keep MEMORY.md for durable facts only. Put operational notes elsewhere.
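The last note's split between CURATED and COLD can be illustrated with a decision record: a durable operational decision goes under memory/decisions/ rather than into MEMORY.md. A sketch, with an assumed date-prefixed filename convention that is not part of the skill:

```shell
# Record a decision in COLD storage instead of cluttering MEMORY.md.
# The filename convention below is an assumption, not part of the skill.
WS=./demo-workspace
mkdir -p "$WS/memory/decisions"
cat > "$WS/memory/decisions/2025-01-01-embedding-model.md" <<'EOF'
Decision: use the local embedding model already installed in LM Studio.
Reason: avoids a network dependency for WARM index refreshes.
EOF
cat "$WS/memory/decisions/2025-01-01-embedding-model.md"
```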
References
- Beginner setup guide (Chinese + English): references/beginner-guide-zh-en.md
- Release copy draft (Chinese + English): references/release-notes-zh-en.md