Agent Memory Loop
Lightweight self-improvement loop for AI agents. Capture errors, corrections, and discoveries in a fast one-line format, dedup them, queue recurring or criti...
Scar memory, reflex arc, and decision traces for AI agents. Learn from failures permanently. Block repeated mistakes instantly — no LLM calls needed. Three-l...
This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.
Install the "tetra-scar" skill with this command: npx skills add tetra-scar
Related by shared tags or category signals.
Use Engrm packs and Sentinel context to surface likely mistakes, risky patterns, and lessons that would have prevented them.
Multi-agent collaboration and communication infrastructure: real-time messaging, task scheduling, memory sharing, and an evolution engine built on MCP+SSE. Supports WorkBuddy, Hermes, QClaw, and any MCP-compatible agent. 44 MCP tools, 4 permission levels, and a zero-external-dependency Python SDK. Trigger words: agent communication, hub communication, multi-agent, cross-...
Gives your OpenClaw agent persistent memory across every session. MEMORIA maintains a structured knowledge layer: who you are, what you're building, every de...