error-driven-evolution

Structured error-to-rule learning system for AI agents. Activate when an agent makes a mistake, receives a correction from the user, or needs to check past lessons before making a decision. Converts errors into executable rules (not reflections) stored in lessons.md, and enforces pre-decision rule scanning to prevent repeat mistakes. Supports sharing anonymized lessons to a community repository.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install the skill "error-driven-evolution" with this command: `npx skills add marsnavi/error-driven-evolution`

Error-Driven Evolution

Turn mistakes into rules. Not reflections, not apologies — rules.

Core Concept

When an agent makes an error or gets corrected, it must:

  1. Extract a rule (not a story)
  2. Write it to lessons.md in its workspace
  3. Scan relevant rules before future decisions in that domain
  4. Optionally share anonymized rules to the community repo

lessons.md Format

File location: {workspace}/lessons.md

Each rule follows this structure:

```markdown
### [CATEGORY] Short imperative title

- **When**: The specific situation/trigger
- **Do**: The correct action (imperative, specific)
- **Don't**: The wrong action that was taken
- **Why**: One sentence — what went wrong
- **Added**: YYYY-MM-DD
```
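A hypothetical filled-in entry might look like this (the incident, title, and date are invented for illustration):

```markdown
### [DATA] Verify row counts before presenting aggregates

- **When**: Summarizing query results for the user
- **Do**: State the row count and date range alongside any aggregate
- **Don't**: Present an average without noting how many rows it covers
- **Why**: An average over 3 rows was presented as representative of the full dataset
- **Added**: 2025-01-15
```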

Categories

| Tag | Scope |
|----------|--------------------------------------------|
| DATA | Querying, interpreting, presenting data |
| COMMS | Messaging, tone, audience, channels |
| SCOPE | Role boundaries, doing others' work |
| EXEC | Task execution, tools, file ops |
| JUDGMENT | Decisions, priorities, assumptions |
| CONTEXT | Memory, context window, info management |
| SAFETY | Security, privacy, destructive ops |
| COLLAB | Multi-agent coordination, handoffs |

When to Record

Record a rule when:

  1. User corrects you — explicit feedback
  2. User overrides your output — they redo your work
  3. Same error twice — second occurrence MUST become a rule
  4. Near miss — you catch yourself about to repeat a mistake

Do NOT record: one-off technical glitches, user preference changes (those go in MEMORY.md).

How to Record

  1. Stop. Don't apologize at length.
  2. Identify the category.
  3. Write the rule in imperative form.
  4. Append to lessons.md (never overwrite).
  5. Confirm briefly: "Added to lessons: [title]"
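Steps 2–5 above can be sketched as a small helper. The function name, signature, and workspace layout are illustrative assumptions, not part of the skill; only the entry format and the append-never-overwrite rule come from the text:

```python
from datetime import date
from pathlib import Path

# Categories defined by the skill's tag table.
VALID_TAGS = {"DATA", "COMMS", "SCOPE", "EXEC",
              "JUDGMENT", "CONTEXT", "SAFETY", "COLLAB"}

def record_rule(workspace: str, tag: str, title: str,
                when: str, do: str, dont: str, why: str) -> str:
    """Append one rule to {workspace}/lessons.md and return the brief confirmation."""
    if tag not in VALID_TAGS:
        raise ValueError(f"unknown category: {tag}")
    entry = (
        f"\n### [{tag}] {title}\n\n"
        f"- **When**: {when}\n"
        f"- **Do**: {do}\n"
        f"- **Don't**: {dont}\n"
        f"- **Why**: {why}\n"
        f"- **Added**: {date.today():%Y-%m-%d}\n"
    )
    path = Path(workspace) / "lessons.md"
    with path.open("a", encoding="utf-8") as f:  # append, never overwrite
        f.write(entry)
    return f"Added to lessons: {title}"
```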

Pre-Decision Scan

Before acting, scan lessons.md for applicable rules:

| About to... | Check |
|------------------------------|------------------------|
| Present data | [DATA] |
| Send message / write report | [COMMS] + [SCOPE] |
| Make suggestion | [JUDGMENT] + [SCOPE] |
| Execute multi-step task | [EXEC] + [CONTEXT] |
| Start new session | All (skim titles) |

Scanning means reading the `### [TAG]` headers and checking whether any rule's **When** condition matches your current situation.
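The header scan can be approximated with a regular expression. The parsing details are an assumption; the skill only specifies the `### [TAG] Title` header convention:

```python
import re
from pathlib import Path

# Matches headers of the form "### [TAG] Title" on their own line.
HEADER = re.compile(r"^### \[([A-Z]+)\] (.+)$", re.MULTILINE)

def scan_lessons(path: str, tags: set) -> list:
    """Return titles of rules whose category tag is in the set to check."""
    text = Path(path).read_text(encoding="utf-8")
    return [title for tag, title in HEADER.findall(text) if tag in tags]
```

Before sending a report, for example, an agent would call `scan_lessons("lessons.md", {"COMMS", "SCOPE"})` and review each matching rule's **When** line.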

Community Sharing

Share anonymized lessons to help other agents: https://github.com/anthropic-ai/agent-lessons

See references/community-sharing.md for the anonymization and submission process.

Setup

  1. Create lessons.md in your workspace:

```markdown
# Lessons
Rules extracted from mistakes. Append after failing, scan before deciding.
```

  2. Copy community/top-100.md to your workspace as top-100.md — this is your pre-installed immune system. Small enough to skim on startup, it covers the most common and costly mistakes across all agent deployments.

  3. Add to your startup instructions:

- On startup: skim top-100.md titles (pre-installed community lessons)
- On correction/failure: append rule to lessons.md
- Before decisions: scan lessons.md + top-100.md for [CATEGORY] rules

Loading Strategy

Your agent has two rule files:

| File | Source | Load on startup | Size target |
|------------|---------------------|------------------|--------------------|
| lessons.md | Your own mistakes | Yes, fully | Grows organically |
| top-100.md | Community top picks | Yes, skim titles | ~8KB, curated |

For deeper community search (beyond top-100), query community/{category}.md files on-demand when facing an unfamiliar situation.

Maintenance

When lessons.md exceeds 50 rules: review for duplicates, retire obsolete rules (mark them as retired rather than deleting), and consider splitting the file by category.
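The 50-rule threshold check could be sketched as follows; the threshold comes from the text above, while the function itself is illustrative:

```python
import re
from pathlib import Path

def needs_review(path: str, limit: int = 50) -> bool:
    """True when lessons.md holds more rules than the review threshold."""
    text = Path(path).read_text(encoding="utf-8")
    # Each rule starts with a "### [TAG]" header.
    count = len(re.findall(r"^### \[[A-Z]+\]", text, re.MULTILINE))
    return count > limit
```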

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Automation

NEXO Brain

Cognitive memory system for AI agents — Atkinson-Shiffrin memory model, semantic RAG, trust scoring, and metacognitive error prevention. Gives your agent per...

Registry Source · Recently Updated
Automation

Skill 编排核心 (Skill Orchestration Core)

Skill orchestration core: context management, workflow orchestration, quality assurance

Registry Source · Recently Updated
Automation

How To Use Agent

Use when improving an agent's own memory, skills, prompts, runtime rules, tool policies, AGENTS.md/agent.md files, or when adapting ideas from other agent pr...

Registry Source · Recently Updated
Automation

sciverse agent tools

SciVerse academic literature retrieval: query metadata by structured criteria, retrieve passages via natural-language semantic search, and read source texts by byte range. Suited for RAG and agent workflows that need authoritative academic literature support.

Registry Source · Recently Updated