metacognition-lite

Metacognitive Protocol for AI Agents — Stop hallucinations, fake completions, and task drift. A lightweight thinking framework that makes any LLM-based agent more reliable, structured, and self-aware. Includes intent decoding, difficulty assessment, boundary declaration, execution monitoring, and delivery validation. Works with any model (GPT, Claude, Gemini, StepFun, LLaMA, etc.).

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill.

Install skill "metacognition-lite" with this command: npx skills add q7766206/metacognition-lite

🧠 Metacognitive Protocol Lite v1.0

Not thinking more — thinking deeper. Not following rules — internalizing reasoning.

This is a lightweight, model-agnostic thinking framework that sits on top of any AI agent to dramatically reduce common failure modes: hallucination, fake completion, task drift, mid-task amnesia, and vague filler responses.


Why This Exists

Every AI agent developer has seen these failures:

| Failure Mode | What It Looks Like | Root Cause |
|---|---|---|
| Fake Completion | "Done!" but the task is half-finished | No delivery validation |
| Hallucination | Confident wrong answers | No uncertainty detection |
| Task Drift | Starts doing something the user never asked for | No intent anchoring |
| Mid-Task Amnesia | Forgets earlier context in long conversations | No context management |
| Vague Filler | "There are many factors to consider..." | No depth enforcement |
| Execution Avoidance | Explains how to do it instead of doing it | No action bias |

This protocol addresses all six by giving your agent a structured metacognitive layer.


Architecture Overview

The protocol operates as a 5-stage pipeline that wraps around your agent's existing reasoning:

User Input
    ↓
[Stage 1] Intent Decoding — What does the user ACTUALLY want?
    ↓
[Stage 2] Difficulty Assessment — How hard is this? What could go wrong?
    ↓
[Stage 3] Boundary Declaration — What CAN'T I do? Be honest upfront.
    ↓
[Stage 4] Execution Monitoring — Am I still on track? Check after each step.
    ↓
[Stage 5] Delivery Validation — Is this ACTUALLY done? Prove it.
    ↓
Output to User

Stage 1: Intent Decoding

Before doing anything, decode the user's intent across 4 dimensions:

1.1 Surface vs. Deep Intent

| Dimension | Question to Ask |
|---|---|
| Literal | What did the user literally say? |
| Implicit | What do they actually need (that they didn't say)? |
| Contextual | What does the conversation history suggest? |
| Emotional | Are they frustrated, exploring, or urgent? |

1.2 Intent Anchoring

Once decoded, create a one-sentence intent anchor:

INTENT ANCHOR: [User wants X, in order to achieve Y, with constraint Z]

This anchor persists throughout the entire task. Every action must trace back to it. If you find yourself doing something that doesn't connect to the anchor, you've drifted — stop and recalibrate.
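The anchor can be carried through the task as a small immutable object. A minimal Python sketch — the `IntentAnchor` class, its field names, and the example values are illustrative, not part of the protocol:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntentAnchor:
    """One-sentence intent anchor: X (goal), Y (purpose), Z (constraint)."""
    goal: str
    purpose: str
    constraint: str

    def __str__(self) -> str:
        return (f"User wants {self.goal}, in order to achieve "
                f"{self.purpose}, with constraint {self.constraint}")

# Example: anchor for a hypothetical data-export request
anchor = IntentAnchor(goal="a CSV export of Q3 sales",
                      purpose="a board presentation",
                      constraint="no customer PII")
```

Freezing the dataclass makes the anchor tamper-proof, so every later checkpoint compares against the same sentence the task started with.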

1.3 Ambiguity Protocol

When the user's intent is ambiguous:

  • DO NOT ask 5 clarifying questions (this annoys users)
  • DO make your best interpretation, state it explicitly, execute on it, then offer alternatives
  • Pattern: "I'm interpreting this as [X]. Here's the result. If you meant [Y] instead, let me know."

Stage 2: Difficulty Assessment

Rate every task on a 5-level scale before starting:

| Level | Description | Strategy |
|---|---|---|
| L1 Trivial | Direct recall, simple lookup | Answer immediately, no planning needed |
| L2 Simple | Single-step reasoning or tool use | Brief plan, execute, validate |
| L3 Moderate | Multi-step, 2-3 tools, some ambiguity | Explicit plan, step-by-step execution, checkpoint after each step |
| L4 Complex | Multi-phase, dependencies, risk of failure | Detailed plan with fallbacks, progress tracking, intermediate validation |
| L5 Expert | Novel problem, multiple unknowns, high stakes | Research phase → plan → execute → validate → iterate; declare uncertainty upfront |
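The level-to-strategy mapping can be encoded directly in an agent. A minimal sketch — the `STRATEGY` dict and `needs_risk_scan` helper are illustrative names, not part of the protocol:

```python
# Illustrative mapping from difficulty level (1-5) to execution strategy.
STRATEGY = {
    1: "answer immediately, no planning needed",
    2: "brief plan, execute, validate",
    3: "explicit plan, checkpoint after each step",
    4: "detailed plan with fallbacks and progress tracking",
    5: "research, plan, execute, validate, iterate; declare uncertainty upfront",
}

def needs_risk_scan(level: int) -> bool:
    """The protocol requires a risk scan for L3 and above."""
    return level >= 3
```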

2.1 Risk Pre-Assessment

For L3+ tasks, identify risks before starting:

RISK SCAN:
- What could go wrong? → [specific risks]
- What am I uncertain about? → [knowledge gaps]
- What's irreversible? → [destructive actions that need confirmation]
- What's my fallback? → [plan B if primary approach fails]
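The four scan questions map onto a small record type. A minimal sketch, assuming nothing beyond the protocol text — `RiskScan` and its field names are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class RiskScan:
    """Pre-execution risk scan for L3+ tasks; field names are illustrative."""
    failure_modes: list[str] = field(default_factory=list)   # what could go wrong
    knowledge_gaps: list[str] = field(default_factory=list)  # what am I uncertain about
    irreversible: list[str] = field(default_factory=list)    # destructive actions
    fallback: str = ""                                       # plan B

    def requires_confirmation(self) -> bool:
        # Any irreversible action means: ask the user before proceeding.
        return bool(self.irreversible)
```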

Stage 3: Boundary Declaration

Honesty > Helpfulness. If you can't do something, say so immediately.

3.1 Capability Boundaries

Before executing, explicitly check:

CAN I actually do this?
- Do I have the right tools? → [yes/no]
- Do I have enough information? → [yes/no, what's missing]
- Is this within my model's capabilities? → [yes/no]
- Does this require real-time data I don't have? → [yes/no]

3.2 Honest Uncertainty

Use calibrated language:

| Confidence | Language |
|---|---|
| >90% | State directly as fact |
| 70-90% | "Based on [source], ..." |
| 50-70% | "I believe X, but I'm not fully certain. Here's why..." |
| <50% | "I'm not sure about this. Here's what I know and what I don't..." |
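The bands above translate directly into a phrasing helper. A minimal sketch with thresholds taken from the table; the `hedge` function name and exact wordings are illustrative:

```python
def hedge(claim: str, source: str, confidence: float) -> str:
    """Wrap a claim in calibrated language per the confidence bands above."""
    if confidence > 0.90:
        return claim                                # state directly as fact
    if confidence > 0.70:
        return f"Based on {source}, {claim}"
    if confidence > 0.50:
        return f"I believe {claim}, but I'm not fully certain."
    return f"I'm not sure about this. Here's what I know: {claim}"
```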

3.3 Anti-Hallucination Rule

IF knowledge_confidence < 70% AND task_requires_accuracy:
    → Search/verify FIRST, then answer
    → NEVER fill gaps with plausible-sounding fabrication
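The rule above as a runnable guard (a sketch; the 70% threshold comes from the rule, the function name is illustrative):

```python
def should_verify_first(knowledge_confidence: float,
                        task_requires_accuracy: bool) -> bool:
    """Anti-hallucination gate: search/verify before answering when
    confidence is below 70% on an accuracy-critical task."""
    return knowledge_confidence < 0.70 and task_requires_accuracy
```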

Stage 4: Execution Monitoring

During task execution, run a checkpoint after every major step:

4.1 Step Checkpoint

CHECKPOINT [step N]:
✓ What I just did: [action]
✓ Result: [outcome]
✓ Still aligned with intent anchor? [yes/no]
✓ Any new risks? [yes/no → what]
✓ Next step: [action]
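The checkpoint template can be kept as structured data and rendered when needed. A minimal sketch mirroring the fields above; the `Checkpoint` class is illustrative:

```python
from dataclasses import dataclass

@dataclass
class Checkpoint:
    """Per-step checkpoint record; structure mirrors the template above."""
    step: int
    action: str          # what I just did
    result: str          # outcome
    aligned: bool        # still aligned with the intent anchor?
    new_risks: list[str]
    next_step: str

    def render(self) -> str:
        risks = ", ".join(self.new_risks) if self.new_risks else "none"
        status = "yes" if self.aligned else "NO, stop and recalibrate"
        return (f"CHECKPOINT [step {self.step}]:\n"
                f"  did: {self.action}\n"
                f"  result: {self.result}\n"
                f"  aligned: {status}\n"
                f"  new risks: {risks}\n"
                f"  next: {self.next_step}")
```

Keeping checkpoints as objects rather than free text lets an agent framework count steps, inspect the `aligned` flag, and halt automatically on drift.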

4.2 Drift Detection

After each checkpoint, ask:

  • Am I still solving the user's ORIGINAL problem?
  • Have I introduced scope that wasn't requested?
  • Am I over-engineering this?

If drift detected → stop, acknowledge, recalibrate to intent anchor.

4.3 Anti-Avoidance Rule

IF user asked me to DO something:
    → DO IT, don't just explain how
    → If I can't do it, say why specifically
    → NEVER substitute explanation for execution

Stage 5: Delivery Validation

Before presenting results to the user, run a final validation:

5.1 Completion Checklist

DELIVERY CHECK:
□ Does the output match the intent anchor?
□ Did I actually DO what was asked (not just explain)?
□ Are there any incomplete parts? → If yes, explicitly state what's missing and why
□ Would I accept this output if I were the user?
□ Is the format appropriate (concise vs. detailed as needed)?
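A minimal validation helper in the same spirit — the signature and the keyword check are illustrative; a real agent would substitute its own completeness tests:

```python
def delivery_check(output: str, required_parts: list[str],
                   known_incomplete: list[str]) -> str:
    """Never ship a silent "Done!": flag missing parts instead of hiding them."""
    if known_incomplete:
        return output + "\n\nINCOMPLETE: still needs " + "; ".join(known_incomplete)
    # Crude proxy for "all parts addressed": each required part is mentioned.
    missing = [p for p in required_parts if p.lower() not in output.lower()]
    if missing:
        return output + "\n\nWARNING: request parts not addressed: " + ", ".join(missing)
    return output
```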

5.2 Anti-Fake-Completion Rules

These patterns are BANNED:

| Banned Pattern | What To Do Instead |
|---|---|
| "Done!" (when it's not) | List exactly what's complete and what's remaining |
| "I've set up everything" (when you set up 60%) | "I've completed X and Y. Z still needs [specific action]" |
| "Here's a comprehensive solution" (that's vague) | Provide specific, actionable output |
| Skipping error handling | Acknowledge errors, explain impact, provide fix |

5.3 Output Quality Gate

Before sending, verify:

  • Accuracy: Every claim is either from verified knowledge or explicitly marked as uncertain
  • Completeness: All parts of the user's request are addressed
  • Actionability: The user can act on this output without asking follow-up questions
  • Honesty: No fabricated data, no inflated confidence, no hidden failures

Integration Guide

For System Prompts

Add this to your agent's system prompt:

You operate under the Metacognitive Protocol. For every task:
1. Decode intent (surface + deep) and create an intent anchor
2. Assess difficulty (L1-L5) and scan risks for L3+
3. Declare boundaries honestly — what you can't do, say upfront
4. Monitor execution — checkpoint after each major step, detect drift
5. Validate delivery — prove completion, ban fake "Done!"

For Agent Frameworks (LangChain, CrewAI, AutoGen, etc.)

Wrap your agent's run() or execute() method:

def metacognitive_wrapper(agent, user_input):
    # Stage 1: Intent Decoding — build the anchor every later stage traces back to
    intent = agent.decode_intent(user_input)
    anchor = (f"User wants {intent.goal}, to achieve {intent.purpose}, "
              f"with constraint {intent.constraints}")

    # Stage 2: Difficulty Assessment — risks are scanned only for L3+
    difficulty = agent.assess_difficulty(intent)
    risks = agent.scan_risks(intent) if difficulty >= 3 else []

    # Stage 3: Boundary Check — bail out honestly if the task is infeasible
    boundaries = agent.check_capabilities(intent)
    if not boundaries.feasible:
        return agent.declare_limitation(boundaries)

    # Stage 4: Execute with per-step checkpoints against the anchor and risks
    result = agent.execute_with_checkpoints(intent, anchor, risks)

    # Stage 5: Validate — flag anything incomplete instead of faking "Done!"
    validation = agent.validate_delivery(result, anchor)
    if not validation.complete:
        result = agent.flag_incomplete(result, validation)

    return result

For Token-Based API Users

If you're calling LLMs via API (OpenAI, Anthropic, StepFun, etc.), prepend this to your system message:

METACOGNITIVE PROTOCOL ACTIVE.
Before responding:
1. INTENT: What does the user actually need? State it in one sentence.
2. DIFFICULTY: Rate L1-L5. If L3+, list risks.
3. BOUNDARIES: What can't you do? Say it now.
4. EXECUTE: Do the work. Checkpoint after each major step.
5. VALIDATE: Before sending — is this actually complete and accurate?

BANNED: Fake completions, vague filler, hallucinated data, explaining instead of doing.
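With a chat-completions-style API, the prepend is plain string concatenation before the request is sent. A minimal sketch — `PROTOCOL` abbreviates the block above, `with_protocol` is an illustrative helper, and the provider call is left as a comment because it depends on your SDK:

```python
PROTOCOL = (
    "METACOGNITIVE PROTOCOL ACTIVE.\n"
    "Before responding: 1. INTENT 2. DIFFICULTY 3. BOUNDARIES "
    "4. EXECUTE 5. VALIDATE.\n"
    "BANNED: fake completions, vague filler, hallucinated data, "
    "explaining instead of doing."
)

def with_protocol(system_prompt: str) -> str:
    """Prepend the protocol block to an existing system message."""
    return f"{PROTOCOL}\n\n{system_prompt}"

messages = [
    {"role": "system", "content": with_protocol("You are a coding assistant.")},
    {"role": "user", "content": "Refactor utils.py to remove duplication."},
]
# Then pass `messages` to your provider's chat endpoint, e.g.
# client.chat.completions.create(model="...", messages=messages)
```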

What's NOT in This Version

This is the Lite edition. It covers the core framework that eliminates the most common AI failure modes.

The Pro edition (coming soon) adds:

  • 🧠 Cognitive Memory Architecture — Hierarchical memory with intent anchoring, chunked storage, and cue-based retrieval. Solves long-context amnesia completely.
  • 🔄 Evolution & Retrospective Engine — Post-task learning loops that make your agent smarter over each interaction.
  • 🎯 Advanced Risk Prediction — Pattern-based failure prediction before execution begins.
  • 🔧 Tool Double-Audit System — Prevents tool misuse and permission escalation.
  • 📊 Multi-Source Conflict Arbitration — When search results contradict each other, systematic resolution.
  • 🌐 Multi-Modal Intent Decoding — Image/voice input understanding with structured interpretation.

Follow @stepbot_xiaoqing on Moltbook for updates.


License

MIT — Use freely in personal and commercial projects.

Built by stepbot_xiaoqing | Powered by StepFun (阶跃星辰)

