claude-review

Self-review quality gate using the Claude CLI. When the user says 'review your work', 'use review-work', or 'check your output', run review-work with a task summary, --context pointing to your output file or folder, and --skill pointing to the skill used (if any). You determine all arguments yourself; the user does not need to specify them. Requires the `claude` CLI to be installed.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install the "claude-review" skill with: npx skills add PaperBoardOfficial/claude-review

claude-review — Self-Review Quality Gate

Uses Claude CLI (claude --print) as an independent reviewer to catch errors, missed requirements, and quality issues in your work before delivering to the user.

How It Works

  1. You complete your task and save output to file(s)
  2. review-work sends your work to a separate Claude instance for independent review
  3. If a skill was used, the reviewer checks against the skill's specific requirements
  4. If LESSONS.md exists, the reviewer checks for repeat mistakes
  5. Issues are returned with severity ratings (critical / major / minor) and a PASS/FAIL verdict
  6. You fix issues and re-review until clean

The reviewer is a separate Claude instance — it has no context of your conversation, so it reviews purely on merit.

Auto-learning: When a review fails, critical and major issues are automatically logged to LESSONS.md. This file is auto-included in future reviews so the reviewer checks for repeat mistakes.

Prerequisites

  • claude CLI must be installed and available in PATH (npm install -g @anthropic-ai/claude-code)
  • Valid API key configured for Claude CLI

Command

review-work "<task_summary>" --context <file_or_folder> [--skill <file_or_folder>]
Arguments:

  • task_summary (required): What the work was supposed to accomplish
  • --context <path> (required): File or folder containing the work to review. Can also include reference material, test output, or anything else relevant.
  • --skill <path> (optional): SKILL.md or skill folder used for this task. The reviewer uses its requirements as the definition of done.

Auto-included (no flag needed):

  • LESSONS.md — if it exists, always included so the reviewer checks for repeat mistakes

All paths accept both files and folders. Claude reads all file types natively (text, images, PDFs, code).

Workflow

When instructed to review your work:

  1. Identify every file you created or modified
  2. Run review-work with the task summary, --context pointing to your output, and --skill if a skill was used
  3. Read the review output — look for VERDICT: PASS or FAIL
  4. Fix any critical or major issues
  5. Re-run review-work after fixing (up to 3 cycles)
  6. Report the review summary in your final output

Examples

Review a single file:

review-work "Write a Python email validator" --context /tmp/email.py

Review with skill context (reviewer verifies against skill requirements):

review-work "Write an SEO blog about class action lawsuits" --context /tmp/blog.md --skill ~/.openclaw/workspace/skills/seo-content-writer/SKILL.md

Review an entire project folder:

review-work "Build a todo app with React" --context /tmp/todo-app/ --skill ~/skills/fullstack/SKILL.md

Review with extra context (reference articles, test output, etc.):

# Put your output + reference material in one folder
review-work "Write a blog matching MoneyPilot tone" --context /tmp/blog-project/

Rules

  1. Review every file you created or modified — not just the main one
  2. If a skill was used for the task, always pass --skill
  3. If the review reports critical or major issues → fix them → re-review (up to 3 cycles)
  4. Only finish after the verdict is PASS (zero critical/major issues)
  5. Include the review summary in your final output
  6. After 3 failed cycles, finish but attach the full review report

What NOT to Do

  • Do NOT ask the user for arguments — you already know what you created and which skill you used
  • Do NOT say "review passed" without actually running the command
  • Do NOT fabricate review results — the command produces real output
  • Do NOT forget --skill when a skill was involved in the task

LESSONS.md

Failed reviews are auto-logged to LESSONS.md (default: ~/.openclaw/workspace/LESSONS.md). Override the path with the LESSONS_FILE environment variable.

This file is also auto-read on every review, so the reviewer checks: "are any past mistakes being repeated?"

