review-core

Reusable scaffolding for review workflows with context establishment, evidence capture, and structured output

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "review-core" with this command: npx skills add athola/nm-imbue-review-core

Night Market Skill — ported from claude-night-market/imbue. For the full experience with agents, hooks, and commands, install the Claude Code plugin.

Core Review Workflow

Table of Contents

  1. When to Use
  2. Activation Patterns
  3. Required TodoWrite Items
  4. Step 1 – Establish Context
  5. Step 2 – Inventory Scope
  6. Step 3 – Capture Evidence
  7. Step 4 – Structure Deliverables
  8. Step 5 – Contingency Plan
  9. Troubleshooting

When To Use

  • Use this skill at the beginning of any detailed review workflow (e.g., for architecture, math, or an API).
  • It provides a consistent structure for capturing context, logging evidence, and formatting the final report, which makes the findings of different reviews comparable.

When NOT To Use

  • Diff-focused analysis: use the diff-analysis skill instead.

Activation Patterns

Trigger Keywords: review, audit, analysis, assessment, evaluation, inspection

Contextual Cues:

  • "review this code/design/architecture"
  • "conduct an audit of"
  • "analyze this for issues"
  • "evaluate the quality of"
  • "perform an assessment"

Auto-Load When: Any review-specific workflow is detected or when analysis methodologies are requested.

Required TodoWrite Items

  1. review-core:context-established
  2. review-core:scope-inventoried
  3. review-core:evidence-captured
  4. review-core:deliverables-structured
  5. review-core:contingencies-documented

Step 1 – Establish Context (review-core:context-established)

  • Confirm pwd, repo, branch, and upstream base (e.g., git status -sb, git rev-parse --abbrev-ref HEAD).
  • Note comparison target (merge base, release tag) so later diffs reference a concrete range.
  • Summarize the feature/bug/initiative under review plus stakeholders and deadlines.
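The context-establishment commands above can be sketched as a self-contained shell snippet. It sets up a throwaway repo so it runs anywhere; in a real review, run the same probes inside the project repo and use your actual upstream base rather than the fresh "main" created here.

```shell
# Minimal sketch of Step 1 in a throwaway repo (assumption: git >= 2.28
# for `init -b`); in a real review, run inside the project repo.
tmp=$(mktemp -d) && cd "$tmp"
git init -q -b main
git -c user.name=reviewer -c user.email=reviewer@example.com \
    commit -q --allow-empty -m "baseline"
pwd                               # confirm working directory
git status -sb                    # repo state plus tracking branch
git rev-parse --abbrev-ref HEAD   # current branch name
git rev-parse --short HEAD        # concrete commit for later diff ranges
```

Recording the short commit hash now gives later diffs a fixed anchor even if the branch moves.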

Step 2 – Inventory Scope (review-core:scope-inventoried)

  • List relevant artifacts for this review: source files, configs, docs, specs, generated assets (OpenAPI, Makefiles, ADRs, notebooks, etc.).
  • Record how you enumerated them (commands like rg --files -g '*.mk', ls docs, cargo metadata).
  • Capture assumptions or constraints inherited from the plan/issue so the domain-specific analysis can cite them.
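The enumeration step can be sketched with portable tooling. This uses find where the bullets mention rg (same job, slower); the sample files and glob patterns are illustrative assumptions, not a fixed list.

```shell
# Portable sketch of Step 2; the sample artifacts and globs are
# illustrative only -- substitute the real patterns for your review.
tmp=$(mktemp -d) && cd "$tmp"
mkdir docs && touch docs/adr-001.md build.mk
find . -name '*.mk' | sort > scope-build.txt   # build artifacts
ls docs | sort > scope-docs.txt                # documentation
wc -l scope-build.txt scope-docs.txt           # quick size check per list
```

Writing each inventory to a file both records how the scope was enumerated and gives the evidence appendix something concrete to cite.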

Step 3 – Capture Evidence (review-core:evidence-captured)

  • Log every command/output that informs the review (e.g., git diff --stat, make -pn, cargo doc, web.run citations). Keep snippets or line numbers for later reference.
  • Track open questions or variances found during preflight; if they block progress, record owners/timelines now.
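One way to satisfy the logging requirement is to pipe each probe through tee so raw output is preserved verbatim. The evidence paths and the sample output below are assumptions for illustration.

```shell
# Sketch of Step 3: tee captures command output into an evidence
# directory while still showing it on screen. Paths are assumptions.
tmp=$(mktemp -d) && cd "$tmp"
mkdir -p evidence
echo "3 files changed, 12 insertions(+)" \
  | tee evidence/01-diff-stat.log                          # stand-in for git diff --stat
date -u +%Y-%m-%dT%H:%M:%SZ >> evidence/01-diff-stat.log   # timestamp the capture
```

Numbering the log files (01-, 02-, ...) keeps the evidence appendix in the order the commands were run.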

Step 4 – Structure Deliverables (review-core:deliverables-structured)

  • Prepare the reporting skeleton shared by all reviews:
    • Summary (baseline, scope, recommendation)
    • Ordered findings (severity, file:line, principle violated, remediation)
    • Follow-up tasks (owner + due date)
    • Evidence appendix (commands, URLs, notebooks)
  • Validate that the domain-specific checklist will populate each section before concluding.
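The shared skeleton can be stamped out as a markdown stub before the domain-specific analysis begins. The section headings mirror the list above; the file name and placeholder entries are assumptions.

```shell
# Sketch of Step 4: generate the reporting skeleton up front so every
# review fills the same sections. File name is an assumption.
tmp=$(mktemp -d) && cd "$tmp"
cat > review-report.md <<'EOF'
# Review Report
## Summary
Baseline: <merge-base>; scope: <artifacts>; recommendation: <verdict>
## Findings
1. [HIGH] src/lib.rs:42 - principle violated - remediation
## Follow-up Tasks
- [ ] owner, due date, task
## Evidence Appendix
- commands, URLs, notebooks
EOF
grep -c '^## ' review-report.md   # one count per required section
```

An empty section at the end of a review is then a visible signal that the checklist missed something, rather than a silent omission.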

Step 5 – Contingency Plan (review-core:contingencies-documented)

  • If a required tool or skill is unavailable (e.g., web.run), document the alternative steps that will be taken and any limitations this introduces. This helps reviewers understand any gaps in coverage.
  • Note any outstanding approvals or data needed to complete the review.
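A contingency note can live alongside the report. The unavailable tool (web.run) and the fallback described below are illustrative assumptions, not gaps observed in any real review.

```shell
# Sketch of Step 5: record fallbacks and pending items in their own
# file so coverage gaps are explicit. Contents are illustrative.
tmp=$(mktemp -d) && cd "$tmp"
cat > contingencies.md <<'EOF'
## Contingencies
- web.run unavailable: cited locally cached docs; external claims unverified.
- Pending: data-owner approval to inspect production configs.
EOF
wc -l contingencies.md
```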

Exit Criteria

  • All TodoWrite items complete with concrete notes (commands run, files listed, evidence paths).
  • Domain-specific review can now assume consistent context/evidence/deliverable scaffolding and focus on specialized analysis.

Troubleshooting

Common Issues

Command not found: ensure all dependencies are installed and on PATH.

Permission errors: check file permissions and rerun with appropriate privileges.

Unexpected behavior: enable verbose logging with the --verbose flag.
