
Install skill "judge" with this command: npx skills add aussiegingersnap/cursor-skills/aussiegingersnap-cursor-skills-judge

Judge

Quality review and evaluation skill that verifies completed work against defined criteria. Part of the two-tier multi-agent architecture where Judge evaluates worker output.

Contract

Inputs:

  • Completed work output (files, changes, artifacts)
  • Original acceptance criteria or success criteria
  • Context about what was attempted

Outputs:

  • Pass/fail determination
  • List of issues found (if any)
  • Recommendations for fixes (if failing)

Success Criteria:

  • All acceptance criteria evaluated
  • Clear pass/fail determination provided
  • Actionable feedback given for any failures
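The contract above can be sketched as a small data model. This is only one possible shape; the names and types here are illustrative assumptions, not part of the skill's interface:

```python
from dataclasses import dataclass, field


@dataclass
class CriterionResult:
    criterion: str        # the acceptance criterion being checked
    status: str           # "PASS", "FAIL", or "PARTIAL"
    evidence: str = ""    # file paths, line numbers, test results


@dataclass
class Review:
    results: list[CriterionResult] = field(default_factory=list)
    issues: list[str] = field(default_factory=list)           # problems found
    recommendations: list[str] = field(default_factory=list)  # fixes, if failing

    def passed(self) -> bool:
        # Pass only when every evaluated criterion is fully met.
        return all(r.status == "PASS" for r in self.results)
```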

When to Use

Invoke the judge skill:

  1. After atomic skill completion - Before marking work as done
  2. Before committing - Final quality gate
  3. After build/test phases - Verify implementation meets spec
  4. When reviewing generated code - Catch issues before integration

When NOT to Use

Skip the judge skill for:

  • Trivial changes - Single-line fixes, typo corrections
  • Mid-workflow - Don't interrupt atomic skills; judge at phase boundaries
  • Exploratory work - When user is iterating quickly and explicitly skipping review
  • User-requested skip - When user says "just do it" or "skip review"

Review Process

Step 1: Gather Context

Collect the materials needed for review:

  1. The output - What was produced (files, code, documents)
  2. The criteria - What was supposed to be achieved (acceptance criteria, spec)
  3. The scope - What was in/out of scope for this work

Step 2: Evaluate Against Criteria

For each acceptance criterion:

  1. Check if the criterion is met
  2. Note any partial completion
  3. Document evidence (file paths, line numbers, test results)

Use this evaluation format:

```markdown
## Review: [Work Description]

### Criteria Evaluation

| Criterion | Status | Evidence |
|-----------|--------|----------|
| [Criterion 1] | PASS/FAIL/PARTIAL | [Evidence] |
| [Criterion 2] | PASS/FAIL/PARTIAL | [Evidence] |

### Issues Found

1. [Issue description]
   - **Severity**: Critical/Major/Minor
   - **Location**: [File/line]
   - **Fix**: [Recommended action]

### Verdict

**PASS** / **FAIL** / **PASS WITH NOTES**

[Summary of decision]
```
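The Criteria Evaluation table can be generated mechanically from per-criterion results. A minimal sketch (the function name and tuple shape are assumptions, not part of the skill):

```python
def render_criteria_table(rows):
    """Render (criterion, status, evidence) triples as the markdown
    table used in the evaluation format above."""
    lines = [
        "| Criterion | Status | Evidence |",
        "|-----------|--------|----------|",
    ]
    for criterion, status, evidence in rows:
        lines.append(f"| {criterion} | {status} | {evidence} |")
    return "\n".join(lines)


print(render_criteria_table([
    ("Tests pass", "PASS", "pytest: 12 passed"),
    ("Docs updated", "PARTIAL", "README updated; changelog missing"),
]))
```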

Step 3: Apply Review Dimensions

Evaluate across these dimensions based on work type:

For Code Changes

| Dimension | Check |
|-----------|-------|
| Correctness | Does it do what was specified? |
| Completeness | Are all criteria addressed? |
| Quality | No obvious bugs, edge cases handled? |
| Style | Follows project conventions? |
| Scope | No scope creep beyond criteria? |

For Document Generation

| Dimension | Check |
|-----------|-------|
| Accuracy | Information is correct? |
| Completeness | All required sections present? |
| Format | Follows expected structure? |
| Clarity | Understandable to target audience? |

For Infrastructure Changes

| Dimension | Check |
|-----------|-------|
| Functionality | Works as expected? |
| Security | No exposed secrets, proper permissions? |
| Idempotency | Can be run again safely? |
| Documentation | Changes documented? |

Severity Levels

| Level | Definition | Action |
|-------|------------|--------|
| Critical | Blocks functionality, security issue, data loss risk | Must fix before proceeding |
| Major | Significant deviation from spec, poor UX | Should fix before commit |
| Minor | Style issues, minor improvements | Can note for future |

Verdicts

PASS

All criteria met, no critical/major issues. Work can proceed.

FAIL

Critical issues found OR acceptance criteria not met. Work must be revised.

Provide:

  • Specific issues with locations
  • Recommended fixes
  • Which criteria failed

PASS WITH NOTES

All criteria met, but minor issues noted. Work can proceed with awareness of noted items.
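The verdict rules above can be read as a small decision function. In this sketch, Major issues block a PASS (per the PASS definition) and are treated as failing; only Minor issues downgrade to PASS WITH NOTES:

```python
def verdict(criteria_met, issue_severities):
    """Map evaluation results to a verdict.

    criteria_met     -- one bool per acceptance criterion
    issue_severities -- "Critical" / "Major" / "Minor" per issue found
    """
    blocking = {"Critical", "Major"}
    if not all(criteria_met) or blocking & set(issue_severities):
        return "FAIL"
    if issue_severities:  # only Minor issues remain
        return "PASS WITH NOTES"
    return "PASS"
```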

Integration with Orchestrators

When used in orchestrated workflows:

  1. Orchestrator invokes atomic skill - Work is produced
  2. Orchestrator invokes judge - Work is evaluated
  3. If PASS - Proceed to next phase
  4. If FAIL - Return to previous skill with feedback

This creates the planner/worker/judge pattern that scales.
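The orchestration loop might look like the sketch below, where `worker` and `judge` stand in for whatever skill invocations the orchestrator actually makes (the retry limit is an assumption):

```python
def orchestrate(worker, judge, criteria, max_attempts=3):
    """Planner/worker/judge loop: produce work, evaluate it against the
    criteria, and feed failures back to the worker until it passes."""
    feedback = None
    for _ in range(max_attempts):
        output = worker(feedback)           # 1. atomic skill produces work
        review = judge(output, criteria)    # 2. judge evaluates it
        if review["verdict"] != "FAIL":     # 3. PASS or PASS WITH NOTES
            return output, review
        feedback = review["issues"]         # 4. retry with feedback
    raise RuntimeError("work did not pass review after retries")
```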

Quick Review Checklist

For rapid reviews, use this checklist:

```markdown
## Quick Review

- [ ] All acceptance criteria addressed
- [ ] No obvious bugs or errors
- [ ] Follows project conventions
- [ ] No scope creep
- [ ] Ready to commit/proceed

**Verdict**: PASS / FAIL
```
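The quick checklist is a simple conjunction: any unticked item fails the gate. A sketch, with the checklist wording copied from above:

```python
QUICK_CHECKS = [
    "All acceptance criteria addressed",
    "No obvious bugs or errors",
    "Follows project conventions",
    "No scope creep",
    "Ready to commit/proceed",
]


def quick_review(checked):
    """Return the verdict and any unticked items; `checked` holds one
    bool per item in QUICK_CHECKS."""
    missed = [c for c, ok in zip(QUICK_CHECKS, checked) if not ok]
    return ("PASS" if not missed else "FAIL", missed)
```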

References

See references/review-criteria.md for detailed review criteria by skill type.
