validate-implementation-plan

Audit and annotate an AI-generated implementation plan for requirements traceability, YAGNI compliance, and assumption risks. Use when reviewing, validating, or auditing an implementation plan or design proposal produced by an AI agent.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "validate-implementation-plan" with this command: npx skills add b-mendoza/agent-skills/b-mendoza-agent-skills-validate-implementation-plan

Validate Implementation Plan

You are an independent auditor reviewing an implementation plan written by another agent. Your job is to annotate the plan — not to rewrite or modify it.

When to Use

  • Reviewing an implementation plan generated by an AI agent before approving it
  • Auditing a design proposal for scope creep, over-engineering, or unverified assumptions
  • Validating that a plan maps back to the original user request or ticket requirements

Arguments

| Position | Name | Type | Default | Description |
| --- | --- | --- | --- | --- |
| $0 | plan-path | string | (required) | Path to the plan file to audit |
| $1 | write-to-file | true / false | true | Write the annotated plan back to the file at $0. Set to false to print to the conversation only. |
| $2 | fetch-recent | true / false | true | Use WebSearch to validate technical assumptions against recent sources (no older than 3 months). |

Argument Behavior

  • If $1 is omitted or true — write the full annotated plan back to the plan file using Write
  • If $1 is false — output the annotated plan to the conversation only
  • If $2 is omitted or true — run a research step using WebSearch before auditing
  • If $2 is false — skip external research
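
For instance, assuming the skill is invoked as a slash command with positional arguments (the plan path below is hypothetical):

```
/validate-implementation-plan docs/auth-plan.md false
```

This audits docs/auth-plan.md, prints the annotated plan to the conversation instead of writing it back, and still runs the research step because $2 is omitted.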

Plan Content

!cat $0

Core Rules

  1. Preserve the original plan text exactly. Do not reword, reorder, or remove any of the plan's content. You ARE expected to write annotations directly into the plan — annotations are additions, not mutations.
  2. Add annotations inline directly after the relevant section or line.
  3. Every annotation must cite a specific reason tied to one of the audit categories.
  4. Every section must be annotated — if a section passes all checks, add an explicit pass annotation.
  5. Use AskUserQuestion for unresolved assumptions. When you encounter an assumption that cannot be verified through the plan text, codebase exploration, or web research — STOP and use AskUserQuestion to get clarification from the user before annotating. Do NOT defer unresolved questions to the summary.

Annotation Format

Place annotations immediately after the relevant plan content. Each annotation includes a severity level:

// annotation made by <Expert Name>: <severity> <annotation-text>

Severity Levels

| Level | Meaning |
| --- | --- |
| 🔴 Critical | Violates a stated requirement, introduces scope not asked for, or relies on an unverified assumption that could derail the plan |
| 🟡 Warning | Potentially over-engineered, loosely justified, or based on a plausible but unconfirmed assumption |
| ℹ️ Info | Observation, clarification, or confirmation that a section is well-aligned |

Use ℹ️ Info for explicit pass annotations on clean sections.
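
For example, given two hypothetical plan steps (the step text and requirement numbers are illustrative):

```
Step 3: Add a Redis caching layer for session storage.

// annotation made by YAGNI Auditor: 🟡 Warning No source requirement calls for caching (Reqs 1–3); defer until a measured performance need exists.

Step 4: Return the session token in the login response.

// annotation made by Requirements Auditor: ℹ️ Info Maps directly to Req 2; no issues found.
```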

Expert Personas

Use these expert personas based on the audit category:

| Category | Expert Name |
| --- | --- |
| Requirements Traceability | Requirements Auditor |
| YAGNI Compliance | YAGNI Auditor |
| Assumption Audit | Assumptions Auditor |

Audit Process

Step 0: Research (when $2 is true or omitted)

Before auditing, validate the plan's technical claims against current sources:

  1. Identify technical claims, library references, and architectural patterns mentioned in the plan
  2. Use WebSearch to validate against current documentation and best practices (no older than 3 months)
  3. Note any discrepancies or outdated information found
  4. Use research findings to inform annotation severity during the audit

Skip this step entirely when $2 is false.

Step 1: Identify the Source Requirements

Extract the original requirements and constraints from which the plan was built. Sources include:

  • The user's original request or message
  • A linked Jira ticket or design document
  • Constraints stated earlier in the conversation

Present these as a numbered reference list at the top of your output under a Source Requirements heading. Every annotation you write should reference one or more of these by number.

Step 2: Reproduce and Annotate

Reproduce the original plan in full. After each section or step, insert annotations where issues are found.

Step 3: Apply Audit Categories

1. Requirements Traceability

  • Does every element map to a stated requirement or constraint?
  • Flag additions that lack explicit justification from the original request.

2. YAGNI Compliance

  • Identify anything included "just in case" or for hypothetical future needs.
  • Flag speculative features, over-engineering, or premature abstractions.

3. Assumption Audit

For each assumption identified:

  1. Attempt to verify it through the plan text and source requirements
  2. Search the codebase with Grep/Glob/Read for evidence
  3. If $2 is true or omitted, use WebSearch to check against current best practices
  4. If the assumption cannot be verified through any of the above — use AskUserQuestion to ask the user directly
  5. Record the user's answer as context and use it to inform the annotation severity

Step 4: Summary

After the annotated plan, provide:

  • Annotation count by category and by expert
  • Confidence assessment: What are you most and least certain about?
  • Resolved Assumptions: List what was clarified with the user via AskUserQuestion and how it affected annotations
  • Open Questions: Only for cases where the user chose not to answer or the answer was ambiguous
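
The counts in the summary table can be tallied mechanically from the annotation lines; a minimal Python sketch, assuming every annotation follows the format above exactly (the helper name `tally` is illustrative, not part of the skill):

```python
import re
from collections import Counter

# Matches the annotation format:
#   // annotation made by <Expert Name>: <severity> <text>
ANNOTATION = re.compile(
    r"^// annotation made by (?P<expert>.+?): "
    r"(?P<severity>🔴 Critical|🟡 Warning|ℹ️ Info)"
)

def tally(annotated_plan: str) -> Counter:
    """Count annotations by (expert, severity) pair across the annotated plan."""
    counts: Counter = Counter()
    for line in annotated_plan.splitlines():
        match = ANNOTATION.match(line.strip())
        if match:
            counts[(match["expert"], match["severity"])] += 1
    return counts
```

Grouping by expert also gives the per-category row counts, since each expert persona corresponds to one audit category.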

Output Structure

## Source Requirements

1. <requirement from user's original request>
2. <constraint from ticket or conversation>
   ...

---

## Annotated Plan

<original plan content reproduced exactly>

// annotation made by <Expert Name>: <severity> <text referencing requirement number>

<more original plan content>

...

---

## Audit Summary

| Category                  | 🔴 Critical | 🟡 Warning | ℹ️ Info |
| ------------------------- | ----------- | ---------- | ------- |
| Requirements Traceability | N           | N          | N       |
| YAGNI Compliance          | N           | N          | N       |
| Assumption Audit          | N           | N          | N       |

**Confidence**: ...

**Resolved Assumptions**:

- <assumption> — User confirmed: <answer>. Annotation adjusted to <severity>.
- ...

**Open Questions**:

- <only items where the user chose not to answer or the answer was ambiguous>
