# PRD Engine — Product Requirements That Ship

Complete product requirements methodology: from idea to spec to shipped feature. Not just a JSON template — a full system for writing PRDs that developers actually follow and stakeholders actually approve.

## When to Use This Skill

  • Turning a vague idea into a buildable specification
  • Writing PRDs for new features, products, or major refactors
  • Reviewing/improving existing PRDs before sprint planning
  • Breaking epics into right-sized user stories
  • Creating technical design documents alongside product specs
  • Preparing specs for AI coding agents (Claude Code, Cursor, Copilot)

## Phase 1: Discovery Brief

Before writing a single requirement, answer these questions. Skip this and you'll rewrite the PRD 3 times.

### Problem Validation Checklist

```yaml
discovery_brief:
  problem:
    statement: "" # One sentence. If you need two, you don't understand it yet.
    who_has_it: "" # Specific persona, not "users"
    frequency: "" # Daily? Weekly? Once? (daily problems > occasional ones)
    current_workaround: "" # What do they do today? (no workaround = maybe not a real problem)
    evidence:
      - type: "" # support_ticket | user_interview | analytics | churned_user | sales_objection
        detail: ""
        date: ""

  impact:
    users_affected: "" # Number or percentage
    revenue_impact: "" # $ at risk or $ opportunity
    strategic_alignment: "" # Which company goal does this serve?

  constraints:
    deadline: "" # Hard date or flexible?
    budget: "" # Engineering weeks available
    dependencies: "" # What must exist first?
    regulatory: "" # Any compliance requirements?

  success_metrics:
    primary: "" # ONE metric that defines success
    secondary: [] # 2-3 supporting metrics
    measurement_method: "" # How will you actually measure this?
    target: "" # Specific number, not "improve"
    timeframe: "" # When do you expect to see results?
```

### Problem Statement Formula

```
[Persona] needs [capability] because [reason], but currently [blocker], which causes [measurable impact].
```

Examples:

  • ❌ "Users need better onboarding" (vague, unmeasurable)
  • ✅ "New free-trial users (500/month) need to reach their first 'aha moment' within 10 minutes because 73% who don't will churn within 48 hours, but currently the average time-to-value is 34 minutes due to a 12-step setup wizard, which costs us ~$18K/month in lost conversions."

### Kill Criteria

Before proceeding, check these signals. If any is true, STOP and push back:

| Signal | Action |
|--------|--------|
| No evidence of the problem (just someone's opinion) | Demand evidence. Opinions aren't requirements. |
| Solution already decided ("just build X") | Rewind to the problem. Solutions without problems = features nobody uses. |
| Success metric is unmeasurable | Define how you'll measure it or don't build it. |
| Affects <1% of users with no revenue impact | Deprioritize. Small problems with small impact = small returns. |
| Scope keeps expanding during discovery | Lock the scope. If everything is in scope, nothing is. |

## Phase 2: PRD Document

### PRD Template

```markdown
# PRD: [Feature Name]

**Author:** [Name]
**Status:** Draft | In Review | Approved | In Progress | Shipped
**Created:** YYYY-MM-DD
**Last Updated:** YYYY-MM-DD
**Approvers:** [Names + roles]

## 1. Problem & Opportunity

[Problem statement from discovery brief — one paragraph max]

### Evidence
- [Evidence point 1 — with data]
- [Evidence point 2 — with data]

### Impact
- Users affected: [number]
- Revenue impact: [$ amount or % change]
- Strategic goal: [which one]

## 2. Solution Overview

[2-3 paragraphs max. What are we building and why this approach?]

### What This Is
- [Bullet list of what's in scope]

### What This Is NOT
- [Explicit exclusions — this prevents scope creep]

### Key Decisions Made
| Decision | Options Considered | Chosen | Rationale |
|----------|-------------------|--------|-----------|
| [Decision 1] | A, B, C | B | [Why] |

## 3. User Stories

[See Phase 3 below for story writing methodology]

## 4. Design & UX

### User Flow
1. User [action] →
2. System [response] →
3. User sees [outcome]

### Wireframes/Mockups
[Link to Figma/screenshots or describe key screens]

### Edge Cases
| Scenario | Expected Behavior |
|----------|------------------|
| [Edge case 1] | [What happens] |
| [Edge case 2] | [What happens] |
| Empty state | [What user sees with no data] |
| Error state | [What user sees on failure] |
| Slow connection | [Loading behavior] |

## 5. Technical Considerations

### Architecture Notes
- [Key technical decisions]
- [New services/APIs needed]
- [Database changes]

### Dependencies
- [External service X]
- [Team Y's API]
- [Library Z]

### Performance Requirements
- Page load: <[X]ms
- API response: <[X]ms
- Concurrent users: [X]

### Security & Privacy
- [Data handling requirements]
- [Auth/permissions needed]
- [PII considerations]

## 6. Release Plan

### Rollout Strategy
- [ ] Feature flag: [flag name]
- [ ] Beta group: [who]
- [ ] % rollout: [10% → 50% → 100%]
- [ ] Rollback plan: [how]

### Launch Checklist
- [ ] QA sign-off
- [ ] Analytics events implemented
- [ ] Monitoring/alerts configured
- [ ] Documentation updated
- [ ] Support team briefed
- [ ] Stakeholders notified

## 7. Success Criteria

| Metric | Current | Target | Timeframe |
|--------|---------|--------|-----------|
| [Primary metric] | [X] | [Y] | [Z weeks] |
| [Secondary metric] | [X] | [Y] | [Z weeks] |

### Post-Launch Review
- **1-week check:** [What to look at]
- **1-month review:** [What to measure]
- **Kill/iterate decision:** [Criteria for each]
```

### PRD Quality Rubric (score before sharing)

| Dimension | 0-2 (Weak) | 3-4 (Adequate) | 5 (Strong) | Weight |
|-----------|------------|----------------|------------|--------|
| Problem clarity | Vague, no data | Clear but thin evidence | Sharp statement + multiple evidence points | x4 |
| Scope discipline | Everything in scope | Some boundaries | Explicit in/out + "what this is NOT" | x3 |
| Story quality | Vague tasks | Stories with some criteria | INVEST stories + verifiable acceptance criteria | x4 |
| Edge cases | None listed | Happy path + 1-2 edges | Comprehensive: empty, error, slow, permissions, concurrent | x3 |
| Success metrics | "Improve X" | Metric + target | Metric + baseline + target + timeframe + measurement method | x3 |
| Technical feasibility | No tech section | Architecture notes | Dependencies, performance, security, migration plan | x2 |
| Release plan | None | "Ship it" | Feature flag + rollout % + rollback + launch checklist | x1 |

**Scoring:** Sum (score × weight). Max = 100.

  • 80-100: Ship-ready. Get approvals.
  • 60-79: Solid but missing pieces. Fill gaps before review.
  • 40-59: Needs work. Major sections incomplete.
  • <40: Start over or go back to discovery.
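
The rubric arithmetic can be sketched in a few lines. This is an illustrative Python sketch, not part of the skill itself; the dimension keys and weights are taken from the table above:

```python
# Weighted PRD rubric: each dimension is scored 0-5, multiplied by its weight.
# Weights sum to 20, so the maximum score is 5 * 20 = 100.
WEIGHTS = {
    "problem_clarity": 4,
    "scope_discipline": 3,
    "story_quality": 4,
    "edge_cases": 3,
    "success_metrics": 3,
    "technical_feasibility": 2,
    "release_plan": 1,
}

def score_prd(scores: dict[str, int]) -> tuple[int, str]:
    """Return (total, verdict) for a dict of dimension -> 0-5 score."""
    total = sum(scores[dim] * w for dim, w in WEIGHTS.items())
    if total >= 80:
        verdict = "Ship-ready"
    elif total >= 60:
        verdict = "Solid but missing pieces"
    elif total >= 40:
        verdict = "Needs work"
    else:
        verdict = "Start over"
    return total, verdict

total, verdict = score_prd({
    "problem_clarity": 5, "scope_discipline": 4, "story_quality": 4,
    "edge_cases": 3, "success_metrics": 5, "technical_feasibility": 3,
    "release_plan": 2,
})
print(total, verdict)  # 80 Ship-ready
```

Note that a perfect problem statement cannot rescue a missing release plan: the weights force breadth, not just depth in one dimension.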

## Phase 3: User Story Methodology

### Story Format

```yaml
story:
  id: "US-001"
  title: "" # Action-oriented: "Add priority field to tasks table"
  persona: "" # Who benefits
  narrative: "As a [persona], I want [capability] so that [benefit]"
  acceptance_criteria:
    - criterion: "" # Verifiable statement
      type: "functional" # functional | performance | security | ux
  priority: 1 # Execution order (dependencies first)
  size: "" # XS | S | M | L | XL
  status: "todo" # todo | in_progress | review | done
  notes: "" # Runtime observations
  depends_on: [] # Story IDs this depends on
  blocked_by: [] # External blockers
```

### INVEST Checklist (every story must pass)

| Letter | Criterion | Test |
|--------|-----------|------|
| I — Independent | Can be built without other incomplete stories | No circular dependencies |
| N — Negotiable | Details can flex (the "what" is fixed, the "how" is flexible) | Multiple implementation approaches exist |
| V — Valuable | Delivers user or business value on its own | Would a user/stakeholder care if only this shipped? |
| E — Estimable | Team can size it | No major unknowns (if unknowns exist, add a spike first) |
| S — Small | Completable in one sprint (or one context window for AI agents) | 1-3 days of work max |
| T — Testable | Has verifiable acceptance criteria | Can write a test for each criterion |

### Acceptance Criteria Rules

Good criteria are:

  • Binary (pass/fail, not subjective)
  • Specific (numbers, not adjectives)
  • Independent (testable in isolation)

| ❌ Bad | ✅ Good |
|--------|---------|
| "Works correctly" | "Returns 200 with JSON body containing id, name, status fields" |
| "Fast enough" | "API responds in <200ms at p95 with 100 concurrent users" |
| "User-friendly" | "Form shows inline validation errors within 100ms of field blur" |
| "Secure" | "Endpoint returns 403 for users without admin role" |
| "Handles errors" | "On network timeout, shows retry button + cached data if available" |

Always include these universal criteria:

  • Typecheck passes (`tsc --noEmit --strict`, for TypeScript projects)
  • All existing tests still pass
  • New functionality has test coverage

### Story Sizing Guide

| Size | Scope | Time | Example |
|------|-------|------|---------|
| XS | Config change, copy update, env var | <2 hours | "Update error message text" |
| S | Single component/function, no new deps | 2-4 hours | "Add date picker to form" |
| M | Feature slice: DB + API + UI | 1-2 days | "Add task priority with filter" |
| L | Multi-component feature, new patterns | 2-3 days | "Add real-time notifications" |
| XL | Too big. Split it. | | |

### Story Ordering: The Dependency Pyramid

Always order stories bottom-up:

```
Level 1: Schema & Data (migrations, models, seed data)
    ↑
Level 2: Backend Logic (services, APIs, business rules)
    ↑
Level 3: Integration (API routes, auth, middleware)
    ↑
Level 4: UI Components (forms, tables, modals)
    ↑
Level 5: UX Polish (animations, empty states, loading)
    ↑
Level 6: Analytics & Monitoring (events, dashboards)
```

Each level depends ONLY on the levels below it. Never build UI before the API exists.
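
The bottom-up ordering and the "no circular dependencies" rule can be checked mechanically: a topological sort over each story's `depends_on` list fails exactly when a cycle exists. A minimal sketch (illustrative Python; the story IDs are hypothetical):

```python
from graphlib import TopologicalSorter, CycleError  # stdlib, Python 3.9+

def execution_order(stories: dict[str, list[str]]) -> list[str]:
    """Return story IDs in a valid build order, or raise on circular deps."""
    ts = TopologicalSorter(stories)  # maps story -> stories it depends on
    try:
        return list(ts.static_order())
    except CycleError as e:
        raise ValueError(f"Circular dependency: {e.args[1]}") from e

# Schema first, then API, then UI: the dependency pyramid in miniature.
order = execution_order({
    "US-001-schema": [],
    "US-002-api": ["US-001-schema"],
    "US-003-ui": ["US-002-api"],
})
print(order)  # ['US-001-schema', 'US-002-api', 'US-003-ui']
```

Running this over a story list before sprint planning catches ordering mistakes earlier than a reviewer would.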

### Splitting Strategies

When a story is too big, split it using one of these patterns:

| Strategy | When to Use | Example |
|----------|-------------|---------|
| By layer | Full-stack feature | "Add schema" → "Add API" → "Add UI" |
| By operation | CRUD feature | "Create task" → "Read/list tasks" → "Update task" → "Delete task" |
| By persona | Multi-role feature | "Admin creates template" → "User fills template" → "Viewer sees results" |
| By happy/sad path | Complex flows | "Successful payment" → "Payment declined" → "Payment timeout" |
| By platform | Cross-platform | "iOS support" → "Android support" → "Web support" |
| Spike + implement | High uncertainty | "Spike: evaluate auth libraries (2h)" → "Implement auth with chosen library" |

## Phase 4: PRD for AI Coding Agents

When the PRD will be executed by AI agents (Claude Code, Cursor, Copilot Workspace, etc.), add these adaptations:

### Agent-Optimized Story Format

```yaml
agent_story:
  id: "US-001"
  title: "Add priority field to tasks table"
  context: |
    The tasks table is in src/db/schema.ts using Drizzle ORM.
    Priority values should be: high, medium, low (default: medium).
    See existing fields for naming conventions.
  acceptance_criteria:
    - "Add `priority` column to `tasks` table in src/db/schema.ts"
    - "Type: enum('high', 'medium', 'low'), default 'medium', not null"
    - "Generate migration: `npx drizzle-kit generate`"
    - "Run migration: `npx drizzle-kit push`"
    - "Verify: `tsc --noEmit --strict` passes"
    - "Verify: existing tests pass (`npm test`)"
  files_to_touch:
    - src/db/schema.ts
    - drizzle/ (generated migration)
  commands_to_run:
    - "npx drizzle-kit generate"
    - "npx drizzle-kit push"
    - "tsc --noEmit --strict"
    - "npm test"
  done_when: "All verify commands pass with exit code 0"
```

### Agent-Specific Rules

  1. Be explicit about file paths. Agents can't guess your project structure.
  2. Include verification commands. Agents need a "definition of done" they can check.
  3. One context window per story. If a story needs the agent to remember more than ~50 files, it's too big.
  4. List files to touch. Reduces agent exploration time and prevents hallucination.
  5. Order matters even more. Agents execute sequentially — wrong order = compounding errors.
  6. Include the commands. Don't say "run the migration"; say `npx drizzle-kit push`.
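
The `done_when` rule above ("all verify commands pass with exit code 0") is straightforward to automate. A minimal sketch (illustrative Python; the command list is borrowed from the example story and is an assumption about your project):

```python
import subprocess

def story_done(commands: list[str]) -> bool:
    """A story counts as done only when every verify command exits 0."""
    for cmd in commands:
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"FAILED: {cmd} (exit {result.returncode})")
            return False
    return True

# Verify commands from the agent story; swap in your project's own checks.
print(story_done(["tsc --noEmit --strict", "npm test"]))
```

Stopping at the first failing command keeps the feedback loop tight: the agent gets one concrete failure to fix, not a wall of cascading errors.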

### Project Context File

For AI agent execution, create a PROJECT_CONTEXT.md alongside the PRD:

```markdown
# Project Context

## Stack
- Framework: [Next.js 14 / Express / etc.]
- Language: [TypeScript strict mode]
- Database: [PostgreSQL via Drizzle ORM]
- Testing: [Vitest + Testing Library]
- Styling: [Tailwind CSS]

## Key Directories
- src/db/ — Database schema and migrations
- src/api/ — API routes
- src/components/ — React components
- src/lib/ — Shared utilities
- tests/ — Test files (mirror src/ structure)

## Conventions
- File naming: kebab-case
- Component naming: PascalCase
- Max file length: 300 lines
- Max function length: 50 lines
- All exports typed, no `any`

## Commands
- `npm run dev` — Start dev server
- `npm test` — Run tests
- `npm run build` — Production build
- `tsc --noEmit --strict` — Type check
- `npx drizzle-kit generate` — Generate migration
- `npx drizzle-kit push` — Apply migration

## Current State
- [What exists today relevant to the PRD]
- [Any tech debt or gotchas the agent should know]
```

## Phase 5: Review & Approval

### Review Checklist (before sharing the PRD)

Completeness:

  • Problem statement has evidence (not just opinion)
  • "What this is NOT" section exists and is specific
  • Every story has ≥3 acceptance criteria
  • Edge cases table covers: empty state, error state, permissions, concurrent access
  • Success metrics have baseline + target + timeframe
  • Technical section addresses: performance, security, dependencies

Quality:

  • No story larger than "L" (split XL stories)
  • All acceptance criteria are binary (pass/fail)
  • No circular dependencies between stories
  • Dependency pyramid ordering is correct
  • Release plan includes rollback strategy

Readability:

  • Executive summary is <3 sentences
  • Non-engineers can understand the problem section
  • Engineers can start building from the stories section alone
  • No jargon without definition
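
Several of the mechanical checks above (acceptance-criteria count, XL size, self-dependency) can be linted automatically against the Phase 3 story schema. An illustrative Python sketch; this rule set is a partial assumption, not the full checklist:

```python
def lint_story(story: dict) -> list[str]:
    """Flag checklist violations for one story (YAML loaded as a dict)."""
    problems = []
    if story.get("size") == "XL":
        problems.append("size is XL (split it)")
    if len(story.get("acceptance_criteria", [])) < 3:
        problems.append("fewer than 3 acceptance criteria")
    if story.get("id") in story.get("depends_on", []):
        problems.append("story depends on itself")
    return problems

print(lint_story({"id": "US-001", "size": "XL",
                  "acceptance_criteria": [], "depends_on": []}))
# ['size is XL (split it)', 'fewer than 3 acceptance criteria']
```

Subjective checks (binary criteria, jargon, readability) still need a human or an LLM reviewer; the point of a lint pass is to spend review time only on those.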

### Approval Flow

```
Author writes PRD
    ↓
Self-review (score with rubric — must be ≥60)
    ↓
Peer review (another PM or tech lead)
    ↓
Engineering review (feasibility + sizing)
    ↓
Stakeholder approval (PM lead or product director)
    ↓
Status → Approved
    ↓
Sprint planning (stories → backlog)
```

### Common Review Feedback (and how to fix it)

| Feedback | Fix |
|----------|-----|
| "What problem does this solve?" | Your problem statement is weak. Add evidence. |
| "This is too big" | Split into phases. Ship the smallest valuable slice first (MVP). |
| "How do we know it worked?" | Your success metrics are vague. Add numbers + timeframe. |
| "What about [edge case]?" | Your edge case table is incomplete. Add the case. |
| "When does this ship?" | Add a timeline with milestones, not just a deadline. |
| "Who approved this?" | Add an approvers field and get explicit sign-offs. |

## Phase 6: Tracking & Iteration

### PRD Status Lifecycle

```
Draft → In Review → Approved → In Progress → Shipped → Post-Launch Review
                ↓                                              ↓
            Rejected                                    Iterate / Kill
```

### Progress Tracking

Track story completion in the PRD itself or a linked tracker:

```yaml
progress:
  total_stories: 12
  done: 7
  in_progress: 2
  blocked: 1
  todo: 2
  completion: "58%"

  blocked_items:
    - story: "US-008"
      blocker: "Waiting for payments API access from finance team"
      since: "2025-01-15"
      escalation: "Pinged finance lead, follow up Friday"

  velocity:
    stories_per_week: 3.5
    estimated_completion: "2025-02-01"
```
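
The derived fields (`completion`, `estimated_completion`) follow directly from the story statuses and velocity. An illustrative Python sketch of the arithmetic, using the numbers from the block above (7 of 12 done, 3.5 stories/week):

```python
from datetime import date, timedelta

def progress_summary(statuses: list[str], stories_per_week: float,
                     today: date) -> dict:
    """Derive completion % and an estimated finish date from story statuses."""
    total = len(statuses)
    done = statuses.count("done")
    remaining = total - done
    # Remaining stories at the current velocity, converted to calendar days.
    days_left = round(7 * remaining / stories_per_week)
    return {
        "completion": f"{round(100 * done / total)}%",
        "estimated_completion": (today + timedelta(days=days_left)).isoformat(),
    }

summary = progress_summary(
    ["done"] * 7 + ["in_progress"] * 2 + ["blocked"] + ["todo"] * 2,
    stories_per_week=3.5,
    today=date(2025, 1, 22),
)
print(summary)  # {'completion': '58%', 'estimated_completion': '2025-02-01'}
```

The velocity-based estimate is deliberately naive (it ignores blocked stories and size differences); treat it as a trend indicator, not a commitment.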

### Post-Launch Review Template

```yaml
post_launch:
  shipped_date: ""
  review_date: "" # 2-4 weeks after ship

  metrics:
    primary:
      metric: ""
      baseline: ""
      target: ""
      actual: ""
      verdict: "" # hit | miss | exceeded

    secondary:
      - metric: ""
        actual: ""
        verdict: ""

  qualitative:
    user_feedback: []
    support_tickets: "" # count related to this feature
    unexpected_outcomes: []

  process_retro:
    what_went_well: []
    what_didnt: []
    estimation_accuracy: "" # actual vs estimated effort
    scope_changes: "" # what changed after approval

  decision: "" # iterate | maintain | deprecate | expand
  next_actions: []
```

## Quick Commands

| Command | What It Does |
|---------|--------------|
| "Write a PRD for [feature]" | Full PRD from discovery through stories |
| "Break this into stories" | Turns a feature description into user stories |
| "Review this PRD" | Scores against the quality rubric + gives specific feedback |
| "Make this agent-ready" | Converts PRD stories to the agent-optimized format |
| "What's missing from this PRD?" | Gap analysis against the template |
| "Split this story" | Splits a large story into smaller INVEST-compliant stories |
| "Score this PRD" | Quality rubric scoring with dimension breakdown |
| "Create project context for [project]" | Generates PROJECT_CONTEXT.md for AI agent execution |
| "Post-launch review for [feature]" | Generates review template with metrics |
| "Track progress" | Updates completion stats from story statuses |
