spec-interview

Interviews users to build comprehensive project specifications. Use when starting a new project or feature, or when the user needs help defining requirements through guided discovery.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running it.

Install the skill: `npx skills add cfircoo/claude-code-toolkit/cfircoo-claude-code-toolkit-spec-interview`

<objective>
Build complete, production-ready specifications through deep, systematic interviewing. Read the existing SPEC.md if present, then probe the user about every aspect they haven't fully thought through: architecture decisions, edge cases, error states, UX flows, security concerns, performance tradeoffs, integration points, and failure modes.

The goal is to surface hidden assumptions and force decisions BEFORE implementation begins.
</objective>

<essential_principles>

<principle name="non_obvious_questions"> Never ask questions the user has already answered or could trivially answer. Dig into: - What happens when X fails? - How does this interact with Y? - What's the migration path from current state? - Who's responsible when Z goes wrong? - What does success look like in 6 months? </principle> <principle name="progressive_depth"> Start broad, then drill into areas of uncertainty. When user gives vague answers, probe deeper. When they're confident, move on. Detect hesitation and explore it. </principle> <principle name="tradeoff_forcing"> Don't let users have everything. Force explicit tradeoffs: - "You mentioned both X and Y. These conflict because... which matters more?" - "This approach optimizes for A but sacrifices B. Is that acceptable?" </principle> <principle name="completeness_over_speed"> Continue interviewing until EVERY section of the spec template has concrete answers. Vague sections = more questions. Only stop when the spec is implementation-ready. </principle>

</essential_principles>

<quick_start>

  1. Check if SPEC.md exists and read it
  2. Identify gaps, ambiguities, and untested assumptions
  3. Begin interviewing using AskUserQuestionTool
  4. Cover ALL domains systematically (see question_domains)
  5. Write the completed spec to SPEC.md (a sketch of this loop follows)
</quick_start>
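
For orientation only, here is a minimal sketch of that loop in TypeScript. The skill itself runs inside an AI assistant, not as a script, so every function below (askQuestionRound, detectGaps, renderSpec) is a hypothetical stand-in, not a real tool API.

```
import { existsSync, readFileSync, writeFileSync } from "node:fs";

// Hypothetical stand-ins: askQuestionRound models one AskUserQuestionTool
// round (step 3), detectGaps models step 4, renderSpec models step 5.
async function askQuestionRound(qs: string[]): Promise<Record<string, string>> {
  return Object.fromEntries(qs.map((q) => [q, "<answer from user>"] as const));
}

function detectGaps(spec: string, answers: Record<string, string>): string[] {
  // Real gap detection is judgment-based; this stub just seeds one opener.
  return Object.keys(answers).length > 0
    ? []
    : ["What problem are you solving and for whom?"];
}

function renderSpec(answers: Record<string, string>): string {
  return Object.entries(answers).map(([q, a]) => `## ${q}\n${a}`).join("\n\n");
}

async function runSpecInterview(specPath = "SPEC.md"): Promise<void> {
  // 1. Read the existing spec if present.
  const existing = existsSync(specPath) ? readFileSync(specPath, "utf8") : "";

  // 2-4. Interview in rounds of at most 4 questions until no gaps remain.
  const answers: Record<string, string> = {};
  let open = detectGaps(existing, answers);
  while (open.length > 0) {
    Object.assign(answers, await askQuestionRound(open.slice(0, 4)));
    open = detectGaps(existing, answers); // re-check after every round
  }

  // 5. Write the completed spec.
  writeFileSync(specPath, renderSpec(answers));
}

runSpecInterview().catch(console.error);
```
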
<process> <step name="1_load_context"> **Read existing spec if present:** ``` Read SPEC.md (or specified file path) ```

If exists: Analyze what's defined vs. what's missing or vague. If not: Start fresh, but ask about existing context (related systems, constraints, prior art). </step>

<step name="2_initial_assessment"> Before diving deep, establish scope with 2-3 broad questions: - What problem are you solving and for whom? - What's the minimal viable version vs. the full vision? - What constraints exist (time, tech stack, team, budget)? </step> <step name="3_systematic_interview"> **Interview through ALL domains below.** Use AskUserQuestionTool with 2-4 targeted questions per round. Mix domains to keep conversation dynamic.

CRITICAL: Each question must:

  • Be specific to THIS project (not generic)
  • Surface a decision or assumption
  • Have meaningful, distinct options
  • Force the user to commit to something </step>
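
To make "meaningful, distinct options" concrete, here is one way a single round could be shaped. This is an assumed, illustrative structure, not the actual AskUserQuestionTool schema, and the example question and options are invented.

```
// Assumed, illustrative shape for one interview round -- not the tool's real schema.
interface QuestionOption {
  label: string;       // short, committable choice
  description: string; // the tradeoff this choice implies
}

interface InterviewQuestion {
  header: string;      // domain being probed
  question: string;    // project-specific and decision-forcing
  options: QuestionOption[];
}

const round: InterviewQuestion[] = [
  {
    header: "Error states",
    question: "When the payment provider is down mid-checkout, what should the user see?",
    options: [
      { label: "Queue and retry", description: "Order accepted; charge retried in the background" },
      { label: "Hard fail", description: "Checkout blocked until the provider recovers" },
    ],
  },
];
```
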
<step name="4_gap_detection"> After each answer round, identify: - New questions raised by the answer - Contradictions with earlier answers - Areas where user seemed uncertain

Probe these immediately before moving on. </step>
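
A minimal sketch of what a gap report might capture, assuming a hypothetical GapReport type; the example entries are invented, shown only to make the three categories concrete.

```
// Hypothetical record of what step 4 looks for after each answer round.
interface GapReport {
  newQuestions: string[];   // raised by the latest answers
  contradictions: string[]; // conflicts with earlier answers
  uncertainAreas: string[]; // hedged or vague answers worth probing
}

const afterRound: GapReport = {
  newQuestions: ["If orders queue during outages, what is the maximum queue age?"],
  contradictions: ["'Hard fail on provider outage' conflicts with the earlier 99.9% checkout availability target"],
  uncertainAreas: ["User was unsure who owns the payment integration contract"],
};
```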

<step name="5_write_spec"> When all domains are covered and no ambiguities remain: 1. Use the template in `templates/spec-template.md` 2. Fill every section with concrete decisions 3. Mark any remaining open questions explicitly 4. Write to SPEC.md (or user-specified path) </step> </process>

<question_domains>

<domain name="problem_and_users"> **Surface hidden assumptions about the problem:** - What's the actual pain point? (not the solution they've imagined) - Who are the real users? (roles, technical level, frequency of use) - What do users do TODAY without this? (current workarounds) - What would make users NOT use this? (adoption blockers) - How will you know if this succeeded? (measurable outcomes) </domain> <domain name="scope_and_boundaries"> **Force explicit scope decisions:** - What's explicitly OUT of scope? (as important as what's in) - What's the MVP vs. v2 vs. "nice to have someday"? - What adjacent problems are you intentionally NOT solving? - What happens if scope must be cut by 50%? What survives? </domain> <domain name="architecture_and_technical"> **Probe technical decisions and their implications:** - What's the data model? What are the core entities and relationships? - Where does state live? (client, server, database, cache) - What's the source of truth for X? (when there's duplication) - How does this scale? (10x users, 100x data, distributed team) - What's the deployment model? (self-hosted, SaaS, hybrid) - What's the upgrade/migration path from v1 to v2? - What technical debt are you knowingly taking on? </domain> <domain name="integration_and_dependencies"> **Map the system boundaries:** - What external systems does this touch? (APIs, databases, services) - What happens when dependency X is down? - Who owns the integration contracts? How do they change? - What data flows in and out? What's the format/protocol? - Are there rate limits, quotas, or cost implications? </domain> <domain name="ui_and_ux"> **Get specific about user experience:** - Walk through the primary user journey step-by-step - What's the first thing a new user sees/does? - How does the user recover from mistakes? - What feedback does the user get at each step? - What's the mobile/responsive story? - What accessibility requirements exist? - How does this look with 0 items? 1 item? 1000 items? </domain> <domain name="error_states_and_edge_cases"> **Surface failure modes:** - What happens when network fails mid-operation? - What if the user does X twice rapidly? - What if data is malformed or missing fields? - What's the worst thing that could happen? How do we prevent it? - What does partial failure look like? (3 of 5 items succeed) - How do users know something went wrong? - What's the retry/recovery mechanism? </domain> <domain name="security_and_privacy"> **Force security decisions:** - What data is sensitive? What's the classification? - Who can see/edit/delete what? (permission model) - How is authentication handled? (existing system? new?) - What audit trail is required? - What compliance requirements exist? (GDPR, SOC2, HIPAA) - What happens to data when user/account is deleted? </domain> <domain name="performance_and_reliability"> **Establish non-functional requirements:** - What response times are acceptable? (p50, p95, p99) - What's the availability target? (99%, 99.9%, 99.99%) - What's the expected load? (requests/sec, concurrent users) - What happens under load? (graceful degradation vs. hard failure) - What's the data retention policy? - What's the backup/recovery strategy? </domain> <domain name="operations_and_maintenance"> **Think about day 2:** - How will this be monitored? What alerts exist? - How do you debug when something goes wrong? - What does deployment look like? (CI/CD, manual, hybrid) - Who's on-call? What's the escalation path? - How is configuration managed? 
(env vars, config files, admin UI) - What's the rollback plan? </domain> <domain name="testing_and_quality"> **Define quality gates:** - What must be tested? (unit, integration, e2e) - What's the test data strategy? - How do you test integrations with external systems? - What's the acceptance criteria for "done"? - Who approves releases? </domain> <domain name="timeline_and_phases"> **Reality-check the plan:** - What's driving the timeline? (hard deadline, soft goal, ASAP) - What's the phased rollout plan? - What's the feature flag strategy? - What can be parallelized? What's serial? - What are the riskiest parts that need prototyping? </domain> <domain name="verification_environment"> **Gather runtime verification info (critical for Ralph autonomous execution):** - What's the tech stack? (framework, language, package manager) - How do you start the dev server? What port does it run on? - What database is used? How do you connect and query it directly? - What test frameworks are set up? (Jest, Pytest, Playwright, Cypress, etc.) - Are there existing e2e tests? What runner and how to execute them? - What's the typecheck command? Lint command? Build command? - Are there health check endpoints? - How do you currently verify a feature works? (manual steps we can automate) - What CI checks currently run? - What ORM/migration tool is used? (Prisma, Alembic, Drizzle, etc.) </domain>

</question_domains>
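
The verification_environment answers are the most mechanical part of the spec to record. Below is one hypothetical way to structure them; the field names and every example command are assumptions, not a prescribed format.

```
// Hypothetical structure for the runtime verification info gathered above.
interface VerificationEnvironment {
  stack: string;                          // framework, language, package manager
  devServer: { command: string; port: number };
  database: { kind: string; queryHowTo: string };
  tests: { unit?: string; e2e?: string }; // commands to run each suite
  typecheck?: string;
  lint?: string;
  build?: string;
  healthCheck?: string;                   // endpoint, if any
  migrationTool?: string;                 // e.g. Prisma, Alembic, Drizzle
}

// Example values only; every command here is project-specific.
const env: VerificationEnvironment = {
  stack: "Next.js + TypeScript + pnpm",
  devServer: { command: "pnpm dev", port: 3000 },
  database: { kind: "Postgres", queryHowTo: "psql $DATABASE_URL" },
  tests: { unit: "pnpm test", e2e: "pnpm playwright test" },
  typecheck: "pnpm tsc --noEmit",
  lint: "pnpm lint",
  build: "pnpm build",
  healthCheck: "/api/health",
  migrationTool: "Prisma",
};
```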

<interview_techniques>

<technique name="probing_vague_answers"> When user says "it depends" or "we'll figure it out later": - "What specifically does it depend on?" - "What would need to be true for option A vs. option B?" - "If you had to decide RIGHT NOW, which way would you lean?" </technique> <technique name="revealing_assumptions"> When user says something confidently: - "What would change if [assumption] turned out to be wrong?" - "How would you verify that [assumption] is true before building?" - "Have you seen this work elsewhere? What was different?" </technique> <technique name="forcing_priorities"> When everything seems important: - "If you could only ship ONE of these, which one?" - "What would you cut if timeline was halved?" - "Which of these would you be embarrassed NOT to have?" </technique> <technique name="exploring_conflict"> When two answers seem incompatible: - "Earlier you said X, but this suggests Y. How do these reconcile?" - "This creates a tradeoff between A and B. Where do you land?" </technique>

</interview_techniques>

<success_criteria>
The interview is complete when:

  • All domains have been covered with project-specific questions
  • The user has made explicit decisions on all tradeoffs
  • No "TBD" or "we'll figure it out" remains in critical areas
  • Edge cases and failure modes have concrete handling strategies
  • The spec could be handed to a developer who would know what to build
  • The user confirms "this is complete enough to start building"
</success_criteria>

<spec_template_location>
See templates/spec-template.md for the output structure.
</spec_template_location>

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. All are in the Coding category; no summaries were provided by the upstream source.

  • sqlalchemy-postgres
  • generate-prd
  • pytest-best-practices
  • create-plans