product-customer-discovery

Plan and run product-focused customer discovery interviews and synthesize actionable customer insights. Use this skill to define discovery goals and hypotheses, identify an ICP and segments, create a recruiting plan, write an interview guide, conduct interviews, analyze notes (themes, JTBD (jobs to be done), pains/gains), and produce a discovery readout with opportunities, risks, and next experiments.


Install skill "product-customer-discovery" with this command: npx skills add piperubio/ai-agents/piperubio-ai-agents-product-customer-discovery

Product Customer Discovery

Purpose

  • Reduce product/market risk by learning how target users behave today, what problems they truly have, and what they already do to solve them.
  • Turn qualitative conversations into decisions: who to build for (ICP), what to solve (problem framing), and what to test next (experiments).

Quick triggers

Use this skill when the user asks for:

  • “product customer discovery”, “user research”, “problem interviews”, “exploratory interviews”
  • “write an interview script/guide”, “what questions should I ask users”
  • “define ICP/personas/segments”, “JTBD”, “pain points”, “opportunity sizing (qual)”
  • “synthesize interview notes”, “extract themes/insights”, “create a discovery report”

Inputs to ask for (minimum)

  1. Product/service and stage (idea, MVP, growth) + decision(s) discovery must unblock
  2. Target audience hypotheses (who) and problem hypotheses (what/why)
  3. Constraints: timeline, number of interviews, geography/language, incentives, recruiting channels

Outputs (suggested)

  • Discovery plan: goals, hypotheses, target segments, recruiting criteria, timeline
  • Interview guide: opening, context questions, story prompts, probing, wrap-up
  • Synthesis: themes + evidence (quotes), JTBD/pains/gains, opportunity areas, risks/unknowns
  • Next steps: prioritized experiments (e.g., landing page, concierge test, prototype test)

Core workflow (end-to-end)

  1. Align on outcomes: confirm what decision will be made from the research and what “good evidence” looks like.
  2. Define hypotheses: write 5–10 falsifiable statements (ICP, problem, willingness, constraints, alternatives).
  3. Select participants: define inclusion/exclusion criteria, quotas across segments, and screening questions.
  4. Design the interview:
    • prefer “tell me about the last time…” over “would you use…”
    • focus on current behavior, existing alternatives, constraints, and consequences
  5. Run interviews:
    • start with rapport + consent; keep it conversational
    • ask for specific incidents; probe for frequency, severity, triggers, and workarounds
    • capture verbatims and observable facts; separate facts from interpretations
  6. Synthesize:
    • affinity-map notes into themes; label with evidence + confidence
    • map to JTBD (situation → motivation → desired outcome) and pains/gains
    • identify “strong signals” (repeated patterns, costly workarounds, high stakes)
  7. Decide & recommend:
    • rank opportunities by severity, frequency, reachable audience, and differentiation
    • propose next experiments with clear success metrics and cheapest test first
  8. Share readout: present insights, what changed vs. assumptions, open questions, and the plan.
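Step 7's ranking can be sketched as a simple weighted score over the four criteria named above. The weights, field names, and sample opportunities below are illustrative assumptions, not part of the skill; tune the weights to your context.

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    name: str
    severity: int         # 1-5: cost of the problem when it occurs
    frequency: int        # 1-5: how often target users hit it
    reach: int            # 1-5: share of the ICP affected
    differentiation: int  # 1-5: room to beat existing alternatives

# Illustrative weights (an assumption, not a standard): severity and
# frequency dominate because they proxy willingness to change behavior.
WEIGHTS = {"severity": 0.35, "frequency": 0.30,
           "reach": 0.20, "differentiation": 0.15}

def score(o: Opportunity) -> float:
    """Weighted sum of the four ranking criteria."""
    return (WEIGHTS["severity"] * o.severity
            + WEIGHTS["frequency"] * o.frequency
            + WEIGHTS["reach"] * o.reach
            + WEIGHTS["differentiation"] * o.differentiation)

# Hypothetical opportunities pulled from synthesized themes.
opportunities = [
    Opportunity("manual data re-entry", severity=5, frequency=4,
                reach=3, differentiation=2),
    Opportunity("slow approval path", severity=3, frequency=5,
                reach=4, differentiation=3),
]

for o in sorted(opportunities, key=score, reverse=True):
    print(f"{o.name}: {score(o):.2f}")
```

A score like this is a tiebreaker, not a verdict: when two opportunities land close together (as above), let evidence strength and the cost of the cheapest next experiment decide.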

Interview guide template (outline)

  1. Intro: who you are, purpose, confidentiality, recording consent, timebox
  2. Background: role, context, responsibilities, tools/workflow
  3. Story prompts (core): “Walk me through the last time you…”
  4. Probing:
    • triggers: “what started this?”
    • frequency: “how often?”
    • severity: “what happens if you don’t solve it?”
    • alternatives: “what did you try? why that? what did it cost?”
    • decision-making: “who’s involved? what’s the budget/approval path?”
  5. Wrap-up: biggest pain, ideal outcome, who else to talk to, follow-up permission
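The outline above can also be kept as data, so each study swaps in its own story prompts while the structure stays fixed. The section names and prompts are taken from the template; the rendering helper is a hypothetical sketch.

```python
# Minimal sketch: the interview guide outline as a reusable structure.
GUIDE_TEMPLATE = {
    "Intro": ["who you are", "purpose", "confidentiality",
              "recording consent", "timebox"],
    "Background": ["role", "context", "responsibilities", "tools/workflow"],
    "Story prompts": ["Walk me through the last time you..."],
    "Probing": [
        "triggers: what started this?",
        "frequency: how often?",
        "severity: what happens if you don't solve it?",
        "alternatives: what did you try? why that? what did it cost?",
        "decision-making: who's involved? what's the budget/approval path?",
    ],
    "Wrap-up": ["biggest pain", "ideal outcome",
                "who else to talk to", "follow-up permission"],
}

def render_guide(template: dict[str, list[str]]) -> str:
    """Render the guide as a numbered plain-text outline."""
    lines = []
    for i, (section, items) in enumerate(template.items(), start=1):
        lines.append(f"{i}. {section}")
        lines.extend(f"   - {item}" for item in items)
    return "\n".join(lines)

print(render_guide(GUIDE_TEMPLATE))
```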

Quality checklist

  • Goals and hypotheses are explicit and falsifiable
  • Participant criteria and screening reduce bias (no “friends and fans” only)
  • Questions avoid leading language and future hypotheticals
  • Notes separate verbatims/facts from interpretations
  • Insights are backed by evidence, not anecdotes
  • Recommendations include next experiments and success metrics

Common mistakes (avoid)

  • Asking for feature opinions instead of behavior (“Would you use X?”)
  • Interviewing only easy-to-reach users and generalizing
  • Treating one loud quote as a “theme” without triangulation
  • Skipping the decision step (insights without a recommendation and next tests)


Related Skills

Related by shared tags or category signals:

  • commercial-proposal-writer (Automation)
  • project-planning (Automation)
  • project-stewardship (Automation)
  • project-stakeholder-communication (Automation)