gap-analysis

Product and feature evaluation. Use when (1) Evaluating product/feature feasibility and market viability (2) Assessing product-market fit before investment (3) Comparing opportunities for roadmap prioritization (4) Competitive analysis to identify gaps (5) User asks "should we build X?" or "is this viable?" (6) Risk assessment for product decisions


Install skill "gap-analysis" with this command: `npx skills add jkappers/agent-skills/jkappers-agent-skills-gap-analysis`

Gap Analysis

Evaluate product and feature ideas using structured frameworks. Produce actionable recommendations with evidence.

1. Gather Context

Collect the following inputs before proceeding. Do not analyze with incomplete information.

| Input | Required Information |
|-------|----------------------|
| Idea | Problem solved, proposed solution |
| Users | Target segment, estimated count, current behavior |
| Alternatives | Existing solutions, workarounds, competitors |
| Success criteria | Measurable outcomes defining success |
| Constraints | Budget, timeline, team capabilities, technology limits |

Ask clarifying questions for any missing inputs.

2. DVFI Assessment

Score each dimension 1-5. Require evidence for each score.

| Dimension | Core Question | Evidence Sources |
|-----------|---------------|------------------|
| Desirability | Do users want this? | Problem frequency, pain severity, active solution-seeking, willingness to pay |
| Viability | Does the business case work? | Unit economics (LTV:CAC ≥ 3:1), margin structure, strategic alignment |
| Feasibility | Can the team build this? | Technical capability, resource availability, timeline realism |
| Integrity | Should this exist? | Ethics, regulatory compliance, societal impact, brand risk |

Scoring Scale

| Score | Criteria |
|-------|----------|
| 5 | Strong evidence, low risk, clear path forward |
| 4 | Solid evidence, minor concerns with known mitigations |
| 3 | Mixed signals, material uncertainties requiring validation |
| 2 | Weak evidence, significant concerns, major unknowns |
| 1 | Red flags, likely blockers, insufficient evidence to proceed |

3. Deep-Dive Triggers

Apply additional frameworks when DVFI scores indicate risk.

| Condition | Action | Reference |
|-----------|--------|-----------|
| Desirability < 4 | Run validation methods (MVP types, interviews, JTBD) | references/validation.md |
| Feasibility < 4 | Apply TELOS framework (Technical, Economic, Legal, Operational, Schedule) | references/feasibility.md |
| Viability < 4 | Calculate TAM/SAM/SOM with bottom-up validation | references/market-sizing.md |
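As a minimal sketch, the trigger rules above map each low-scoring dimension to its deep-dive reference. The function name, the lowercase dimension keys, and the score-dictionary shape are illustrative assumptions; only the three dimensions with listed references are mapped.

```python
# Deep-dive references per dimension, per the trigger table above.
DEEP_DIVES = {
    "desirability": "references/validation.md",
    "feasibility": "references/feasibility.md",
    "viability": "references/market-sizing.md",
}

def deep_dive_triggers(scores: dict[str, int], threshold: int = 4) -> dict[str, str]:
    """Return the reference to apply for each dimension scoring below threshold."""
    return {
        dim: ref
        for dim, ref in DEEP_DIVES.items()
        if scores.get(dim, 0) < threshold
    }
```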

4. Risk Assessment

Calculate: Risk Score = Likelihood (1-5) × Impact (1-5)

| Score | Level | Required Action |
|-------|-------|-----------------|
| 1-4 | Low | Document and monitor |
| 5-12 | Medium | Define mitigation plan before proceeding |
| 13-25 | High | Resolve or accept at executive level before proceeding |

Evaluate risks in each category: Market, Technical, Financial, Operational, Legal/Regulatory, Competitive.
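The scoring rule above translates directly to a small helper; the function name and the tuple return shape are assumptions:

```python
def risk_level(likelihood: int, impact: int) -> tuple[int, str]:
    """Classify a risk by Likelihood x Impact, both on a 1-5 scale."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if score <= 4:
        level = "Low"      # document and monitor
    elif score <= 12:
        level = "Medium"   # define mitigation plan before proceeding
    else:
        level = "High"     # resolve or accept at executive level
    return score, level
```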

5. Prioritization

Use when comparing multiple opportunities. Default to RICE scoring.

RICE = (Reach × Impact × Confidence) / Effort

| Factor | Definition | Values |
|--------|------------|--------|
| Reach | Users affected per quarter | Actual count |
| Impact | Effect magnitude | 3 = Massive, 2 = High, 1 = Medium, 0.5 = Low, 0.25 = Minimal |
| Confidence | Estimate certainty | 100% = High, 80% = Medium, 50% = Low |
| Effort | Work required | Person-months |

For alternative frameworks (ICE, Kano, MoSCoW), see references/prioritization.md.
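The RICE formula can be illustrated with a small helper using the value scales from the table; the function name and argument conventions are assumptions:

```python
def rice_score(reach: float, impact: float, confidence: float, effort: float) -> float:
    """RICE = (Reach x Impact x Confidence) / Effort.

    reach: users affected per quarter (actual count);
    impact: 0.25-3 scale; confidence: 0.5, 0.8, or 1.0;
    effort: person-months.
    """
    if effort <= 0:
        raise ValueError("effort must be a positive number of person-months")
    return reach * impact * confidence / effort
```

For example, a feature reaching 1,000 users per quarter with High impact (2), Medium confidence (0.8), and 4 person-months of effort scores (1000 × 2 × 0.8) / 4 = 400.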

6. Competitive Analysis

Include when competitors exist in the space.

| Gap Type | Definition | Strategic Implication |
|----------|------------|-----------------------|
| Advantage | We lead | Defend and extend |
| Parity | Competitors have, we lack | Close gap to compete |
| Opportunity | No one has | First-mover potential |
| Investment | Requires major effort | Evaluate ROI carefully |

Map features using: ● Full support, ◐ Partial, ○ None
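A sketch of how a gap type might be derived from the ● / ◐ / ○ support markers. The marker-to-level mapping and the "Matched" label for equal non-zero support are assumptions; the Investment type depends on effort rather than support levels, so it is not derived here.

```python
# Ordinal support levels for the feature-map markers above.
LEVELS = {"○": 0, "◐": 1, "●": 2}

def gap_type(ours: str, best_competitor: str) -> str:
    """Classify one feature given our marker and the strongest competitor's."""
    us, them = LEVELS[ours], LEVELS[best_competitor]
    if us > them:
        return "Advantage"    # we lead: defend and extend
    if us < them:
        return "Parity"       # competitors have, we lack: close gap to compete
    if us == 0:
        return "Opportunity"  # no one has: first-mover potential
    return "Matched"          # equal support on both sides (assumption: unnamed in the table)
```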

7. Recommendation

Conclude with one of four verdicts.

| Verdict | Criteria | Next Action |
|---------|----------|-------------|
| GO | All DVFI ≥ 4, no high risks | Proceed to implementation planning |
| CONDITIONAL GO | Mixed scores with addressable gaps | Proceed after specified conditions met |
| PIVOT | Core value exists, current approach flawed | Redesign with specific changes |
| NO GO | Blockers in ≥ 2 dimensions or unmitigable high risk | Archive learnings, do not proceed |

Every recommendation includes:

  • Key findings summary (3-5 bullets)
  • Critical assumptions that must hold true
  • Specific next steps with owners
  • Success metrics to track post-launch
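The score-derivable verdict rules can be sketched as follows. Treating a score ≤ 2 as a blocker is an assumption, and PIVOT is omitted because deciding that core value exists despite a flawed approach requires human judgment, not arithmetic.

```python
def verdict(dvfi: dict[str, int], high_risks: int = 0,
            unmitigable_high_risk: bool = False) -> str:
    """Apply the GO / CONDITIONAL GO / NO GO criteria from the table above."""
    # Assumption: a dimension scoring <= 2 counts as a blocker.
    blockers = sum(1 for score in dvfi.values() if score <= 2)
    if blockers >= 2 or unmitigable_high_risk:
        return "NO GO"
    if all(score >= 4 for score in dvfi.values()) and high_risks == 0:
        return "GO"
    return "CONDITIONAL GO"  # mixed scores with addressable gaps
```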

Output Format

## Gap Analysis: [Idea Name]

### Executive Summary
[Verdict] - [2-sentence rationale with key evidence]

### DVFI Assessment
| Dimension | Score | Evidence | Concerns |
|-----------|-------|----------|----------|
| Desirability | X/5 | [specific data points] | [or "None"] |
| Viability | X/5 | [specific data points] | [or "None"] |
| Feasibility | X/5 | [specific data points] | [or "None"] |
| Integrity | X/5 | [specific data points] | [or "None"] |

### Key Risks
| Risk | Score | Mitigation |
|------|-------|------------|
| [Risk 1] | L×I=X | [Specific action] |

### Competitive Position
[Include only if competitors exist]

### Recommendation
**[GO / CONDITIONAL GO / PIVOT / NO GO]**

Conditions (if applicable): [specific requirements]

### Next Steps
1. [Action] - Owner: [name] - By: [date/milestone]

Reference Thresholds

Product-Market Fit

| Metric | Pass Threshold |
|--------|----------------|
| Sean Ellis Test | ≥ 40% "very disappointed" |
| LTV:CAC ratio | ≥ 3:1 |
| DAU/MAU (SaaS) | ≥ 20% |
| NPS | ≥ 30 (acceptable), ≥ 50 (strong) |

Validation Tests

| Method | Pass Threshold |
|--------|----------------|
| Landing page signup | ≥ 10% conversion |
| Pre-order/deposit | ≥ 5% conversion |
| User interviews | 8/10 express strong intent |
| Fake door test | ≥ 3× baseline CTR |
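A minimal sketch for checking observed metrics against the numeric pass thresholds above; the metric keys and the use of fractions rather than percentages are illustrative assumptions (interview and fake-door results need baseline context, so they are left out).

```python
# Numeric pass thresholds from the tables above, expressed as fractions/ratios.
THRESHOLDS = {
    "sean_ellis_very_disappointed": 0.40,
    "ltv_cac_ratio": 3.0,
    "dau_mau": 0.20,
    "landing_page_conversion": 0.10,
    "preorder_conversion": 0.05,
}

def passes(metrics: dict[str, float]) -> dict[str, bool]:
    """Return pass/fail for each supplied metric with a known threshold."""
    return {k: v >= THRESHOLDS[k] for k, v in metrics.items() if k in THRESHOLDS}
```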

Failure Pattern Data (CB Insights)

| Cause | Frequency |
|-------|-----------|
| No market need | 42% |
| Lack of product-market fit | 34% |
| Overall new product failure rate | 70-80% |

