product-analyst

Expert product analytics strategist for SaaS and digital products. Use when designing product metrics frameworks, funnel analysis, cohort retention, feature adoption tracking, A/B testing, experimentation design, data instrumentation, or product dashboards. Covers AARRR, HEART, behavioral analytics, and impact measurement.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install: npx skills add ncklrs/startup-os-skills/ncklrs-startup-os-skills-product-analyst

Product Analyst

Strategic product analytics expertise for data-driven product decisions — from metrics framework selection to experimentation design and impact measurement.

Philosophy

Great product analytics isn't about tracking everything. It's about measuring what matters to drive better product decisions.

The best product analytics:

  1. Start with decisions, not data — What will you do differently based on this metric?
  2. Instrument once, measure forever — Invest in solid event tracking upfront
  3. Balance leading and lagging — Predict outcomes, don't just report them
  4. Make data accessible — Self-serve dashboards beat SQL queues
  5. Experiment before you ship — Validate hypotheses with real users

How This Skill Works

When invoked, apply the guidelines in rules/, organized by prefix:

  • metrics-* — Frameworks (AARRR, HEART), KPI selection, metric hierarchies
  • funnel-* — Conversion analysis, drop-off diagnosis, optimization
  • cohort-* — Retention analysis, segmentation, lifecycle tracking
  • feature-* — Adoption tracking, usage patterns, feature success
  • experiment-* — A/B testing, hypothesis design, statistical rigor
  • instrumentation-* — Event tracking, data modeling, collection best practices (see the event sketch after this list)
  • dashboard-* — Visualization, stakeholder reporting, self-serve analytics
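
For the instrumentation rules, it helps to fix one event shape early. The sketch below assumes a verb-object naming convention; the TrackingEvent type and its fields are illustrative placeholders, not a required schema.

  # Illustrative analytics event shape; names and fields are placeholders.
  from datetime import datetime, timezone
  from typing import TypedDict

  class TrackingEvent(TypedDict):
      event: str        # verb-object name, e.g. "project_created"
      user_id: str      # stable internal id, never raw PII
      timestamp: str    # ISO 8601, UTC
      properties: dict  # event-specific context

  event: TrackingEvent = {
      "event": "project_created",
      "user_id": "u_123",
      "timestamp": datetime.now(timezone.utc).isoformat(),
      "properties": {"template": "blank", "source": "onboarding"},
  }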

Core Frameworks

AARRR (Pirate Metrics)

Stage        Question                                Key Metrics
Acquisition  Where do users come from?               Traffic sources, CAC, signup rate
Activation   Do they have a great first experience?  Time-to-value, setup completion, aha moment
Retention    Do they come back?                      DAU/MAU, D1/D7/D30 retention, churn
Revenue      Do they pay?                            Conversion rate, ARPU, LTV
Referral     Do they tell others?                    NPS, referral rate, viral coefficient
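
To make the funnel concrete, here is a minimal Python sketch of stage-to-stage conversion computed from raw (user_id, event_name) pairs. The stage and event names are placeholders, not a prescribed schema.

  # Minimal sketch: stage-to-stage conversion through an AARRR-style funnel.
  from collections import defaultdict

  FUNNEL = ["visited", "signed_up", "completed_setup", "returned_d7", "upgraded"]

  def funnel_conversion(events):
      """events: iterable of (user_id, event_name) pairs."""
      users_by_stage = defaultdict(set)
      for user_id, event_name in events:
          users_by_stage[event_name].add(user_id)
      report, prev = [], None
      for stage in FUNNEL:
          count = len(users_by_stage[stage])
          rate = 1.0 if prev is None else (count / prev if prev else 0.0)
          report.append((stage, count, rate))  # unique users, share of prior stage
          prev = count
      return report

Drop-off diagnosis starts where the third column dips: the stage whose share of the previous stage falls furthest below its peers is the one to investigate first.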

HEART Framework (Google)

Dimension     Definition                    Signal Types
Happiness     User attitudes, satisfaction  NPS, CSAT, surveys
Engagement    Depth of involvement          Sessions, time-in-app, actions/session
Adoption      Uptake of new users/features  New users, feature adoption %
Retention     Continued usage over time     Retention curves, churn rate
Task Success  Efficiency and completion     Task completion, error rate, time-on-task
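
As one worked Engagement signal, DAU/MAU stickiness falls straight out of activity logs. A minimal sketch, assuming (user_id, date) activity rows and a 28-day MAU window, which is one common convention rather than a rule:

  # Minimal sketch: DAU/MAU "stickiness" as an Engagement signal.
  import datetime as dt

  def stickiness(activity, day):
      """activity: iterable of (user_id, dt.date) rows; day: dt.date."""
      dau = {u for u, d in activity if d == day}
      window_start = day - dt.timedelta(days=27)  # 28 days incl. `day`
      mau = {u for u, d in activity if window_start <= d <= day}
      return len(dau) / len(mau) if mau else 0.0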

The Metrics Hierarchy

                    ┌─────────────────┐
                    │   North Star    │  ← Single metric that matters most
                    │     Metric      │
                    ├─────────────────┤
                    │    Primary      │  ← 3-5 key performance indicators
                    │      KPIs       │
                    ├─────────────────┤
                    │   Supporting    │  ← Diagnostic and health metrics
                    │    Metrics      │
                    ├─────────────────┤
                    │   Operational   │  ← Day-to-day tracking
                    │    Metrics      │
                    └─────────────────┘
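
One lightweight way to apply the hierarchy is to write it down as data for a specific product. The metric names below describe a hypothetical SaaS product and are purely illustrative, not a recommendation.

  # Illustrative metric hierarchy for a hypothetical SaaS product.
  METRIC_HIERARCHY = {
      "north_star": "weekly_active_teams",
      "primary_kpis": ["signup_conversion", "d7_retention", "net_revenue_retention"],
      "supporting": ["activation_rate", "feature_adoption_pct", "nps"],
      "operational": ["daily_signups", "event_error_rate", "ticket_volume"],
  }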

Retention Analysis Types

┌───────────────────────────────────────────────────────────┐
│                    RETENTION VIEWS                        │
├───────────────────────────────────────────────────────────┤
│  N-Day Retention    │  % who return on exactly day N      │
│  Unbounded          │  % who return on or after day N     │
│  Bracket Retention  │  % who return within a time window  │
│  Rolling Retention  │  % still active after N days        │
└───────────────────────────────────────────────────────────┘
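
The first two views differ only in a single comparison. A minimal sketch, assuming a signup date per user and (user_id, date) activity rows:

  # Minimal sketch: N-day vs. unbounded retention for one signup cohort.
  def retention(signups, activity, n):
      """signups: {user_id: signup_date}; activity: (user_id, date) rows."""
      cohort = set(signups)
      if not cohort:
          return 0.0, 0.0
      n_day, unbounded = set(), set()
      for user, day in activity:
          if user not in signups:
              continue
          offset = (day - signups[user]).days
          if offset == n:
              n_day.add(user)       # back on exactly day N
          if offset >= n:
              unbounded.add(user)   # back on or after day N
      return len(n_day) / len(cohort), len(unbounded) / len(cohort)

Bracket retention follows the same shape, with a window test (lo <= offset <= hi) in place of the offset comparison.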

Experimentation Rigor Ladder

Level                  Approach                 When to Use
1. Gut                 Ship and hope            Never for important features
2. Qualitative         User research, feedback  Early exploration
3. Observational       Pre/post analysis        Low-risk changes
4. Quasi-experiment    Cohort comparison        When randomization is hard
5. A/B Test            Randomized control       Optimization, validation
6. Multi-armed Bandit  Adaptive allocation      When speed > precision
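
For level 5, the workhorse significance check on a conversion metric is the two-sided two-proportion z-test. A minimal sketch using the normal approximation; the counts and the alpha = 0.05 threshold are illustrative, and sample size should be set by a power calculation before the test runs.

  # Minimal sketch: two-proportion z-test for an A/B conversion test.
  from math import erf, sqrt

  def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
      """conv_*: conversion counts; n_*: users per arm. Returns (z, p)."""
      p_a, p_b = conv_a / n_a, conv_b / n_b
      pooled = (conv_a + conv_b) / (n_a + n_b)
      se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
      z = (p_b - p_a) / se
      p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
      return z, p_value

  z, p = two_proportion_z_test(conv_a=480, n_a=5000, conv_b=540, n_b=5000)
  print(f"z = {z:.2f}, p = {p:.4f}")  # reject at alpha = 0.05 if p < 0.05

Checking this repeatedly as data accrues is exactly the p-hacking anti-pattern listed below; fix the sample size, then test once.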

Metric Selection Criteria

Criterion     Question                         Good Sign
Actionable    Can we influence this?           A direct lever exists
Accessible    Can we measure it reliably?      <5% missing data
Auditable     Can we debug anomalies?          Clear calculation logic
Aligned       Does it tie to business value?   Executives care about it
Attributable  Can we trace changes to causes?  A/B testable

Anti-Patterns

  • Vanity metrics — Tracking what looks good, not what drives decisions
  • Metric overload — 50 dashboards, zero insights
  • Lagging only — Measuring outcomes without predictive indicators
  • Silent failures — No alerting on data quality issues
  • HiPPO-driven — Highest-paid person's opinion beats data
  • P-hacking — Running tests until you get significance
  • Ship and forget — Launching features without success criteria
  • Segment blindness — Looking only at averages, missing cohort differences

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals: proposal-writer, website-copy-specialist, remotion-animation, seo-content-strategist.