game-design-kpi-coverage-audit

Audit a game feature, roadmap candidate, UX improvement, support system, connective-tissue feature, or quality-of-life change for KPI coverage bias and measurement blind spots. Use when a team is overvaluing directly attributable metrics, neglecting foundational work that is hard to measure, or struggling to justify features whose value is real but not cleanly tied to one headline KPI.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "game-design-kpi-coverage-audit" with this command: npx skills add stanestane/game-design-kpi-coverage-audit

Game Design KPI Coverage Audit

Check whether the evaluation framework captures the whole value of the feature, or only the parts that are easy to measure.

Use this skill when a feature is being judged mainly through directly attributable KPIs and you suspect that measurement logic is biasing the team toward flashy, self-contained systems while undervaluing connective tissue, UX, quality-of-life, enabling systems, or long-term structural work.

Read references/value-types.md when identifying what kind of value the feature creates. Read references/blind-spot-patterns.md when diagnosing common KPI-coverage failures. Read references/recommendation-patterns.md when deciding how to justify or evaluate hard-to-measure work.

What to produce

Produce:

  1. Feature read - what the proposal is and what role it plays
  2. Current KPI framing - what the team is measuring or expecting to measure
  3. Coverage diagnosis - what value is covered versus ignored by those KPIs
  4. Blind spots - what may be neglected because it is hard to measure directly
  5. Risk of mis-prioritization - what bad decisions may result from the current framing
  6. Evaluation recommendation - how the feature should be justified, monitored, or compared more fairly

Process

1. Identify the role of the feature

Clarify whether the proposal is mainly:

  • a standalone engagement feature
  • a monetization feature
  • a progression layer
  • a UX improvement
  • connective tissue between systems
  • quality-of-life work
  • support infrastructure for future features
  • a clarity, pacing, or usability improvement

2. Identify the current KPI story

Ask (see the sketch after this list):

  • what metric is the team using to justify this feature?
  • is it tied to revenue, retention, engagement, conversion, economy balance, sentiment, or something else?
  • is the KPI direct, indirect, speculative, or absent?
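
Where the team wants to log these answers consistently across audits, one option is a small structured record. The shape below is an illustrative sketch, not part of the skill; every field name and union value is an assumption.

```typescript
// Illustrative record of a feature's KPI story. All names here are
// assumptions made for this sketch, not a schema defined by the skill.
type KpiTie =
  | "revenue"
  | "retention"
  | "engagement"
  | "conversion"
  | "economy-balance"
  | "sentiment"
  | "other";

type KpiDirectness = "direct" | "indirect" | "speculative" | "absent";

interface KpiStory {
  metric: string;            // e.g. "D7 retention", "ARPDAU"
  tie: KpiTie;               // what the metric is supposed to connect to
  directness: KpiDirectness; // how cleanly the feature is expected to move it
  rationale: string;         // the team's stated justification, verbatim
}
```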

3. Audit KPI coverage

Check whether the metric framing captures the actual value of the work. Look for value types such as the following (a coverage-check sketch appears after the list):

  • direct monetization
  • direct engagement lift
  • retention support
  • reduced friction
  • improved comprehension
  • stronger connective tissue between systems
  • future feature enablement
  • long-term sustainability
  • reduced support burden or balancing burden

4. Identify blind spots

Common signs:

  • the feature is dismissed because it cannot move one headline KPI on its own
  • direct-revenue features are always favored over structural health
  • UX work is treated as optional because it lacks clean attribution
  • foundational work is postponed until crisis
  • enabling systems are undervalued because they mostly improve the performance of other features

5. Judge prioritization risk

Ask:

  • what happens if this feature is judged only by direct KPI lift?
  • is the team likely to underinvest in maintenance, UX, clarity, infrastructure, or connective tissue?
  • could the current framework systematically reward short-term visible wins over long-term health?

6. Recommend better evaluation

Possible moves (a scorecard sketch follows the list):

  • use a mixed scorecard instead of one KPI
  • classify the feature as enabling or connective work rather than forcing a fake direct KPI
  • evaluate via downstream support of other systems
  • allocate protected capacity for high-value, hard-to-measure work
  • compare opportunity cost honestly rather than pretending everything must tie to one direct metric
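
As one example of the first move, a mixed scorecard can sit alongside the headline KPI. The sketch below is illustrative only: the dimensions, weights, and 0-5 scale are assumptions, and the `evidence` field exists to keep the team honest about which scores are measured versus judged.

```typescript
// Sketch of a mixed scorecard. Dimensions, weights, and the 0-5 scale
// are assumptions chosen for illustration, not prescribed by the skill.
interface ScorecardEntry {
  dimension: string;  // e.g. "direct KPI lift", "connective value"
  weight: number;     // relative importance; weights should sum to 1
  score: number;      // 0-5, from data where available, judgment otherwise
  evidence: "measured" | "proxy" | "judgment"; // how the score was sourced
}

function scorecardTotal(entries: ScorecardEntry[]): number {
  return entries.reduce((sum, e) => sum + e.weight * e.score, 0);
}

// Hypothetical UX overhaul that looks weak under a single direct KPI.
const uxOverhaul: ScorecardEntry[] = [
  { dimension: "direct engagement lift", weight: 0.3, score: 1, evidence: "measured" },
  { dimension: "reduced friction",       weight: 0.3, score: 4, evidence: "proxy" },
  { dimension: "future enablement",      weight: 0.2, score: 4, evidence: "judgment" },
  { dimension: "support burden relief",  weight: 0.2, score: 3, evidence: "judgment" },
];
// scorecardTotal(uxOverhaul) -> 2.9 of 5, versus 1 of 5 under the single-KPI lens
```

Keeping `evidence` visible in each row aligns with the style rules below: the scorecard does not invent measurable certainty for support work, it just makes the judgment explicit.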

Response structure

Feature Read

  • ...

Current KPI Framing

  • ...

Coverage Diagnosis

  • ...

Blind Spots

  • ...

Risk of Mis-Prioritization

  • ...

Recommendation

  • ...
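
If the audit output needs to be machine-readable, for instance for a roadmap tool, one possible shape mirrors the sections above. The field names are assumptions:

```typescript
// One possible machine-readable shape for the audit output. Section names
// mirror the response structure above; everything else is an assumption.
interface KpiCoverageAudit {
  featureRead: string[];
  currentKpiFraming: string[];
  coverageDiagnosis: string[];
  blindSpots: string[];
  misPrioritizationRisk: string[];
  recommendation: string[];
}
```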

Fast mode

  • What is this feature actually for?
  • What KPI is being used to justify it?
  • What important value is not being captured by that KPI?
  • What bad prioritization decision could this cause?
  • How should the team evaluate it more fairly?

Style rules

  • Do not dismiss KPIs; diagnose their limits.
  • Do not invent fake measurable certainty for support work.
  • Distinguish direct value from enabling value.
  • Prefer fairer framing over anti-metrics rhetoric.
  • Be specific about how blind spots distort roadmap decisions.

Working principle

Teams often prioritize what they can measure cleanly, not what matters most. Use this skill to expose where KPI logic is too narrow for the actual design value on the table.

