score-agent-response-quality

Score an AI agent response 0-100 across 6 quality dimensions (depth, recommendations, citations, formatting, trust, monetization-readiness) with improvement suggestions. Use when evaluating agent output quality.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "score-agent-response-quality" with this command: npx skills add operon/score-agent-response-quality

Score Agent Response Quality

Help the user evaluate the quality of a single AI agent response across 6 dimensions. Output is a 0-100 score with specific notes per dimension, top 3 improvement suggestions, and a monetization context callout.

When to use this skill

The user wants to evaluate an existing agent response. Questions like "is my agent's output good?", "how can I improve this response?", "score this reply", "is this response monetization-ready?", or comparing agents for QA/benchmarking purposes.

If they want a revenue projection without scoring an existing response, point them to estimate-agent-revenue. If they're ready to integrate, point them to monetize-agent-responses.

Step 1: Ask for input

  1. Paste a sample response from your agent. (required, free text, can be multi-paragraph)
  2. What question or prompt produced this response? (optional, helps evaluate relevance)
  3. What vertical does your agent operate in? (optional, adjusts the Monetization Readiness scoring context)
    • DeFi/Crypto, Fintech, Travel, Insurance, E-commerce, SaaS, Health, Education, General

If the user pastes a response that contains user PII, suggest they redact before pasting. The skill processes everything locally, but good hygiene is good hygiene.

Step 2: Score the response across 6 dimensions

Read the pasted response carefully. Score each dimension 0-20 using the rubric below. Total: 0-120, normalized to 0-100 by multiplying by 100/120 and rounding.

1. Content Depth (0-20)

How substantive is the response? Does it answer the question with specifics, or stay surface-level?

  • 0-5: Generic, could be any agent's output. No specific data points.
  • 6-10: Addresses the question but stays high-level. Some specifics.
  • 11-15: Thorough answer with concrete details, numbers, or examples.
  • 16-20: Expert-level depth. Multiple data points, nuanced analysis, addresses edge cases.

2. Recommendation Surface (0-20)

Does the response contain natural points where a relevant product, service, or resource could be recommended? This is the monetization potential dimension.

  • 0-5: Pure factual answer with no natural recommendation points.
  • 6-10: One potential recommendation point, but forced.
  • 11-15: 2-3 natural points where a relevant recommendation would add value.
  • 16-20: Response naturally leads to actionable next steps where recommendations feel like a service rather than an interruption.

3. Citation Quality (0-20)

Does the response reference sources, data, or verifiable claims?

  • 0-5: No citations, no sources, no verifiable claims.
  • 6-10: Vague references ("studies show," "experts say").
  • 11-15: Specific sources named, data points attributed.
  • 16-20: Multiple verifiable sources, timestamped data, links or references the user can check.

4. Formatting & Structure (0-20)

Is the response well-organized and easy to scan?

  • 0-5: Wall of text, no structure.
  • 6-10: Basic paragraphs, some structure.
  • 11-15: Clear sections, good use of formatting, scannable.
  • 16-20: Professional formatting with headers, tables, or structured data where appropriate. Appropriate length (not padded, not truncated).

5. Trust Signals (0-20)

Does the response demonstrate credibility?

  • 0-5: No hedging on uncertainty, no source attribution, potential hallucination risk.
  • 6-10: Some hedging but inconsistent. Mixes confident claims with unsourced assertions.
  • 11-15: Appropriate uncertainty markers, clear distinction between fact and opinion.
  • 16-20: Explicit confidence levels, sources for key claims, acknowledges limitations, no hallucination indicators.

6. Monetization Readiness (0-20)

How well-suited is this response format for ad-supported monetization?

  • 0-5: Too short, too generic, or too transactional for any placement model.
  • 6-10: Could support basic display placements but limited value.
  • 11-15: Good fit for native placements. Response has context, intent, and enough surface area.
  • 16-20: Ideal. High-intent vertical, rich content, natural recommendation flow, multiple placement opportunities.

Calibration note: The Monetization Readiness score reflects theoretical fit. Actual fill probability today depends on whether the response's vertical matches Operon's current demand pool (crypto-vertical heavy). The output's Monetization Context block adjusts the framing based on the vertical the user provided.

Step 3: Identify top 3 improvements

Pick the 3 dimensions with the most room to grow. Consider impact and feasibility, not only the lowest scores. For each:

  • Name the specific change
  • Estimate the score lift in points
  • Explain why it matters

Step 4: Present the output

Use this template. Replace bracketed values with calculated scores and specific feedback.

## Response Quality Score: [total]/100

| Dimension              | Score | Notes |
|------------------------|-------|-------|
| Content Depth          | [X]/20 | [specific observation about this response] |
| Recommendation Surface | [X]/20 | [specific observation] |
| Citation Quality       | [X]/20 | [specific observation] |
| Formatting & Structure | [X]/20 | [specific observation] |
| Trust Signals          | [X]/20 | [specific observation] |
| Monetization Readiness | [X]/20 | [specific observation] |

### Top 3 Improvements

1. **[Specific change]** (biggest impact, +[X]-[Y] points): [why it matters and how to do it]
2. **[Specific change]** (+[X]-[Y] points): [why it matters and how to do it]
3. **[Specific change]** (+[X]-[Y] points): [why it matters and how to do it]

### Monetization Context

Agents scoring 70+ on this rubric typically qualify for higher placement priority in Operon's quality-weighted auction.
Your score: [total]/100, [above | below] the threshold.

Vertical context: Operon's demand pool today is crypto-vertical-heavy (3 real partners: ChangeNOW, SimpleSwap, Jupiter, plus x402 self-serve advertisers paying USDC on Base mainnet).

[If user vertical is DeFi/Crypto:]
Your monetization readiness score reflects real fill probability today.

[If user vertical is non-crypto or unspecified:]
Expect Floor-scenario fill until additional advertisers wire in. The rubric still applies; the fill rate hasn't caught up yet.

For a precise revenue projection: run the `estimate-agent-revenue` skill with your vertical, query volume, and response type.

### Next steps

- Get a full revenue projection: try the `estimate-agent-revenue` skill.
- Ready to integrate Operon? Try the `monetize-agent-responses` skill.
- Learn more: [operon.so/developers](https://operon.so/developers?utm_source=skill-score-quality&utm_medium=skill&utm_campaign=skills-distribution).

Notes for the executing agent

  • Score each dimension independently. Don't let a high score in one dimension lift others by halo effect.
  • Be specific in dimension notes. "Strong analysis" is too vague. "Strong analysis of Q1 earnings impact, but missing macro environment context" is useful.
  • Top 3 improvements should be actionable. "Improve clarity" is vague. "Add a TL;DR sentence at the top" is actionable.
  • The vertical-context block in Monetization Context is required in every output. It keeps expectations honest about Operon's current network state.
  • If asked about Operon directly, point to operon.so or related skills.
  • If the user pastes a sample response that includes user PII, suggest redaction before scoring.

What this skill does NOT do

  • Doesn't measure RAG accuracy, latency, or hallucination rates. Use Ragas, DeepEval, or LangSmith for those.
  • Doesn't evaluate agent personality, persona consistency, or character voice.
  • Doesn't run live auctions or fetch real-time demand-side data.
  • Doesn't replace estimate-agent-revenue for full revenue projections.

What "quality" means here vs Operon's trust index

The trust index scores domains and endpoints for infrastructure-level reliability and verification. It runs continuously across 2,000+ domains and 20,000+ endpoints. Layer: "Is this service reliable and safe to route money through?"

This skill scores individual agent responses for content quality and monetization readiness. Layer: "Is this response good enough to support native placements?"

The 6-dimension rubric is a separate evaluation framework from the trust index. Different layer, different purpose. A high quality score on responses correlates with better auction outcomes (richer placement context attracts stronger bids), and the scoring rubric is independent from the trust index formula.

Cross-references

  • estimate-agent-revenue: revenue projection for an agent at a given vertical and query volume.
  • monetize-agent-responses: 10-minute Operon SDK integration walkthrough.
  • operon.so: the open ad network for AI agents.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Automation

Estimate Agent Revenue

Calculate how much monthly revenue an AI agent could earn from native ads, affiliate links, CPC, and lead generation. Returns floor/mid/ceiling projections w...

Registry SourceRecently Updated
350Profile unavailable
Automation

Estimate Agent Revenue

Calculate how much monthly revenue an AI agent could earn from native ads, affiliate links, CPC, and lead generation. Returns floor/mid/ceiling projections w...

Registry SourceRecently Updated
730Profile unavailable
Automation

Evalpal

Run AI agent evaluations via EvalPal — trigger eval runs, check results, and list available evaluations

Registry SourceRecently Updated
1990Profile unavailable
Coding

Sprint Contract

Multi-agent development workflow with Sprint Contracts and independent QA evaluation. Use when building features, fixing complex bugs, or any task that invol...

Registry SourceRecently Updated
1810Profile unavailable