eval-guidance-actionability

Score assistant responses for guidance and actionability on a 1-5 scale, then return strict JSON containing the dimension, score, rationale, and improvement suggestions. Use when the user asks to evaluate how actionable, helpful, or step-by-step a response is.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "eval-guidance-actionability" with this command: npx skills add whitespectre/ai-assistant-evals/whitespectre-ai-assistant-evals-eval-guidance-actionability

Eval Guidance & Actionability

Use this skill to evaluate whether an assistant response provides clear, usable guidance the user can act on.

Inputs

Require:

  • The assistant response text to evaluate.
  • (Optional) The user’s request or goal (helps judge whether guidance matches what’s needed).

Internal Rubric (1–5)

5 = Provides concrete, actionable steps; prioritized; includes key details/constraints; user could execute without guessing
4 = Mostly actionable; minor missing details or ordering, but still usable
3 = Some guidance, but generic; missing important steps/details; requires user to infer next actions
2 = Largely non-actionable; mostly high-level advice; lacks steps or specifics
1 = No usable guidance; purely vague, deflective, or irrelevant to “what to do next”

Workflow

  1. Check whether the response includes specific next actions (steps, checklist, examples, decision points).
  2. Check completeness (missing prerequisites, constraints, caveats).
  3. Score on a 1-5 integer scale using the rubric only.
  4. Write concise rationale tied directly to rubric criteria.
  5. Produce concrete improvement suggestions, each a specific edit that would make the response more actionable.
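Steps 1-2 of the workflow can be partially mechanized before handing the response to a judge. The sketch below is a hypothetical helper (not part of the skill itself) that flags common markers of concrete guidance; the marker names and regexes are illustrative assumptions.

```python
import re

def actionability_signals(response: str) -> dict:
    """Heuristic pre-checks for workflow steps 1-2 (hypothetical helper,
    not shipped with the skill): detect concrete next-action markers."""
    return {
        # numbered steps such as "1. Do X" or "2) Do Y"
        "numbered_steps": bool(re.search(r"^\s*\d+[.)]\s", response, re.MULTILINE)),
        # markdown checklist items like "- [ ] task"
        "checklist": bool(re.search(r"^\s*[-*]\s*\[[ xX]\]", response, re.MULTILINE)),
        # fenced code examples
        "code_example": "```" in response,
        # explicit caveats or prerequisites (workflow step 2)
        "caveats": bool(re.search(r"\b(note|caveat|prerequisite|warning)\b",
                                  response, re.IGNORECASE)),
    }
```

These signals only inform the score; the final 1-5 judgment still comes from the rubric, since a response can list steps that are wrong or incomplete.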

Output Contract

Return JSON only. Do not include markdown, backticks, prose, or extra keys.

Use exactly this schema:

{ "dimension": "guidance_actionability", "score": 1, "rationale": "...", "improvement_suggestions": [ "..." ] }

Hard Rules

  • dimension must always equal "guidance_actionability".
  • score must be an integer from 1 to 5.
  • rationale must be concise (max 3 sentences).
  • Do not include step-by-step reasoning.
  • improvement_suggestions must be a non-empty array of concrete edits.
  • Never output text outside the JSON object.
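If the skill's output is consumed programmatically, the Hard Rules above can be checked with a small validator. This is an illustrative sketch, not something the skill ships; the function name and the three-sentence heuristic are assumptions.

```python
import json

ALLOWED_KEYS = {"dimension", "score", "rationale", "improvement_suggestions"}

def validate_eval_output(raw: str) -> dict:
    """Check a judge's raw output against the Hard Rules.
    Hypothetical validator for illustration only."""
    obj = json.loads(raw)  # rejects any text outside the JSON object
    assert set(obj) == ALLOWED_KEYS, "extra or missing keys"
    assert obj["dimension"] == "guidance_actionability"
    assert isinstance(obj["score"], int) and not isinstance(obj["score"], bool)
    assert 1 <= obj["score"] <= 5, "score out of range"
    # rough proxy for "max 3 sentences"
    sentences = [s for s in obj["rationale"].split(".") if s.strip()]
    assert len(sentences) <= 3, "rationale too long"
    assert isinstance(obj["improvement_suggestions"], list)
    assert obj["improvement_suggestions"], "suggestions must be non-empty"
    return obj
```

On failure the validator raises, which is a reasonable point to re-prompt the judge.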

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. No summaries are provided by the upstream source; all entries are from the repository source and need review.

  • eval-accuracy
  • eval-relevance
  • eval-session-scorecard
  • eval-conversation-flow