Eval Guidance & Actionability
Use this skill to evaluate whether an assistant response provides clear, usable guidance the user can act on.
Inputs
Required:
- The assistant response text to evaluate.
- (Optional) The user’s request or goal (helps judge whether guidance matches what’s needed).
Internal Rubric (1–5)
5 = Provides concrete, actionable steps; prioritized; includes key details/constraints; user could execute without guessing
4 = Mostly actionable; minor missing details or ordering, but still usable
3 = Some guidance, but generic; missing important steps/details; requires user to infer next actions
2 = Largely non-actionable; mostly high-level advice; lacks steps or specifics
1 = No usable guidance; purely vague, deflective, or irrelevant to “what to do next”
Workflow
- Check whether the response includes specific next actions (steps, checklist, examples, decision points).
- Check completeness (missing prerequisites, constraints, caveats).
- Score on the 1–5 integer scale using the rubric above only.
- Write concise rationale tied directly to rubric criteria.
- List concrete suggestions that would make the response more actionable.
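The first workflow check can be sketched as a small heuristic; the cue patterns below are illustrative assumptions for detecting step-like lines, not part of the rubric itself.

```python
import re

# Illustrative cues that a line states a concrete next action.
# These patterns are assumptions, not a prescribed detection method.
ACTION_CUES = [
    r"^\s*\d+\.",                       # numbered steps: "1. ..."
    r"^\s*[-*] ",                       # bullet/checklist items
    r"\b(first|then|next|finally)\b",   # sequencing words
]

def count_action_signals(response_text: str) -> int:
    """Count lines that look like concrete next actions."""
    count = 0
    for line in response_text.splitlines():
        if any(re.search(p, line, re.IGNORECASE) for p in ACTION_CUES):
            count += 1
    return count
```

A response with several matching lines is a candidate for the 4–5 band; zero matches suggests the 1–2 band, pending the completeness check.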
Output Contract
Return JSON only. Do not include markdown, backticks, prose, or extra keys.
Use exactly this schema:
{ "dimension": "guidance_actionability", "score": 1, "rationale": "...", "improvement_suggestions": [ "..." ] }
Hard Rules
- "dimension" must always equal "guidance_actionability".
- "score" must be an integer from 1 to 5.
- "rationale" must be concise (max 3 sentences). Do not include step-by-step reasoning.
- "improvement_suggestions" must be a non-empty array of concrete edits.
- Never output text outside the JSON object.
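Taken together, the Output Contract and Hard Rules can be enforced with a minimal sketch like the following; the function name and error messages are illustrative assumptions.

```python
import json

def build_verdict(score: int, rationale: str, suggestions: list[str]) -> str:
    """Assemble the JSON verdict required by the Output Contract.

    Raises ValueError when a Hard Rule would be violated.
    """
    if not isinstance(score, int) or not 1 <= score <= 5:
        raise ValueError("score must be an integer from 1 to 5")
    if not suggestions:
        raise ValueError("improvement_suggestions must be non-empty")
    verdict = {
        "dimension": "guidance_actionability",  # fixed by the Hard Rules
        "score": score,
        "rationale": rationale,
        "improvement_suggestions": suggestions,
    }
    # json.dumps yields the JSON object alone: no markdown, no extra keys.
    return json.dumps(verdict)
```

For example, `build_verdict(4, "Mostly actionable; ordering unclear.", ["Number the steps."])` returns a single JSON object matching the schema, while `build_verdict(0, ...)` fails fast instead of emitting an invalid score.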