output-wrong-task

The model produces correct-looking output that addresses a different task than the one requested — typically a related but distinct interpretation of an ambiguous prompt.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "output-wrong-task" with this command: npx skills add mvogt99/output-wrong-task


The output is well-formed and internally coherent but answers the wrong question. The model resolved an ambiguous prompt toward the most common interpretation rather than the one the user intended, or it latched onto a salient keyword and addressed that instead of the full request. The result can look convincing enough to pass a quick read.

Symptoms

  • The deliverable matches the topic of the request but misses its purpose — e.g., "explain this function" gets documentation instead of the debugging analysis asked for.
  • A code task produces something runnable, but it solves a simpler or adjacent problem than the one specified.
  • The model answers the first clause of a multi-part question and silently drops the rest.
  • The output would be correct for a different, more common prompt that shares keywords with this one.
  • Asking the model to verify what it just did reveals that it believed it was solving a different problem all along.

What to do

  • Restate the concrete deliverable, not just the topic. Instead of "help me with authentication," say "write a middleware function that checks for a valid JWT in the Authorization header and returns 401 if missing or invalid — nothing else."
  • Break compound tasks apart. If the prompt has multiple independent requirements, submit them one at a time and verify each before continuing.
  • Anchor the output format explicitly. Specifying the expected structure (function signature, JSON schema, number of steps, file to modify) gives the model less room to substitute a related but wrong output.
  • Before accepting the output, map it back to the original requirement: does this output satisfy the stated goal, not just a plausible-sounding version of it?
  • If the wrong-task output keeps recurring on the same prompt, the prompt likely has a latent ambiguity. Identify which interpretation the model chose and add a clause that explicitly rules it out.
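The "restate the concrete deliverable" advice above can be made tangible. A minimal, framework-agnostic sketch of the JWT-middleware deliverable follows; `validate_token` is a hypothetical callable standing in for a real JWT verification library, and the `headers`-dict interface is an assumption, not any specific framework's API. The point is that a spec this precise (signature, inputs, exact failure behavior) leaves the model little room to answer a different question.

```python
def require_jwt(headers, validate_token):
    """Check for a valid JWT in the Authorization header.

    Returns None if the request may proceed, or a (401, reason)
    tuple if the token is missing or invalid -- nothing else,
    exactly as the restated deliverable demands.

    `validate_token` is a hypothetical callable (token -> bool)
    standing in for real signature/expiry verification.
    """
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return (401, "Missing bearer token")
    token = auth[len("Bearer "):]
    if not validate_token(token):
        return (401, "Invalid token")
    return None
```

Checking a model's output against a signature like this is mechanical: if it returns documentation, a login form, or a full auth service instead, the mismatch is immediately visible.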

