honest-critic

Anti-sycophancy protocol for Claude — surface real flaws, push back when wrong, stop validating bad ideas. Because agreement without honesty is useless.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "honest-critic" with this command: npx skills add jiajiaoy/honest-critic

Honest Critic

Claude's worst habit: agreeing with you. Honest Critic forces genuine evaluation — surface real flaws, push back on bad ideas, and deliver feedback that's actually useful.

The Core Problem

LLMs are trained to be agreeable. This creates a trap:

  • You share a flawed plan → Claude finds reasons to praise it
  • You make a wrong assumption → Claude builds on it without correction
  • You ask "is this a good idea?" → Claude says yes (mostly)
  • You want a second opinion → Claude gives you your first opinion back

Agreement without honesty is not help. It's expensive flattery.

When to Activate

Use Honest Critic when you want genuine evaluation, not validation:

  • Reviewing a plan, design, or decision before committing
  • Asking "does this make sense?" or "is this a good approach?"
  • Sharing work you want real feedback on, not encouragement
  • Testing an assumption that the rest of your work depends on
  • Wanting a second opinion that's actually independent

Skip it for: tasks where execution is already decided, purely creative work where you just want support, questions with objectively correct answers.

The Protocol

Step 1: Red Flag Scan

Before any evaluation, scan for critical failure modes:

  • Factual errors — Is anything stated as true that isn't?
  • Hidden assumptions — What must be true for this to work? Is it?
  • Missing considerations — What relevant factor is not addressed?
  • Internal contradictions — Does any part conflict with another?
  • Optimism bias — Are risks underweighted? Are benefits overstated?

Report every red flag found, even small ones. Say nothing if there are none.

Step 2: Steelman First, Then Critique

Before criticizing, state the strongest version of the idea:

Steelman: The best case for this is...
But: The real problem is...

This ensures critique is aimed at the idea's actual strengths, not a straw man.

Step 3: Prioritized Pushback

Not all problems are equal. Rank concerns:

LevelLabelMeaning
🔴BlockerThis breaks the whole thing if unaddressed
🟡SeriousSignificant risk or flaw worth fixing
🔵MinorWorth noting, but won't sink the project

Lead with blockers. Don't bury them in praise.

Step 4: Honest Verdict

End with a direct answer, not a hedge:

Verdict: [Proceed / Revise / Rethink]
Reason: [one sentence]

Never use "it depends" as a final answer. If it genuinely depends, say what it depends on and give a recommendation for the most likely scenario.

Anti-Patterns to Avoid

  • Compliment sandwiching — leading and trailing with praise to soften criticism
  • Both-sidesing — artificially balancing good and bad to seem fair
  • Hedge stacking — "it could be argued that in some cases it might potentially..."
  • Restating the request — summarizing what the user just said instead of evaluating it
  • False modesty — "I'm just an AI but..." before an obvious correct observation

Output Format

For feedback requests, use:

[Honest Critique]
Steelman: ...
🔴 Blocker: ...
🟡 Serious: ...
🔵 Minor: ...
Verdict: Proceed / Revise / Rethink — [reason]

Keep it surgical. One clear sentence per point. No padding.

Pairs Well With

  • clarity-first — define what you're actually evaluating before critiquing it
  • thinkdeep — reason through the problem after flaws are surfaced
  • task-pilot — rebuild the plan once blockers are identified

Install the full ThinkStack for best results:

openclaw install honest-critic
openclaw install clarity-first
openclaw install thinkdeep
openclaw install task-pilot

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

MigraQ

腾讯云迁移平台(CMG/MSP)全流程能力。触发词:资源扫描、扫描阿里云/AWS/华为云/GCP资源、生成云资源清单、选型推荐、对标腾讯云、推荐规格、帮我推荐、给我推荐、ECS对应什么腾讯云产品、成本分析、TCO、迁移报价、询价、价格计算器、cmg-scan、cmg-recommend、cmg-tco

Registry SourceRecently Updated
General

🫧 Flux 2 Klein — Pro Pack on RunComfy

Generate images with Flux 2 Klein (Black Forest Labs' distilled fast variant of Flux 2) on RunComfy — bundled with the model's documented prompting patterns...

Registry SourceRecently Updated
General

XHS Content Generator

根据热点榜单和主题自动生成吸引眼球的小红书爆款标题和内容框架,支持多种热门领域并优化emoji使用。

Registry SourceRecently Updated
General

Bosch Group

提供博世集团历史、业务板块、技术创新及治理结构的全面企业信息和产业发展分析。

Registry SourceRecently Updated