bias-audit

Bias Audit — Decision-Framing Agent Skill for Surfacing Bias Before It Hardens. Use it when the user needs a disciplined protocol and a fixed output contract rather than a generic answer.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install the skill:

Install skill "bias-audit" with this command: npx skills add clarkchenkai/bias-audit-clarkchenkai

Bias Audit — Decision-Framing Agent Skill for Surfacing Bias Before It Hardens

Use this skill when the task matches the protocol below.

Activation Triggers

  • loaded or emotionally slanted questions
  • false binary choices
  • 'obvious' conclusions with weak evidence
  • project, people, or pricing decisions driven by recent vivid examples
  • cases where the wording is already nudging the answer
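As an illustration, the triggers above could be approximated with a crude keyword heuristic. The phrase lists and function name here are hypothetical sketches, not part of the skill itself; real detection would need much richer signals than substring matching.

```python
# Hypothetical sketch: a crude keyword heuristic for the activation
# triggers above. Phrase lists are illustrative, not exhaustive.
TRIGGER_PHRASES = {
    "loaded question": ["obviously", "clearly", "everyone knows", "no-brainer"],
    "false binary": ["either we", "only two options", "or else"],
    "availability": ["just last week", "i just saw", "remember when"],
}

def detect_triggers(text: str) -> list[str]:
    """Return the names of trigger categories whose phrases appear in text."""
    lowered = text.lower()
    return [
        name
        for name, phrases in TRIGGER_PHRASES.items()
        if any(phrase in lowered for phrase in phrases)
    ]
```

In practice a skill would treat a match as a reason to enter the protocol below, not as proof of bias.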

Core Protocol

Step 1: Capture the original framing

Quote or restate the request as it was given so the bias is visible.

Step 2: Identify the bias signals

Look for anchoring, framing effects, loss aversion, confirmation pressure, availability, and default-value distortion.

Step 3: Rewrite the question neutrally

Turn the loaded request into a cleaner assessment frame with fewer hidden assumptions.

Step 4: Surface missing evidence

Ask what counterevidence, baseline, or comparison is absent.

Step 5: Define decision criteria

Move the conversation from emotional momentum to explicit decision criteria and a concrete next action.
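The five steps above can be written down as an ordered checklist that an agent loop might iterate over. This encoding is illustrative only; the step names follow the headings above, and the prompts are paraphrased, not quoted from the skill.

```python
# Illustrative encoding of the five-step protocol as ordered data.
PROTOCOL_STEPS: list[tuple[str, str]] = [
    ("Capture the original framing",
     "Quote or restate the request so the bias is visible."),
    ("Identify the bias signals",
     "Check for anchoring, framing effects, loss aversion, "
     "confirmation pressure, availability, default-value distortion."),
    ("Rewrite the question neutrally",
     "Restate the request with fewer hidden assumptions."),
    ("Surface missing evidence",
     "Ask what counterevidence, baseline, or comparison is absent."),
    ("Define decision criteria",
     "Turn momentum into explicit criteria and a next action."),
]

def run_protocol(handle_step) -> None:
    """Apply a caller-supplied handler to each step, in order."""
    for name, prompt in PROTOCOL_STEPS:
        handle_step(name, prompt)
```

Keeping the steps as data rather than prose makes it easy to enforce that none are skipped.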

Output Contract

Always end with this six-part structure:

## Original Framing
[...]

## Bias Signals
[...]

## Neutral Reframe
[...]

## Missing Evidence
[...]

## Decision Criteria
[...]

## Recommended Next Step
[...]
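Because the contract is a fixed sequence of six markdown headings, conformance is easy to check mechanically. A minimal sketch (the function name is assumed, not part of the skill; only heading presence and order are checked, not section content):

```python
# Minimal sketch: verify a response contains the six contract headings
# in the required order. Section content is not validated.
REQUIRED_HEADINGS = [
    "## Original Framing",
    "## Bias Signals",
    "## Neutral Reframe",
    "## Missing Evidence",
    "## Decision Criteria",
    "## Recommended Next Step",
]

def follows_contract(response: str) -> bool:
    """Return True if all six headings appear, in order."""
    pos = 0
    for heading in REQUIRED_HEADINGS:
        idx = response.find(heading, pos)
        if idx == -1:
            return False
        pos = idx + len(heading)
    return True
```

Searching from the end of the previous match rejects responses that contain all six headings but in the wrong order.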

Response Style

  • Do not ridicule the user for being biased; make the bias legible.
  • Name the likely distortion with concrete language.
  • Prefer neutral restatements over vague calls for 'balance.'
  • Reduce heat without removing urgency where urgency is real.

Boundaries

  • It does not assume that model failures share the same psychology as human bias.
  • It does not replace domain evidence with abstract skepticism.
  • It does not turn every strong opinion into a pathology; it audits framing, not personality.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

YiHui Healthcheck

Host security hardening and risk-tolerance configuration for OpenClaw deployments. Use when a user asks for security audits, firewall/SSH/update hardening, r...

Security

Audit V1.0.3

Unified AI Company skill consolidating 16 department skills into one. Provides complete governance, finance, technology, security, legal, people, marketing,...

Security

YiHui GITHUB MONITOR

Monitor multiple GitHub repos with configurable alert policies for releases, PRs, and security, sending low-noise notifications via scheduled cron jobs.

Security

Git Secrets Scanner

Git security scanner: checks commits for leaked sensitive information (API keys, passwords, tokens).
