Cognitive Bias Buster

Recognize and counter 50+ cognitive biases in real-time thinking, decisions, and judgments.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill

Install skill "Cognitive Bias Buster" with this command: npx skills add harrylabsj/cognitive-bias-buster

Cognitive Bias Buster

Overview

Cognitive Bias Buster helps users recognize when their thinking is being distorted by systematic cognitive errors — and provides practical countermeasures for each. Drawing from decades of research in behavioral economics and cognitive psychology (Kahneman, Tversky, Ariely, and others), this skill catalogs 50+ biases organized into 6 families, with real-time detection prompts and de-biasing techniques for each.

This skill does not eliminate biases — it makes them visible so users can compensate. Biases are features of human cognition, not bugs to be "fixed." The goal is better awareness, not perfect rationality.

When to Use

Use this skill when the user asks to:

  • Identify cognitive biases
  • Check for bias in a decision
  • Understand why their judgment might be off
  • List common thinking errors
  • De-bias a specific situation
  • Learn about cognitive psychology
  • Review a decision for bias

Trigger phrases: "Cognitive bias", "Thinking error", "Why am I wrong?", "Bias check", "Am I biased?", "Mental blind spot", "De-bias", "Thinking trap", "Judgment error"

Workflow

Step 1 — Context Gathering

Understand what the user is analyzing:

  • What decision, belief, or judgment are you examining?
  • What is at stake? (Low, medium, high stakes)
  • How much time do you have? (Quick scan vs. deep review)
  • Are you reviewing something already decided, or preventing bias in an upcoming decision?
  • Who else is involved? (Solo decision, group decision, advice for someone else)

Step 2 — Bias Family Scan

Run through the 6 bias families and flag which might be active in this context. For each family, ask the diagnostic questions.

Family 1: Too Much Information (Filtering Biases)

We are bombarded with information; we filter aggressively — and often wrongly.

| Bias | Signal | Countermeasure |
| --- | --- | --- |
| Availability Heuristic | You judge likelihood by how easily examples come to mind | Ask: "What data am I NOT seeing?" |
| Attentional Bias | You only notice what you're already looking for | Deliberately seek disconfirming evidence |
| Anchoring | First number/info disproportionately influences you | Resist giving the first number; seek multiple anchors |
| Confirmation Bias | You seek, interpret, and remember info that supports what you already believe | Actively look for evidence against your view |
| Observer-Expectancy Effect | You unconsciously influence outcomes to match expectations | Blind yourself to conditions when possible |
| Selection Bias | Your sample is not representative of the whole | Check: "Who is missing from this data?" |

Diagnostic questions: What information came first? What am I ignoring? What would prove me wrong?

Family 2: Not Enough Meaning (Pattern-Making Biases)

We compulsively find patterns and meaning — even in random noise.

| Bias | Signal | Countermeasure |
| --- | --- | --- |
| Apophenia / Patternicity | You see meaningful patterns in random data | Ask: "Is this pattern real or am I imposing it?" |
| Clustering Illusion | You see streaks or clusters that are statistically expected | Check base rates and sample size |
| Gambler's Fallacy | You believe past independent events affect future probabilities | Remind yourself: coins have no memory |
| Hindsight Bias | You believe past events were more predictable than they were | Document predictions before outcomes are known |
| Illusion of Control | You overestimate your influence over outcomes | Distinguish skill from luck in this context |
| Framing Effect | The same info presented differently changes your choice | Reframe the decision in opposite terms |
| Sunk Cost Fallacy | You continue because of past investment, not future value | Ask: "If I hadn't started this, would I start now?" |

Diagnostic questions: Am I seeing a real pattern or imposing one? Would I interpret this differently if the outcome were unknown?
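The clustering illusion row can be made concrete with a quick simulation (a hedged sketch; the seed, trial count, and flip count are arbitrary choices): in 100 fair coin flips, a run of six or seven identical outcomes is statistically expected, not evidence of a hot streak.

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(42)  # fixed seed so the sketch is reproducible
trials = [[random.choice("HT") for _ in range(100)] for _ in range(1000)]
streaks = [longest_streak(t) for t in trials]
avg = sum(streaks) / len(streaks)
print(f"average longest streak across 1000 runs of 100 fair flips: {avg:.1f}")
```

If a "streak" in your own data is no longer than what fair randomness routinely produces, the pattern is likely imposed, not real.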

Family 3: Need to Act Fast (Action Biases)

We must act quickly to survive — so we jump to conclusions and favor the familiar.

| Bias | Signal | Countermeasure |
| --- | --- | --- |
| Action Bias | You prefer doing something over doing nothing, even when waiting is better | Ask: "What is the cost of waiting 24 hours?" |
| Hyperbolic Discounting | You strongly prefer immediate rewards over larger future ones | Imagine your future self and calculate the annualized value |
| Status Quo Bias | You prefer things stay the same | Ask: "If the status quo weren't an option, what would I choose?" |
| Default Effect | You stick with pre-set options | Consciously reject defaults and compare alternatives |
| Optimism Bias | You believe you're less likely to experience negative events | Look at base rates for people like you |
| Overconfidence | You overestimate your knowledge or abilities | Estimate confidence, then halve it |
| Dunning-Kruger Effect | Low competence leads to overconfidence; high competence to underconfidence | Seek external feedback on your actual skill level |

Diagnostic questions: Am I rushing because of pressure or real urgency? What would a neutral observer recommend?
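The "calculate annualized value" countermeasure for hyperbolic discounting takes only a few lines. A minimal sketch (the dollar amounts and function name are illustrative, not part of the skill):

```python
def annualized_rate(present_value, future_value, years):
    """Equivalent annual growth rate implied by waiting for the larger, later reward."""
    return (future_value / present_value) ** (1 / years) - 1

# $100 now vs. $110 in six months: waiting is equivalent to a ~21% annual
# return, far above typical market rates, so grabbing the $100 now is costly
# unless the money is urgently needed.
rate = annualized_rate(100, 110, 0.5)
print(f"implied annual rate: {rate:.1%}")
```

Converting a delayed reward into an equivalent annual rate puts "now vs. later" choices on a common scale, which is exactly what hyperbolic discounting prevents your intuition from doing.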

Family 4: What Should We Remember? (Memory Biases)

We only keep a tiny fraction of what we experience — and we distort even that.

| Bias | Signal | Countermeasure |
| --- | --- | --- |
| Peak-End Rule | You judge experiences by their peak and end, not average | Calculate the actual average, not just the highlight |
| Rosy Retrospection | You remember the past as better than it was | Check contemporaneous records or journals |
| Telescoping Effect | You misdate events (recent feel older, old feel recent) | Verify dates from records |
| Suggestibility | You adopt memories suggested by others | Document events independently before discussing |
| False Memory | You remember things that didn't happen | Treat vivid memories with skepticism if uncorroborated |
| Context Effect | Recall depends on the context you're in now | Revisit the original context mentally or physically |

Diagnostic questions: Is this memory or a reconstruction? What would my diary from that time say?

Family 5: Social & Group Biases

We evolved to coordinate in groups — but groupthink has its own distortions.

| Bias | Signal | Countermeasure |
| --- | --- | --- |
| Bandwagon Effect | You believe/do things because many others do | Ask: "Would I believe this if nobody else did?" |
| Authority Bias | You attribute greater accuracy to authority figures | Evaluate the claim independently of who said it |
| Halo Effect | One positive trait colors your whole perception | Evaluate traits independently |
| In-Group Favoritism | You favor people in your group | Consciously evaluate out-group members on merits |
| Out-Group Homogeneity | You see out-groups as more alike than they are | Learn individual differences within out-groups |
| Groupthink | Groups prioritize harmony over critical evaluation | Assign a devil's advocate role |
| False Consensus | You overestimate how much others agree with you | Survey a diverse sample anonymously |
| Fundamental Attribution Error | You attribute others' behavior to character but your own to circumstance | Consider situational factors for others too |
| Self-Serving Bias | You attribute successes to yourself, failures to circumstances | Reverse it: what role did luck play in success? |
| Naive Realism | You believe you see reality objectively while others are biased | Accept that your perception is also constructed |

Diagnostic questions: Would I believe this if it came from a stranger? Am I conforming or reasoning?

Family 6: Probability & Statistical Biases

We are terrible at intuiting probabilities — and we consistently get the math wrong.

| Bias | Signal | Countermeasure |
| --- | --- | --- |
| Base Rate Neglect | You ignore general statistical info in favor of specific info | Always start with the base rate |
| Conjunction Fallacy | You believe two conditions together are more likely than one alone | Remember: P(A and B) ≤ P(A) |
| Zero-Risk Bias | You prefer to reduce a small risk to zero over a greater reduction in a larger risk | Compare absolute risk reduction, not relative |
| Survivorship Bias | You focus on successes and ignore failures | Ask: "What happened to the ones who failed?" |
| Neglect of Probability | You treat all non-zero probabilities as similar | Assign rough numerical estimates |
| Law of Small Numbers | You generalize from small samples | Check sample size before drawing conclusions |
| Berkson's Paradox | You misinterpret correlation in selected populations | Consider how the sample was selected |

Diagnostic questions: What are the actual numbers? What is the sample size? What is the base rate?
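Base rate neglect is easiest to see with numbers. A minimal Bayes' theorem sketch (the rates are invented for illustration): a test with a 99% true-positive rate and a 5% false-positive rate, applied to a condition with a 1% base rate, still yields mostly false positives.

```python
def posterior(base_rate, true_positive_rate, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    p_positive = (base_rate * true_positive_rate
                  + (1 - base_rate) * false_positive_rate)
    return base_rate * true_positive_rate / p_positive

p = posterior(base_rate=0.01, true_positive_rate=0.99, false_positive_rate=0.05)
print(f"P(condition | positive test) = {p:.0%}")  # about 17%, not 99%
```

The intuitive answer ("the test is 99% accurate, so a positive means 99%") ignores that true cases are rare to begin with; starting from the base rate corrects it.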

Step 3 — Deep-Dive on Flagged Biases

For each bias family that scored high risk, go deeper:

  • Which specific biases in this family are most likely active?
  • What evidence would confirm or disconfirm each?
  • What is the direction of distortion? (Overestimate, underestimate, ignore, distort)
  • What would a completely neutral person conclude?

Step 4 — Apply Countermeasures

For the top 2–3 most relevant biases, select and apply countermeasures:

  • Pre-mortem: Imagine the decision failed and work backward to why
  • Red team: Ask someone to argue against your position
  • Reference class forecasting: Look at outcomes for similar situations
  • Blinding: Remove identifying info when evaluating options
  • Checklist: Use a structured checklist rather than intuition
  • Sleep on it: Introduce time delay for important decisions
  • Algorithm / rule: Use a pre-committed decision rule
  • External view: Ask what advice you'd give a friend in the same situation

Step 5 — Document and Review

Create a brief bias audit record:

  • Decision or judgment analyzed
  • Biases flagged (with confidence level: low/medium/high)
  • Countermeasures applied
  • Revised conclusion (if different from initial intuition)
  • Review date: when will you check if the decision was sound?
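If you want to log audits programmatically, the record above maps onto a tiny data structure. A sketch only; the class and field names are illustrative, not a fixed schema from the skill:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class BiasAudit:
    """One bias-audit record, mirroring the Step 5 fields."""
    decision: str
    biases_flagged: dict      # bias name -> confidence: "low" | "medium" | "high"
    countermeasures: list     # de-biasing techniques applied
    revised_conclusion: str
    review_date: date         # when to check whether the decision was sound

audit = BiasAudit(
    decision="Accept the competing job offer?",
    biases_flagged={"status quo bias": "high", "anchoring": "medium"},
    countermeasures=["pre-mortem", "external view", "sleep on it"],
    revised_conclusion="Judge the total package, not the salary anchor alone",
    review_date=date(2026, 6, 1),
)
print(audit.decision)
```

Writing the record down before the outcome is known also counters hindsight bias at review time.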

Safety & Compliance

  • This skill provides educational awareness of cognitive biases; it does not claim to eliminate them
  • No psychological diagnosis or treatment is offered
  • For decisions with major consequences (health, legal, financial), recommend consulting professionals in addition to bias-checking
  • Biases are normal human features, not personal flaws — the tone should be compassionate, not shaming
  • Some biases conflict with each other in practice; the skill helps users navigate trade-offs, not find single "right" answers

Acceptance Criteria

  1. At least 50 biases are cataloged across 6 families
  2. Each bias includes: name, signal (how to detect it), and countermeasure
  3. Diagnostic questions are provided for each family
  4. At least 8 de-biasing techniques are available
  5. The workflow adapts to stakes level and time available
  6. The tone is educational and non-judgmental
  7. At least 3 examples show bias detection in real scenarios
  8. Documentation template is provided for bias audits

Examples

Example 1: Job Offer Decision

User says: "I got a job offer with a higher salary but I'm worried about leaving my current team."

Skill guides:

  • Context: High-stakes career decision, medium time available
  • Flagged biases:
    • Status quo bias (preferring current situation)
    • Sunk cost fallacy (investment in current relationships)
    • Loss aversion (fear of losing current stability)
    • Anchoring (fixating on the salary number)
    • Optimism bias (assuming new role will be great)
  • Countermeasures:
    • Pre-mortem: Imagine staying and being unhappy in 2 years — why?
    • External view: What would you advise a friend with this offer?
    • Reference class: What happened to others who made similar moves?
    • Sleep on it: Wait 48 hours before responding
  • Revised analysis: Evaluate total compensation, growth trajectory, cultural fit, and risk tolerance separately from the anchor of salary alone

Example 2: Investment Decision

User says: "My friend made a lot on crypto. Should I invest too?"

Skill guides:

  • Context: Financial decision, high risk of multiple biases
  • Flagged biases:
    • Bandwagon effect (because friend did it)
    • Availability heuristic (recent success stories are vivid)
    • Survivorship bias (not hearing from those who lost)
    • Optimism bias (believing you'll succeed where others might not)
    • Overconfidence (overestimating how well you understand the asset)
  • Countermeasures:
    • Red team: Argue why this is a terrible idea
    • Reference class: What percentage of crypto investors lose money?
    • Blinding: Evaluate the asset without knowing friend's outcome
  • Safety: Redirect to professional financial advisor for investment advice

Example 3: Disagreement with a Partner

User says: "My partner and I keep arguing about the same thing. I know I'm right."

Skill guides:

  • Context: Interpersonal conflict, recurring pattern
  • Flagged biases:
    • Confirmation bias (only remembering evidence supporting your view)
    • Fundamental attribution error (attributing partner's view to character)
    • Naive realism (believing you see objectively, they are biased)
    • Self-serving bias (framing the conflict in ways that favor you)
    • False consensus (assuming neutral observers would agree with you)
  • Countermeasures:
    • Red team: Write the strongest version of your partner's argument
    • External view: What would a couples counselor say?
    • Blind review: Summarize the dispute as if describing two strangers
  • Revised approach: Shift from "who is right" to "what is the shared goal"

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

决策清单 (Decision Checklist)

Based on Charlie Munger's latticework of mental models and cognitive-bias checks, this skill systematically evaluates an investment idea's circle of competence, risk, and decision soundness to support disciplined investing.
General

Syléa

Personal life coach & decision assistant. Analyzes dilemmas with a probability-based framework, tracks life goals across 5 psychological dimensions, and runs...

Research

🧠 Thinking Frameworks

Provides structured deep analysis and decision-making using 20 human thinking frameworks like critical thinking, first principles, red team, and design think...

Research

Thinking Framework

Loads any thinker's, leader's, philosopher's, or organization's complete mental operating system directly into the AI — so the AI reasons FROM inside that co...
