decision-auditor

Audits decisions for cognitive biases, runs premortems on plans, and reframes choices to reveal hidden assumptions. Use when evaluating decisions under uncertainty, reviewing plans for bias, assessing probability and risk, running premortems, checking for anchoring or availability bias, or analyzing why a judgment might be wrong.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and the repository's scripts before running anything.

Install the skill by sending this command to your AI assistant:

npx skills add rohanpatriot/thinking-skills/rohanpatriot-thinking-skills-decision-auditor

Decision Auditor

Based on Thinking, Fast and Slow by Daniel Kahneman.

I help you catch the predictable errors in human judgment before they derail your decisions.

What I Do

Your mind runs on two systems: one fast and automatic (System 1), one slow and deliberate (System 2). Most decision errors come from System 1's shortcuts—heuristics that usually work but fail in predictable ways. I help you spot these failures and correct for them.

When to Use Me

  • Evaluating decisions under uncertainty
  • Reviewing plans for cognitive biases
  • Assessing probability and risk
  • Analyzing why a judgment might be wrong
  • Designing choice architectures

Workflows

Bias Check

When checking a decision for cognitive biases, follow workflows/bias-check.md

Premortem

When running a premortem analysis on a plan, follow workflows/premortem.md

Reframe

When reframing a decision to reveal hidden assumptions, follow workflows/reframe.md

Reference Guides

For detailed detection and correction guides:

Quick Bias Checklist

Use this when you need a fast scan without the full workflow:

  • Substitution: Did we answer the actual question, or an easier one?
  • WYSIATI: What information is missing that would be relevant?
  • Base rates: What typically happens in similar cases? Are we treating ours as special?
  • Anchoring: Where did our initial estimate come from? Would a different starting point change it?
  • Availability: Are we overweighting vivid, recent, or personal examples?
  • Affect: Are we conflating "I like this" with "this will succeed"?
  • Overconfidence: Is our confidence level justified by the evidence?
  • Planning fallacy: Are our estimates based on best-case scenarios?
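The fast scan above can also be captured as a small data structure, so a review tool or assistant can iterate the questions mechanically. This is a minimal illustrative sketch, not part of the skill itself; the dictionary keys and the `quick_scan` helper are assumptions, not an upstream API.

```python
# Illustrative only: the Quick Bias Checklist as data, keyed by bias name.
# Keys and helper names are hypothetical, not defined by the skill.
BIAS_CHECKLIST = {
    "substitution": "Did we answer the actual question, or an easier one?",
    "wysiati": "What information is missing that would be relevant?",
    "base_rates": "What typically happens in similar cases? Are we treating ours as special?",
    "anchoring": "Where did our initial estimate come from? Would a different starting point change it?",
    "availability": "Are we overweighting vivid, recent, or personal examples?",
    "affect": "Are we conflating 'I like this' with 'this will succeed'?",
    "overconfidence": "Is our confidence level justified by the evidence?",
    "planning_fallacy": "Are our estimates based on best-case scenarios?",
}

def quick_scan(flagged: set) -> list:
    """Return the checklist question for each bias flagged during review."""
    unknown = flagged - BIAS_CHECKLIST.keys()
    if unknown:
        raise ValueError(f"Unknown bias keys: {sorted(unknown)}")
    return [BIAS_CHECKLIST[key] for key in sorted(flagged)]
```

A reviewer (or an automated pass) could call `quick_scan({"anchoring", "planning_fallacy"})` to surface just the questions relevant to the biases suspected in a given decision.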

