AI Bias Detector
Overview
AI Bias Detector teaches users to recognize and mitigate the types of bias that appear in AI outputs. It covers training data bias, representation bias, cultural bias, linguistic bias, and temporal bias, providing practical detection checklists and prompting strategies to reduce biased responses. This skill builds critical awareness without promoting blanket distrust of AI.
This skill is educational: it does not claim to eliminate bias. It teaches awareness and mitigation rather than guaranteed fixes.
When to Use
Use this skill when the user asks to:
- Understand whether AI is biased
- Learn how to detect AI bias
- Explore the AI stereotypes problem
- Understand fairness in AI
- Examine AI cultural bias
Trigger phrases: "Is AI biased?", "How to detect AI bias", "AI stereotypes problem", "Fairness in AI", "AI cultural bias"
Workflow
Step 1 — Greet and Assess
Acknowledge the user's interest in bias awareness. Ask:
- What prompted their concern? (a specific AI output, general curiosity, professional need)
- In what domains do they use AI? (writing, research, decision support, creative work)
- Their current awareness level: have they noticed potential bias before?
Step 2 — Explain Why AI Is Biased
Provide a conceptual explanation of bias in AI systems:
- Training data bias: Models learn from historical data that may reflect past inequities, stereotypes, or underrepresentation
- Representation bias: Certain groups, cultures, or perspectives may be underrepresented in training data, leading to skewed outputs
- Cultural bias: Default assumptions often reflect the dominant cultural context of the training data (e.g., Western, English-speaking, tech-industry perspectives)
- Linguistic bias: Non-English languages or dialects may receive lower-quality outputs; certain terms carry unintended connotations
- Temporal bias: Training data has a cutoff date, so recent cultural shifts may be missing or misrepresented
Emphasize: bias is a technical and social phenomenon, not a moral failing of individual users.
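To make training data and representation bias concrete, a toy simulation can help. The sketch below is purely illustrative: the profile frequencies are invented, not drawn from any real corpus or model, and this skill itself executes no code. It shows how sampling from a skewed distribution reproduces that skew in the outputs.

```python
import random
from collections import Counter

# Invented frequencies standing in for a skewed training corpus:
# one profile dominates, so unconditional sampling reproduces the skew.
TOY_TRAINING_PROFILES = (
    ["young man in tech"] * 70
    + ["mid-career woman in retail"] * 15
    + ["older founder in manufacturing"] * 10
    + ["rural cooperative organizer"] * 5
)

def sample_outputs(n: int = 1000, seed: int = 0) -> Counter:
    """Draw n 'generated' descriptions from the toy distribution."""
    rng = random.Random(seed)
    return Counter(rng.choice(TOY_TRAINING_PROFILES) for _ in range(n))

for profile, count in sample_outputs().most_common():
    print(f"{profile}: {count / 10:.1f}%")  # n=1000, so count/10 is a percentage
```

Real models are far more complex, but the pattern is the point: whatever dominates the data tends to dominate the defaults.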
Step 3 — Detection Checklist
Teach users how to spot bias in AI outputs (an illustrative probe sketch follows the checklist):
Representation red flags:
- Does the output assume a default demographic (age, gender, ethnicity, nationality) when none was specified?
- Are certain roles or professions consistently associated with specific groups?
- Does the output ignore or erase the existence of certain populations?
Cultural red flags:
- Does the output assume Western norms as universal? (holidays, family structures, work culture, values)
- Are non-English contexts treated as afterthoughts?
- Does the output conflate "global" with "English-speaking developed world"?
Linguistic red flags:
- Does the output shift in quality or tone when the language or dialect changes?
- Are certain terms used in ways that carry unintended stereotypes?
Framing red flags:
- Does the output present one perspective as neutral or objective when it is actually contested?
- Are loaded assumptions embedded in seemingly factual statements?
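The checklist is designed for manual review, but the representation red flags can also be made countable. The sketch below assumes a hypothetical generate(prompt) function standing in for whatever model interface is available; the descriptor list is a deliberately crude assumption, and the code is illustrative only, not part of this skill's execution.

```python
import re
from collections import Counter
from typing import Callable

# Hypothetical descriptor lexicon; a real audit would use a richer,
# validated term set plus human review of each output.
DESCRIPTORS = ["young", "old", "man", "woman", "western", "asian", "african"]

def probe_defaults(generate: Callable[[str], str],
                   prompt: str = "Describe a successful entrepreneur.",
                   trials: int = 50) -> Counter:
    """Tally demographic descriptors across repeated underspecified prompts."""
    tally: Counter = Counter()
    for _ in range(trials):
        text = generate(prompt).lower()
        for word in DESCRIPTORS:
            if re.search(rf"\b{word}\b", text):  # whole-word match only
                tally[word] += 1
    return tally
```

A heavily skewed tally (for example, "young" and "man" appearing in nearly every output) is the default-demographic red flag made visible.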
Step 4 — Mitigation Prompting Strategies
Teach techniques for reducing bias in AI interactions (a template sketch follows the list):
- Explicit diversity: Request multiple perspectives directly ("Describe this from three different cultural viewpoints")
- Counterfactual framing: Ask "What if the opposite were true?" or "What would a critic say?"
- Specify context: Provide cultural, temporal, and demographic context so the AI does not assume defaults
- Check for blind spots: Ask "What perspectives might be missing from this analysis?"
- Cross-language verification: For important topics, compare outputs in different languages if possible
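These strategies can be combined into a single reusable wrapper. The template below is one hypothetical phrasing, not a canonical prompt; it stacks context specification, explicit diversity, counterfactual framing, and a blind-spot check from the list above.

```python
def debiased_prompt(question: str,
                    context: str = "unspecified region and time period",
                    viewpoints: int = 3) -> str:
    """Wrap a question with the Step 4 mitigation strategies."""
    return (
        f"Context: {context}. Do not assume defaults beyond this.\n"  # specify context
        f"Question: {question}\n"
        f"Answer from {viewpoints} distinct cultural viewpoints, "    # explicit diversity
        "labeling each. For each, note what a critic would say, "     # counterfactual framing
        "and end by listing perspectives that may still be missing."  # blind-spot check
    )

print(debiased_prompt("What makes an entrepreneur successful?",
                      context="small businesses in Southeast Asia, 2010s"))
```

The exact wording matters less than covering each strategy; users should adapt the template to their own domain.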
Step 5 — Practice with Examples
Offer to analyze a sample prompt/output together, or provide illustrative examples:
- Show an unbiased-looking output that contains hidden assumptions
- Demonstrate how reframing the prompt produces a more balanced response
- Practice spotting the red flags in a concrete example
Step 6 — Summarize and Exit
Recap the bias awareness framework. Emphasize:
- Bias detection is a skill that improves with practice
- No prompt fully eliminates bias — awareness is the goal
- Critical thinking matters more than blind trust or blanket rejection of AI
Finally, suggest related skills: Hallucination Detective for factual accuracy, AI Ethics Compass for broader ethical reflection.
Safety & Compliance
- Educational about bias as a technical and social phenomenon
- Does not make political claims about specific groups
- Does not claim to eliminate bias; teaches awareness and mitigation rather than guaranteed fixes
- Does not encourage adversarial or malicious use of bias knowledge
- Presents balanced critical thinking, not distrust of all AI
- This is a descriptive prompt-flow skill with zero code execution, zero network calls, and zero credential requirements
Acceptance Criteria
- User expresses concern about bias; output explains sources of AI bias conceptually
- A practical detection checklist with red-flag patterns is provided
- At least 3 mitigation prompting strategies are taught
- The tone promotes balanced critical thinking, not fear or dismissal of AI
- Does not make political claims or claim to fully eliminate bias
Examples
Example 1: User Noticing Stereotypes
User says: "I asked AI to describe a 'successful entrepreneur' and it always describes a young white man in tech. What's going on?"
Skill guides: Validate the observation. Explain representation bias and default assumptions. Walk through the detection checklist. Teach mitigation: "Describe successful entrepreneurs from diverse industries, ages, and backgrounds." Practice reframing the prompt. Discuss why this happens in training data.
Example 2: Researcher Seeking Balanced Perspectives
User says: "I use AI to summarize research on social policy. How do I make sure I'm not getting a biased summary?"
Skill guides: Assess the research domains. Teach framing red flags and source bias. Provide mitigation strategies: request multiple ideological perspectives, ask for limitations of each view, specify geographic and cultural context. Emphasize that AI summaries are starting points, not substitutes for reading primary sources.