AI Ethics Compass

Navigate everyday ethical questions around AI use with practical frameworks.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant so it can learn the skill:

Install skill "AI Ethics Compass" with this command: npx skills add harrylabsj/ai-ethics-compass

AI Ethics Compass

Overview

AI Ethics Compass helps users navigate everyday ethical questions around AI use through practical frameworks. It covers transparency (when to disclose AI use), fairness, privacy, intellectual property, environmental impact, and educational integrity. Rather than providing ethical verdicts, it equips users with structured ways to think through their own situations.

When to Use

Use this skill when the user asks to:

  • Evaluate whether a specific use of AI is ethically sound
  • Decide if and how to disclose AI assistance
  • Understand intellectual property concerns with AI-generated content
  • Navigate academic or professional integrity questions with AI
  • Consider the broader impacts of their AI use

Trigger phrases: "Is it ethical to use AI for this?", "Should I disclose that I used AI?", "AI and plagiarism concerns", "What are the ethics of AI-generated content?", "Environmental impact of AI"

Workflow

Step 1 — Greet and Establish the Framework

Acknowledge that ethics questions rarely have simple answers. Introduce the approach: you will provide frameworks for thinking through the situation, not verdicts. Clarify that you do not provide legal advice.

Step 2 — Understand the Situation

Ask structured questions:

  • What is the specific use case? (content creation, learning, work, creative, etc.)
  • Who is the audience, and what expectations do they have?
  • What are the stakes? (personal, professional, academic, public)
  • What policies or norms already exist in this context? (workplace, school, platform)

Step 3 — Apply the Ethical Lenses

Walk through key ethical dimensions:

  • Transparency: Would a reasonable person want to know AI was involved? When is disclosure expected vs. optional?
  • Fairness: Does this use disadvantage others? Am I claiming credit for AI-generated work?
  • Privacy: Am I sharing others' data with AI? Am I respecting consent?
  • Intellectual property: Is the AI output derivative of copyrighted material? What are the current norms?
  • Environmental impact: What is the proportional environmental cost of this AI use?
  • Integrity: Does this align with stated policies, institutional rules, or professional standards?
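
The lenses above are purely conversational — the skill itself runs no code — but for illustration they can be sketched as a simple data structure an assistant might iterate over. This is a hypothetical sketch; the names `LENSES` and `lens_prompts` are not part of the skill:

```python
# Hypothetical representation of the six ethical lenses from Step 3.
# The questions mirror the skill text; the structure is illustrative only.
LENSES = {
    "transparency": "Would a reasonable person want to know AI was involved?",
    "fairness": "Does this use disadvantage others or claim credit for AI work?",
    "privacy": "Am I sharing others' data with AI, and do I have consent?",
    "intellectual_property": "Is the output derivative of copyrighted material?",
    "environmental_impact": "What is the proportional environmental cost?",
    "integrity": "Does this align with stated policies and professional standards?",
}

def lens_prompts(situation: str) -> list[str]:
    """Pair each lens question with the user's situation for reflection."""
    return [f"[{name}] {question} (context: {situation})"
            for name, question in LENSES.items()]
```

Walking a concrete situation through every lens, rather than stopping at the first concern, is the point of Step 3.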

Step 4 — Explore Trade-Offs

Help the user map the tensions in their situation:

  • Efficiency vs. authenticity
  • Convenience vs. transparency
  • Innovation vs. caution
  • Personal benefit vs. collective impact

Step 5 — Decision Guidance (Not Verdict)

Guide the user to their own conclusion:

  • What would they advise a friend in the same situation?
  • What would happen if everyone made the same choice?
  • What is the most transparent version of this action?
  • What does their institution or community expect?

Step 6 — Summarize and Exit

Summarize the framework applied and the user's own reasoning. Remind them that ethical AI use is a practice, not a checklist. Suggest related skills for deeper exploration.

Safety & Compliance

  • Does not provide legal advice or make ethical judgments
  • Presents frameworks for reflection, not prescriptions
  • Does not endorse specific political or ideological positions
  • Does not encourage AI use where prohibited by institution or law
  • This is a descriptive prompt-flow skill with zero code execution, zero network calls, and zero credential requirements

Acceptance Criteria

  1. User's ethical concern is explored through structured questioning
  2. At least 3 ethical dimensions (transparency, fairness, privacy, etc.) are applied
  3. User is guided to their own conclusion, not given a verdict
  4. Legal advice is explicitly not provided
  5. Institutional policies are referenced as important context
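
As a rough illustration, criterion 2 (at least three ethical dimensions applied) could be checked mechanically against a conversation transcript. This is a toy sketch assuming plain-text keyword matching — not something the skill actually runs, and the helper names are hypothetical:

```python
# Toy checker for acceptance criterion 2: at least three ethical
# dimensions should appear in the guidance transcript.
DIMENSIONS = ["transparency", "fairness", "privacy",
              "intellectual property", "environmental", "integrity"]

def dimensions_applied(transcript: str) -> list[str]:
    """Return the ethical dimensions mentioned in a transcript (case-insensitive)."""
    text = transcript.lower()
    return [d for d in DIMENSIONS if d in text]

def meets_criterion_2(transcript: str) -> bool:
    return len(dimensions_applied(transcript)) >= 3
```

Keyword matching is deliberately crude; in practice the criterion is judged by whether the dimensions were genuinely explored, not merely named.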

Examples

Example 1: Disclosure Dilemma

User says: "I used AI to help draft an important work email. Should I tell my boss?"

Skill guides: Explore transparency expectations in their workplace. Consider the email's stakes and audience. Walk through the question: would the recipient feel misled if they later learned AI was used? Guide to a personal conclusion.

Example 2: Educational Integrity

User says: "Is it cheating if I use AI to help me understand a textbook chapter, then write the essay myself?"

Skill guides: Distinguish between AI as a learning aid vs. AI as a substitute. Explore what their institution's academic integrity policy says. Map the difference between "AI helped me understand" and "AI wrote it for me."

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related skills are surfaced by shared tags or category signals.


Ethics Guardrails — Ethical Principles for AI Agents (Bots Matter)

Publish ethical guardrails for your AI agent — three questions, one template, no auth required. Declare what your agent will never do, how it resolves value...


AI 树德 (Shude): Righteousness-Quotient Ontological Ethics Safety System

Comprehensive AI ethics safety and authenticity monitoring based on Instinctual Integrity Quotient (IIQ) theory. Detects three alienation patterns, ensures v...


AI Governance Policy Builder

Framework to establish AI governance, assess AI maturity, manage algorithmic risks, conduct impact assessments, classify AI system risk, and ensure regulator...


SealVera

Tamper-evident audit trail for AI agent decisions. Use when logging LLM decisions, setting up AI compliance, auditing agents for EU AI Act, HIPAA, GDPR or SO...
