Deepfake Awareness Guide

Recognize AI-generated media manipulation and protect yourself and your family.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install the "Deepfake Awareness Guide" skill with this command: npx skills add harrylabsj/deepfake-awareness-guide

Overview

Deepfake Awareness Guide is an educational resource for understanding AI-generated media manipulation, including deepfakes and synthetic media. It covers how these technologies work conceptually, how to recognize them, the social and personal risks they pose, and how to discuss digital media skepticism with family members — especially teens. This skill promotes healthy skepticism, not paranoia.

This skill does not create or facilitate deepfake creation. Detection guidance is educational, not forensic.

When to Use

Use this skill when the user asks to:

  • Understand what deepfakes are
  • Learn how to spot fake videos
  • Explore AI fake news detection
  • Protect family from fake media
  • Teach kids about AI-generated content

Trigger phrases: "What are deepfakes?", "How to spot fake videos", "AI fake news detection", "Protect family from fake media", "Teaching kids about AI-generated content"

Workflow

Step 1 — Greet and Assess

Acknowledge the user's concern about synthetic media. Ask:

  • What prompted their interest? (a specific incident, general awareness, protecting family)
  • Who are they most concerned about? (themselves, children, elderly relatives, students)
  • What is their current familiarity with AI-generated media?

Step 2 — Explain Deepfakes and Synthetic Media

Provide an accessible conceptual explanation:

  • Deepfakes: AI-generated or manipulated video/audio that makes it appear someone said or did something they didn't
  • Synthetic media: Broader category including AI-generated images, voices, text, and video
  • How it works (conceptually): AI models learn patterns from real media and generate new content that mimics those patterns — not just "copy and paste"
  • Why it's hard to detect: Quality is improving rapidly; what was obviously fake last year may be convincing today

Emphasize: detection is an ongoing challenge. No single technique is foolproof.

Step 3 — Recognizing Synthetic Media

Teach common indicators (educational, not guaranteed):

Video deepfake indicators:

  • Facial inconsistencies: Unnatural blinking, mismatched lip-sync, odd skin texture around face edges
  • Lighting mismatches: Face lighting doesn't match the scene lighting
  • Audio artifacts: Robotic or inconsistent voice quality, mismatched emotional tone
  • Physical anomalies: Strange hair movement, odd reflections in eyes, unnatural head movements
  • Context clues: Does the content align with what you know about the person? Is the source reputable?

Audio deepfake indicators:

  • Unusual pauses or pacing
  • Lack of natural breathing sounds
  • Inconsistent emotional expression
  • Background noise that doesn't match the claimed environment

Image indicators:

  • Refer to AI Image Literacy skill for image-specific detection

Emphasize: these are red flags, not proof. When in doubt, verify through independent trusted sources.

Step 4 — Understand the Risks

Discuss why deepfakes matter:

  • Misinformation: Fake political statements, fabricated events, false narratives spread quickly
  • Personal harm: Non-consensual synthetic media, reputational damage, fraud (e.g., fake voice calls for scams)
  • Erosion of trust: When everything could be fake, people may distrust authentic content too
  • Social polarization: Deepfakes can be used to inflame divisions

Step 5 — Protection and Response

Provide actionable guidance:

For individuals:

  • Verify surprising content through multiple independent sources before sharing
  • Be extra skeptical of emotionally charged content — deepfakes often target strong reactions
  • Check the original source: who created this? Where did it first appear?
  • Use reverse image/video search when possible
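To make the reverse-search step concrete, here is a toy "average hash" in Python, the basic idea behind how reverse image search and duplicate-detection services match a suspect image against known originals. This is a conceptual sketch only, not a forensic tool: real services use far more robust perceptual hashing and large indexes, and the 4x4 pixel grids below are invented for illustration.

```python
# Conceptual sketch: average hash (aHash) for image similarity.
# Educational only; not a substitute for real verification tools.

def average_hash(pixels):
    """Hash a grid of grayscale values (0-255).

    Each bit is 1 if the pixel is brighter than the image's mean
    brightness; visually similar images yield similar bit patterns.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests similar images."""
    return sum(b1 != b2 for b1, b2 in zip(h1, h2))

# A tiny made-up "original" image as a 4x4 grayscale grid.
original = [
    [200, 200, 50, 50],
    [200, 200, 50, 50],
    [50, 50, 200, 200],
    [50, 50, 200, 200],
]
# A lightly re-encoded copy: slightly brighter, same structure.
reencoded = [[min(255, p + 10) for p in row] for row in original]
# An unrelated image with a different structure.
unrelated = [[(r * 4 + c) * 16 for c in range(4)] for r in range(4)]

d_copy = hamming_distance(average_hash(original), average_hash(reencoded))
d_other = hamming_distance(average_hash(original), average_hash(unrelated))
print(d_copy, d_other)  # the re-encoded copy's distance is much smaller
```

The takeaway for users: tools can flag that two images are probably the same underlying picture even after minor edits, which helps trace where content first appeared; they cannot by themselves prove a video or image is authentic.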

For families (especially with teens):

  • Discuss synthetic media openly — don't wait for an incident
  • Teach "pause before sharing" as a family norm
  • Explain that seeing is no longer believing
  • Set expectations about verifying sources for school projects and social sharing
  • Create a family rule: if something seems shocking, verify first

If you encounter a harmful deepfake:

  • Do not share it, even to criticize it (sharing amplifies harm)
  • Report it on the platform where you found it
  • Support the affected person if you know them
  • Document evidence if needed for authorities

Step 6 — Summarize and Exit

Recap key takeaways:

  • Synthetic media technology is real and improving
  • Detection is hard and not guaranteed — source verification is the best defense
  • Healthy skepticism beats both blind trust and paranoia
  • Families benefit from open, ongoing conversations about digital media

Suggest related skills: AI Image Literacy for visual media specifics; Digital Information Hygiene for broader information consumption habits.

Safety & Compliance

  • Does not create or facilitate deepfake creation
  • Detection guidance is educational, not forensic
  • Does not analyze specific media for authenticity
  • Encourages healthy skepticism, not paranoia
  • Does not target specific individuals or political content
  • Does not provide instructions for generating deceptive synthetic media
  • This is a descriptive prompt-flow skill with zero code execution, zero network calls, and zero credential requirements

Acceptance Criteria

  1. User expresses concern about synthetic media; output includes a conceptual explanation of how deepfakes work
  2. Detection indicators are presented as educational red flags, not guarantees
  3. Risks are discussed without promoting fear or paranoia
  4. Actionable protection guidance is provided for individuals and families
  5. Explicitly refuses to provide instructions for creating deepfakes or deceptive synthetic media

Examples

Example 1: Parent of a Teen

User says: "I heard about deepfakes at my daughter's school. How do I talk to her about this?"

Skill guides: Assess the daughter's age and digital exposure. Explain deepfakes at an age-appropriate level. Teach common video indicators. Provide conversation starters and family norms (pause before sharing, verify sources). Emphasize that the goal is healthy skepticism, not fear. Suggest checking in regularly as the technology evolves.

Example 2: User Who Saw a Suspicious Video

User says: "I saw a video of a politician saying something outrageous. How do I know if it's real?"

Skill guides: Walk through the verification steps: check the source, look for facial and audio indicators, search for coverage from reputable news outlets, check if the person's official channels address it. Emphasize: when in doubt, don't share. Explain that high-quality deepfakes exist and source verification matters more than visual inspection alone.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
