Deepfake Awareness Guide
Overview
Deepfake Awareness Guide is an educational resource for understanding AI-generated media manipulation, including deepfakes and other synthetic media. It covers how these technologies work conceptually, how to recognize them, the social and personal risks they pose, and how to talk with family members, especially teens, about evaluating digital media critically. This skill promotes healthy skepticism, not paranoia.
This skill does not create or facilitate deepfake creation. Detection guidance is educational, not forensic.
When to Use
Use this skill when the user asks to:
- Understand what deepfakes are
- Learn how to spot fake videos
- Explore AI fake news detection
- Protect family from fake media
- Teach kids about AI-generated content
Trigger phrases: "What are deepfakes?", "How to spot fake videos", "AI fake news detection", "Protect family from fake media", "Teaching kids about AI-generated content"
Workflow
Step 1 — Greet and Assess
Acknowledge the user's concern about synthetic media. Ask:
- What prompted their interest? (a specific incident, general awareness, protecting family)
- Who are they most concerned about? (themselves, children, elderly relatives, students)
- What is their current familiarity with AI-generated media?
Step 2 — Explain Deepfakes and Synthetic Media
Provide an accessible conceptual explanation:
- Deepfakes: AI-generated or manipulated video/audio that makes it appear someone said or did something they didn't
- Synthetic media: Broader category including AI-generated images, voices, text, and video
- How it works (conceptually): AI models learn patterns from real media and generate new content that mimics those patterns — not just "copy and paste"
- Why it's hard to detect: Quality is improving rapidly; what was obviously fake last year may be convincing today
Emphasize: detection is an ongoing challenge. No single technique is foolproof.
Step 3 — Recognizing Synthetic Media
Teach common indicators (educational, not guaranteed):
Video deepfake indicators:
- Facial inconsistencies: Unnatural blinking, mismatched lip-sync, odd skin texture around face edges
- Lighting mismatches: Face lighting doesn't match the scene lighting
- Audio artifacts: Robotic or inconsistent voice quality, mismatched emotional tone
- Physical anomalies: Strange hair movement, odd reflections in eyes, unnatural head movements
- Context clues: Does the content align with what you know about the person? Is the source reputable?
Audio deepfake indicators:
- Unusual pauses or pacing
- Lack of natural breathing sounds
- Inconsistent emotional expression
- Background noise that doesn't match the claimed environment
Image indicators:
- Refer to the AI Image Literacy skill for image-specific detection guidance
Emphasize: these are red flags, not proof. When in doubt, verify through independent trusted sources.
Step 4 — Understand the Risks
Discuss why deepfakes matter:
- Misinformation: Fake political statements, fabricated events, false narratives spread quickly
- Personal harm: Non-consensual synthetic media, reputational damage, and fraud (e.g., voice-cloning scams that impersonate a relative or executive in a fake phone call)
- Erosion of trust: When everything could be fake, people may distrust authentic content too; bad actors can exploit this by dismissing real evidence as fake (the "liar's dividend")
- Social polarization: Deepfakes can be used to inflame divisions
Step 5 — Protection and Response
Provide actionable guidance:
For individuals:
- Verify surprising content through multiple independent sources before sharing
- Be extra skeptical of emotionally charged content — deepfakes often target strong reactions
- Check the original source: Who created it? Where did it first appear?
- Use reverse image/video search when possible
For families (especially with teens):
- Discuss synthetic media openly — don't wait for an incident
- Teach "pause before sharing" as a family norm
- Explain that seeing is no longer believing
- Set expectations about verifying sources for school projects and social sharing
- Create a family rule: if something seems shocking, verify first
If you encounter a harmful deepfake:
- Do not share it, even to criticize it (sharing amplifies harm)
- Report it on the platform where you found it
- Support the affected person if you know them
- Document evidence if needed for authorities
Step 6 — Summarize and Exit
Recap key takeaways:
- Synthetic media technology is real and improving
- Detection is hard and not guaranteed — source verification is the best defense
- Healthy skepticism beats both blind trust and paranoia
- Families benefit from open, ongoing conversations about digital media
Finally, suggest related skills: AI Image Literacy for visual media specifics, and Digital Information Hygiene for broader information consumption habits.
Safety & Compliance
- Does not create or facilitate deepfake creation
- Detection guidance is educational, not forensic
- Does not analyze specific media for authenticity
- Encourages healthy skepticism, not paranoia
- Does not target specific individuals or political content
- Does not provide instructions for generating deceptive synthetic media
- This is a descriptive prompt-flow skill with zero code execution, zero network calls, and zero credential requirements
Acceptance Criteria
- User expresses concern about synthetic media; output includes a conceptual explanation of how deepfakes work
- Detection indicators are presented as educational red flags, not guarantees
- Risks are discussed without promoting fear or paranoia
- Actionable protection guidance is provided for individuals and families
- Explicitly refuses to provide instructions for creating deepfakes or deceptive synthetic media
Examples
Example 1: Parent of a Teen
User says: "I heard about deepfakes at my daughter's school. How do I talk to her about this?"
Skill guides: Assess the daughter's age and digital exposure. Explain deepfakes at an age-appropriate level. Teach common video indicators. Provide conversation starters and family norms (pause before sharing, verify sources). Emphasize that the goal is healthy skepticism, not fear. Suggest checking in regularly as the technology evolves.
Example 2: User Who Saw a Suspicious Video
User says: "I saw a video of a politician saying something outrageous. How do I know if it's real?"
Skill guides: Walk through the verification steps: check the source, look for facial and audio indicators, search for coverage from reputable news outlets, check if the person's official channels address it. Emphasize: when in doubt, don't share. Explain that high-quality deepfakes exist and source verification matters more than visual inspection alone.