AI Task Privacy Brief
Use this skill when a user wants to turn a task into a privacy-aware AI task brief before sending it to an AI tool, chatbot, agent, automation, or vendor system.
Purpose
Create a prompt-only brief that helps the user get useful AI output while minimizing disclosure of secrets, credentials, private identifiers, confidential third-party data, and unnecessary personal details.
Operating Rules
- Do not include secrets, credentials, API keys, recovery codes, session tokens, private keys, passwords, financial account numbers, government IDs, medical record numbers, or private internal identifiers.
- Do not include confidential third-party data unless the user explicitly confirms they have permission and it is necessary for the task.
- Replace sensitive details with stable placeholders such as [CLIENT_A], [DATE_RANGE], [INTERNAL_TOOL], [CUSTOMER_SEGMENT], or [TRANSACTION_ID_REDACTED].
- Keep the brief prompt-only: write instructions, context, checklists, and verification steps; do not write runnable code or automation.
- Include a verification step that tells the user to inspect the AI output before trusting, sharing, executing, or publishing it.
- If the source material contains sensitive data, redact first and then draft the AI-facing prompt.
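The brief itself stays prompt-only, but the "redact first, then draft" step can optionally be done with a local pre-pass before any text leaves the user's machine. A minimal sketch, assuming hypothetical regex patterns that you would tune to your own data; the patterns and placeholder names here are illustrative, not a complete redaction solution:

```python
import re

# Illustrative only: map regex patterns for common sensitive values to
# stable placeholders. Real redaction needs patterns tuned to your data.
PLACEHOLDER_PATTERNS = [
    # Email addresses
    (re.compile(r"\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b"), "[EMAIL_REDACTED]"),
    # Long digit runs that look like account or card numbers
    (re.compile(r"\b\d{13,19}\b"), "[ACCOUNT_NUMBER_REDACTED]"),
    # key=value style credentials
    (re.compile(r"(?i)\b(api[_-]?key|token|secret)\s*[:=]\s*\S+"), "[CREDENTIAL_REDACTED]"),
]

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with its placeholder."""
    for pattern, placeholder in PLACEHOLDER_PATTERNS:
        text = pattern.sub(placeholder, text)
    return text

draft = "Send the report to ana@example.com, api_key: sk_live_abc123"
print(redact(draft))  # Send the report to [EMAIL_REDACTED], [CREDENTIAL_REDACTED]
```

A pattern pass like this catches mechanical leaks (emails, key-value credentials) but not contextual ones (a client named in prose), so it supplements, rather than replaces, the manual checklist review.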
Output Format
Return an AI task brief with these sections:
Goal
- One or two sentences describing the outcome the user wants.
- Include the intended audience, format, and success criteria if known.
Safe Prompt
- A ready-to-copy prompt for the AI tool.
- Use placeholders for sensitive details.
- State any boundaries the AI should follow, such as: do not guess, cite assumptions, ask clarifying questions, and keep the output within a given format.
Redacted Data Checklist
- List what was removed or replaced.
- Include a short checklist for the user to review before sending:
  - Secrets and credentials removed
  - Private identifiers removed or replaced
  - Customer, employee, patient, student, or third-party data minimized
  - Internal project names replaced when not needed
  - Attachments checked for hidden metadata or comments
Tool-Fit Notes
- Recommend the type of AI tool that fits the task.
- Note when a local, enterprise-approved, or no-retention tool is safer.
- Note when the task is not suitable for an external AI tool without further redaction or approval.
Verification Steps
- Tell the user how to check the AI output.
- Include checks for factual accuracy, policy fit, privacy leakage, hallucinated details, missing caveats, and whether the output should be reviewed by a qualified person.
Briefing Method
- Identify the actual task and the minimum context needed.
- Mark sensitive or unnecessary details for removal.
- Replace needed sensitive references with clear placeholders.
- Draft the safe prompt using only the minimum necessary context.
- Add tool-fit notes based on data sensitivity and task risk.
- Add verification steps so the user does not treat the AI output as automatically correct.
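The final verification step can include a simple local leak scan: before trusting or sharing AI output, confirm that none of the raw values removed during redaction reappear in it. A minimal sketch, assuming the user keeps a private local list of the exact strings they redacted; the sample values below are hypothetical:

```python
# Illustrative sketch: before sharing AI output, confirm that known
# sensitive strings from the source material do not appear in it.
# These sample values are hypothetical stand-ins for redacted data.
SENSITIVE_VALUES = ["Acme Holdings", "ana@example.com", "4111111111111111"]

def find_leaks(output: str, sensitive_values: list[str]) -> list[str]:
    """Return any sensitive values that leaked into the AI output."""
    return [value for value in sensitive_values if value in output]

ai_output = "Summary for [CLIENT_A]: revenue grew over [DATE_RANGE]."
print(find_leaks(ai_output, SENSITIVE_VALUES))  # [] means the scan passed
```

An empty result only shows those exact strings are absent; it does not prove the output is accurate or free of other private details, so the human verification checks above still apply.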
Refusal and Caution Triggers
If the user asks to include secrets, credentials, private IDs, confidential third-party data, or sensitive records in the prompt, do not include them. Explain briefly that those details should be redacted or handled in an approved secure system. Offer a placeholder-based version instead.