AI Answer Verification Brief
Overview
Use this skill to help a user verify an AI-generated answer before they rely on it. The skill breaks the answer into checkable claims, separates facts from interpretations, identifies what needs outside confirmation, and produces a short verification brief with confidence levels and unresolved questions.
The goal is not to prove that an answer is right. The goal is to make reliance safer by showing what was checked, what remains uncertain, and what should be verified with authoritative sources.
When to Use
Use this skill when the user asks to:
- verify an AI answer
- fact-check a model response
- check for hallucinations
- review sources behind an answer
- decide whether an AI-generated summary is reliable
- prepare a verification note before using AI output in work, school, research, or a public document
Trigger keywords: AI answer verification, fact check AI, hallucination check, source check, claim ledger, verify this response, is this AI answer reliable, confidence brief
Required Inputs
Ask for only what is needed:
- The AI answer or excerpt to verify
- The user's intended use for the answer
- Any sources already supplied with the answer
- The level of risk if the answer is wrong: low, medium, or high
- Any required source standard, such as official documentation, peer-reviewed research, company records, or recent news
Do not ask for private credentials, hidden documents, or personal data that is not necessary for verification.
Workflow
- Frame the reliance risk. Identify how the user plans to use the answer and classify the consequence of error as low, medium, or high.
- Extract a claim ledger. Split the answer into atomic claims. Label each claim as factual, numerical, causal, quoted, legal/medical/financial, interpretive, or recommendation-based.
- Prioritize what to check. Rank claims by risk, novelty, specificity, and likelihood of hallucination. High-risk claims must be verified first.
- Define source standards. For each priority claim, name the kind of source that would count as acceptable evidence. Prefer primary, official, or current sources where relevant.
- Run a contradiction scan. Look for internal inconsistencies, unsupported leaps, outdated facts, suspiciously precise details, missing dates, and claims that depend on jurisdiction or context.
- Build the verification brief. Summarize checked claims, uncertain claims, source gaps, recommended follow-up searches, and a confidence rating.
- Give a reliance recommendation. State whether the answer is ready to use as-is, usable with caveats, needs more verification, or should not be relied on.
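The ledger-and-prioritization steps above can be sketched as a small data model. This is an illustrative sketch only, not a required implementation; the claim types, risk weights, and specificity scale are assumptions chosen to mirror the workflow, and an implementer may use any equivalent structure.

```python
from dataclasses import dataclass

# Illustrative weights: higher risk and higher specificity mean a claim
# should be checked earlier. These numbers are assumptions, not a standard.
RISK_WEIGHT = {"low": 1, "medium": 2, "high": 3}

@dataclass
class Claim:
    text: str
    claim_type: str               # factual | numerical | causal | quoted |
                                  # legal/medical/financial | interpretive | recommendation
    risk: str = "medium"          # low | medium | high
    specificity: int = 1          # 1 (vague) .. 3 (exact names, numbers, dates)
    status: str = "not checked"   # verified | likely | uncertain | contradicted | not checked
    evidence_needed: str = ""

def priority(claim: Claim) -> int:
    """Score a claim so high-risk, highly specific claims rank first."""
    return RISK_WEIGHT[claim.risk] * claim.specificity

def prioritize(claims: list[Claim]) -> list[Claim]:
    """Return the ledger ordered by descending verification priority."""
    return sorted(claims, key=priority, reverse=True)
```

For example, a dated numerical claim about a tax rule (high risk, high specificity) would outrank an interpretive aside, which matches the workflow's rule that high-risk claims are verified first.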
Output Format
Produce a concise brief with these sections:
- Reliance Context
  - Intended use
  - Risk level
  - Verification standard
- Claim Ledger
  - Claim
  - Claim type
  - Risk level
  - Evidence needed
  - Status: verified, likely, uncertain, contradicted, or not checked
- Red Flags
  - Missing citations
  - Outdated or jurisdiction-sensitive claims
  - Overconfident wording
  - Unsupported numbers, names, quotes, or dates
- Source-Check Plan
  - Best sources to consult
  - Search phrases to use
  - Claims that need primary-source confirmation
- Verification Brief
  - What looks reliable
  - What remains uncertain
  - What should be changed before use
- Confidence Rating
  - High, medium, low, or do not rely
  - One-sentence rationale
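A minimal sketch of assembling these sections into a plain-text brief. The section headings mirror the format above; the sample content and the function name are hypothetical.

```python
def render_brief(sections: dict[str, list[str]]) -> str:
    """Join named sections and their bullet items into a plain-text brief."""
    lines: list[str] = []
    for heading, items in sections.items():
        lines.append(heading)
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)

# Example usage with illustrative content:
brief = render_brief({
    "Reliance Context": ["Intended use: internal memo", "Risk level: medium"],
    "Confidence Rating": ["Medium", "Key numbers not yet checked against primary sources"],
})
```

Keeping the brief as structured sections rather than free prose makes it easy to spot which parts (e.g. the Claim Ledger) are empty or unverified before the user relies on the answer.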
Safety & Compliance
Explicit Boundaries
- No professional certification. This skill does not certify truth, accuracy, safety, compliance, or suitability for any use.
- No substitute for expert review. Legal, medical, financial, safety-critical, academic integrity, and regulatory claims require qualified or authoritative review.
- No fabricated citations. Never invent sources, quotes, page numbers, links, authors, statistics, or publication dates.
- No overclaiming. If a claim has not been checked against an acceptable source, mark it as not checked or uncertain.
- No credential collection. Never ask for passwords, API keys, or private account access; work only from non-sensitive excerpts the user chooses to provide.
- No implied browsing. If no live search or source review occurred, say so clearly.
Additional Safety Notes
- Treat model-generated text as unverified until checked.
- Prefer source categories over specific citations when sources have not actually been reviewed.
- For fast-moving topics, require date-aware verification.
- For jurisdiction-dependent topics, require location or jurisdiction before making a confidence statement.
Acceptance Criteria
- Extracts a claim ledger from the AI answer.
- Separates factual claims, interpretations, and recommendations.
- Prioritizes high-risk and highly specific claims for verification.
- Names acceptable source types without fabricating citations.
- Includes a contradiction and red-flag scan.
- Provides a confidence rating with caveats.
- Clearly states whether live source checking was performed.
- Avoids legal, medical, financial, or safety-critical advice beyond verification framing.
- Never asks for credentials or sensitive account access.
Example
User says: "Can you check whether this AI answer about a new tax rule is reliable?"
Skill response: Build a claim ledger, flag jurisdiction and date dependence, require official tax authority or qualified professional confirmation, mark unchecked claims as uncertain, and produce a verification brief that warns against relying on the answer without expert or official review.