AI Answer Verification Brief

Turns a high-stakes AI-generated answer into a claim ledger, a source-check plan, a contradiction scan, a confidence rating, and a concise verification brief.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "AI Answer Verification Brief" with this command: npx skills add harrylabsj/ai-answer-verification-brief

Overview

Use this skill to help a user verify an AI-generated answer before they rely on it. The skill breaks the answer into checkable claims, separates facts from interpretations, identifies what needs outside confirmation, and produces a short verification brief with confidence levels and unresolved questions.

The goal is not to prove that an answer is right. The goal is to make reliance safer by showing what was checked, what remains uncertain, and what should be verified with authoritative sources.

When to Use

Use this skill when the user asks to:

  • verify an AI answer
  • fact-check a model response
  • check for hallucinations
  • review sources behind an answer
  • decide whether an AI-generated summary is reliable
  • prepare a verification note before using AI output in work, school, research, or a public document

Trigger keywords: AI answer verification, fact check AI, hallucination check, source check, claim ledger, verify this response, is this AI answer reliable, confidence brief

Required Inputs

Ask for only what is needed:

  • The AI answer or excerpt to verify
  • The user's intended use for the answer
  • Any sources already supplied with the answer
  • The level of risk if the answer is wrong: low, medium, or high
  • Any required source standard, such as official documentation, peer-reviewed research, company records, or recent news

Do not ask for private credentials, hidden documents, or personal data that is not necessary for verification.
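
For illustration, these inputs could be captured in a structure like the sketch below. This is a minimal sketch in TypeScript; every name in it (VerificationRequest, riskLevel, sourceStandard) is a hypothetical label, not something the skill defines.

```ts
// Hypothetical input shape for this skill. All names are illustrative;
// the skill itself is prompt-driven and collects these values in conversation.
type RiskLevel = "low" | "medium" | "high";

interface VerificationRequest {
  answerText: string;          // the AI answer or excerpt to verify
  intendedUse: string;         // how the user plans to rely on the answer
  suppliedSources?: string[];  // sources already attached to the answer, if any
  riskLevel: RiskLevel;        // consequence of the answer being wrong
  sourceStandard?: string;     // e.g. official documentation or peer-reviewed research
}
```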

Workflow

  1. Frame the reliance risk. Identify how the user plans to use the answer and classify the consequence of error as low, medium, or high.
  2. Extract a claim ledger. Split the answer into atomic claims. Label each claim as factual, numerical, causal, quoted, legal/medical/financial, interpretive, or recommendation-based.
  3. Prioritize what to check. Rank claims by risk, novelty, specificity, and likelihood of hallucination. High-risk claims must be verified first (see the ledger and scoring sketch after this list).
  4. Define source standards. For each priority claim, name the kind of source that would count as acceptable evidence. Prefer primary, official, or current sources where relevant.
  5. Run a contradiction scan. Look for internal inconsistencies, unsupported leaps, outdated facts, suspiciously precise details, missing dates, and claims that depend on jurisdiction or context.
  6. Build the verification brief. Summarize checked claims, uncertain claims, source gaps, recommended follow-up searches, and a confidence rating.
  7. Give a reliance recommendation. State whether the answer is ready to use as-is, usable with caveats, needs more verification, or should not be relied on.
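
To make steps 2 and 3 concrete, a ledger entry and priority ranking could look roughly like the sketch below. All type names, fields, and weights are assumptions made for this illustration; the skill works in prose, and nothing here is a prescribed implementation.

```ts
// Hypothetical claim-ledger entry, reusing the RiskLevel union from the
// inputs sketch. Labels mirror the claim types named in step 2.
type ClaimType =
  | "factual" | "numerical" | "causal" | "quoted"
  | "legal-medical-financial" | "interpretive" | "recommendation";

type ClaimStatus = "verified" | "likely" | "uncertain" | "contradicted" | "not-checked";

interface ClaimEntry {
  claim: string;          // one atomic, checkable statement
  type: ClaimType;
  riskLevel: RiskLevel;
  evidenceNeeded: string; // the kind of source that would count as confirmation
  status: ClaimStatus;
}

// Rank claims so the riskiest, most specific ones are checked first.
// The weights are arbitrary; they only encode "risk dominates, then
// specificity, then novelty", as step 3 describes.
function priorityScore(entry: ClaimEntry, novelty: number, specificity: number): number {
  const risk = { low: 1, medium: 2, high: 3 }[entry.riskLevel];
  return risk * 4 + specificity * 2 + novelty;
}
```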

Output Format

Produce a concise brief with these sections (a structural sketch follows the list):

  1. Reliance Context
    • Intended use
    • Risk level
    • Verification standard
  2. Claim Ledger
    • Claim
    • Claim type
    • Risk level
    • Evidence needed
    • Status: verified, likely, uncertain, contradicted, or not checked
  3. Red Flags
    • Missing citations
    • Outdated or jurisdiction-sensitive claims
    • Overconfident wording
    • Unsupported numbers, names, quotes, or dates
  4. Source-Check Plan
    • Best sources to consult
    • Search phrases to use
    • Claims that need primary-source confirmation
  5. Verification Brief
    • What looks reliable
    • What remains uncertain
    • What should be changed before use
  6. Confidence Rating
    • High, medium, low, or do not rely
    • One-sentence rationale
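
For a structural view of the same sections, the finished brief could be modeled roughly as below. It reuses the hypothetical ClaimEntry from the Workflow sketch, and every field name here is illustrative rather than prescribed; the skill emits a prose brief, not JSON.

```ts
// Hypothetical shape of the verification brief, mirroring the six
// sections above. ClaimEntry comes from the Workflow sketch.
interface VerificationBrief {
  relianceContext: {
    intendedUse: string;
    riskLevel: RiskLevel;
    verificationStandard: string;
  };
  claimLedger: ClaimEntry[];
  redFlags: string[];                   // missing citations, overconfident wording, etc.
  sourceCheckPlan: {
    bestSources: string[];
    searchPhrases: string[];
    needsPrimaryConfirmation: string[]; // claims requiring primary-source checks
  };
  verificationSummary: {
    looksReliable: string[];
    remainsUncertain: string[];
    changeBeforeUse: string[];
  };
  confidence: {
    rating: "high" | "medium" | "low" | "do-not-rely";
    rationale: string;                  // one sentence
  };
}
```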

Safety & Compliance

Explicit Boundaries

  • No professional certification. This skill does not certify truth, accuracy, safety, compliance, or suitability for any use.
  • No substitute for expert review. Legal, medical, financial, safety-critical, academic integrity, and regulatory claims require qualified or authoritative review.
  • No fabricated citations. Never invent sources, quotes, page numbers, links, authors, statistics, or publication dates.
  • No overclaiming. If a claim has not been checked against an acceptable source, mark it as not checked or uncertain.
  • No credential collection. Do not ask for passwords, API keys, private account access, or confidential data; work only from non-sensitive excerpts the user chooses to share.
  • No hidden browsing claim. If no live search or source review occurred, say so clearly.

Additional Safety Notes

  • Treat model-generated text as unverified until checked.
  • Prefer source categories over specific citations when sources have not actually been reviewed.
  • For fast-moving topics, require date-aware verification.
  • For jurisdiction-dependent topics, require location or jurisdiction before making a confidence statement.

Acceptance Criteria

  1. Extracts a claim ledger from the AI answer.
  2. Separates factual claims, interpretations, and recommendations.
  3. Prioritizes high-risk and highly specific claims for verification.
  4. Names acceptable source types without fabricating citations.
  5. Includes a contradiction and red-flag scan.
  6. Provides a confidence rating with caveats.
  7. Clearly states whether live source checking was performed.
  8. Avoids legal, medical, financial, or safety-critical advice beyond verification framing.
  9. Never asks for credentials or sensitive account access.

Example

User says: "Can you check whether this AI answer about a new tax rule is reliable?"

Skill response: Build a claim ledger, flag jurisdiction and date dependence, require official tax authority or qualified professional confirmation, mark unchecked claims as uncertain, and produce a verification brief that warns against relying on the answer without expert or official review.
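
One ledger entry from this scenario might look like the sketch below; the claim text and evidence requirement are invented purely for illustration, and ClaimEntry is the hypothetical type from the Workflow sketch.

```ts
// Invented example entry; a real run would use claims taken verbatim
// from the answer being verified.
const taxClaim: ClaimEntry = {
  claim: "The new deduction cap takes effect in the 2025 tax year.",
  type: "legal-medical-financial",
  riskLevel: "high",
  evidenceNeeded: "official tax authority publication for the user's jurisdiction",
  status: "not-checked", // no live source review has occurred yet
};
```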
