scientific-inquiry

For propositions outside one's direct expertise, rapidly build evidence chains, assess source credibility, and deliver conclusions with explicit confidence levels. Includes a mandatory user confirmation step to prevent misalignment. Self-evolves based on user corrections and optimizations from each session.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "scientific-inquiry" with this command: npx skills add liwenyu2002/scientific-inquiry

🧪 Scientific Inquiry

Triggers

This skill MUST be triggered when the user asks any of the following:

  • Fact-checking: "Is X true?", "Is X reliable?", "Is this claim correct?"
  • Data research: "What's the trend/change/distribution of X?", "Look up data on X", "What's the price of X right now?"
  • Industry research: "How's the X market?", "Analyze the X industry", "Current state of X"
  • Claim verification: "I heard X, do you think it's reliable?", "Does this conclusion hold up?"
  • Comparative evaluation: "Which is better, X or Y?", "What's the difference between X and Y?"
  • User explicitly mentions "verify", "research", "investigate", "analyze", "validate", "look into"

Even if the question seems simple (e.g., "look up a data point"), trigger this skill whenever systematic information gathering is involved.
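The trigger conditions above can be sketched as a simple keyword check. This is a minimal illustration, assuming plain substring matching; the keyword list and function name are not part of the skill itself:

```python
# Hypothetical sketch of the skill's trigger check. The keyword list
# mirrors the trigger phrases listed above; real matching would likely
# be more nuanced (intent classification rather than substrings).
TRIGGER_KEYWORDS = {
    "verify", "research", "investigate", "analyze", "validate", "look into",
    "fact-check", "is it true", "is it reliable", "trend", "market",
}

def should_trigger(user_message: str) -> bool:
    """Return True when the message contains any inquiry trigger keyword."""
    text = user_message.lower()
    return any(keyword in text for keyword in TRIGGER_KEYWORDS)
```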

Core Workflow

Step 0: Problem Framing → User Confirmation (Critical! Prevents misalignment)

Upon receiving a question, do NOT search yet. Output a research plan template first:

📋 Research Plan

Question: [Restate the original proposition to ensure alignment]

Research type: Fact-check / Data research / Industry research / Comparative evaluation / Trend analysis

Sub-question breakdown:

  1. [Sub-proposition A] — Verifiability: High/Medium/Low → [Expected evidence source]
  2. [Sub-proposition B] — Verifiability: High/Medium/Low → [Expected evidence source]
  3. [Sub-proposition C] — Verifiability: High/Medium/Low → [Expected evidence source]

Research method:

  • Primary search path: [Specific search tool / API / database]
  • Keyword strategy: [Key search terms]
  • Fallback if critical data unavailable: [Alternative approach / estimation method]

Expected output:

  • Anticipated confidence level: High / Medium / Low
  • Main uncertainty factors: [Expected blind spots]

✅ Ready to proceed with this direction? ↳ Waiting for your confirmation before executing Steps 1–4

Do not invoke any search tools until the user confirms.

Step 1: Decompose into Sub-questions

Break down vague questions into verifiable atomic statements. Label each sub-proposition with:

  • Verifiability: High (public data / literature exists) / Medium (indirect evidence available) / Low (almost no public information)
  • Evidence type: Quantitative (specific numbers) / Qualitative (trend assessment)
  • Source direction: e.g., customs data, academic papers, media reports, forum discussions
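The three labels above form a small data model for each atomic statement. A minimal sketch, with illustrative field and class names that are assumptions rather than part of the skill:

```python
# Hypothetical data model for a decomposed sub-proposition.
# Field names mirror the labels described above.
from dataclasses import dataclass
from typing import Literal

@dataclass
class SubProposition:
    statement: str
    verifiability: Literal["High", "Medium", "Low"]
    evidence_type: Literal["Quantitative", "Qualitative"]
    source_direction: str  # e.g. "customs data", "academic papers"

# Example: one atomic statement extracted from a vague question
sub = SubProposition(
    statement="EV exports grew year over year",
    verifiability="High",
    evidence_type="Quantitative",
    source_direction="customs data",
)
```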

Step 2: Evidence Search (Classify & Grade)

Every piece of evidence found must be labeled with source and grade:

S-tier: Primary academic literature / Official statistics / Raw data APIs
A-tier: Authoritative media reports / Think-tank reports / Well-cited secondary sources
B-tier: Industry analysis / Forum discussions / Indirect data
C-tier: Self-media / Single samples / Non-expert interpretations
D-tier: No source / Hearsay / Obvious conflicts of interest

Prioritize S/A-tier evidence during search. B/C-tier is supplementary only. After searching, compile an evidence table with: Evidence content | Source | Source link (full URL) | Grade | Related sub-proposition

Source links are mandatory — do not omit them. Even for web search results, provide the source page URL. If multiple items point to different articles on the same site, give each its own independent URL.
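The evidence table and tier ordering can be sketched as follows; the class and helper names are illustrative assumptions, but the tier precedence (S before A before B/C/D) follows the grading above:

```python
# Illustrative sketch of the evidence table, with tiers ordered so that
# S/A-tier items sort ahead of supplementary B/C/D-tier items.
from dataclasses import dataclass

TIER_ORDER = {"S": 0, "A": 1, "B": 2, "C": 3, "D": 4}

@dataclass
class Evidence:
    content: str
    source: str
    url: str               # full source URL is mandatory
    grade: str             # one of "S", "A", "B", "C", "D"
    sub_proposition: str   # which sub-proposition it supports

def prioritize(evidence: list[Evidence]) -> list[Evidence]:
    """Sort evidence so higher-tier items come first."""
    return sorted(evidence, key=lambda e: TIER_ORDER[e.grade])

items = [
    Evidence("forum claim", "forum", "https://example.com/a", "C", "P1"),
    Evidence("official stats", "stats office", "https://example.com/b", "S", "P1"),
]
ranked = prioritize(items)  # S-tier item now comes first
```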

Step 3: Cross-Validation

For each sub-proposition:

  • At least 2 independent evidence sources
  • Label relationships between evidence: Consistent / Contradictory / Complementary
  • If contradictory, analyze possible causes (methodology differences / conflicts of interest / different time windows / sample bias)
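The two-source requirement above is mechanically checkable. A minimal sketch, assuming evidence rows shaped like the table from Step 2 (the function name and dict keys are illustrative):

```python
# Minimal sketch of the cross-validation gate: flag any sub-proposition
# backed by fewer than two distinct (independent) sources.
from collections import defaultdict

def under_sourced(evidence_rows: list[dict]) -> list[str]:
    """Return sub-propositions with fewer than 2 distinct sources."""
    sources = defaultdict(set)
    for row in evidence_rows:
        sources[row["sub_proposition"]].add(row["source"])
    return [prop for prop, srcs in sources.items() if len(srcs) < 2]

rows = [
    {"sub_proposition": "P1", "source": "customs data"},
    {"sub_proposition": "P1", "source": "industry report"},
    {"sub_proposition": "P2", "source": "forum post"},
]
# P2 has only one source, so it needs further searching
```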

Step 4: Conclusion Output

Proposition: [One-sentence restatement]

Conclusion:
  High confidence (≥80%) → Direct assertion
  Medium confidence (50–80%) → Main judgment + key uncertainties
  Low confidence (<50%) → Best guess + what information is needed to reach a conclusion

Key evidence (Top-3, with source links):
  1. [Evidence A] — S-tier — [Source Name](full URL)
  2. [Evidence B] — A-tier — [Source Name](full URL)
  3. [Evidence C] — B-tier — [Source Name](full URL)

Core uncertainties (if any):
  - [Uncertainty 1]
  - [Uncertainty 2]
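The confidence bands in the template above map cleanly to output styles. A sketch with the same thresholds (the function name is an illustrative assumption):

```python
# Sketch of the confidence-band mapping used in the conclusion template.
# Thresholds follow the bands above: >=80 high, 50-80 medium, <50 low.
def confidence_band(confidence: float) -> str:
    """Map a 0-100 confidence score to a conclusion style."""
    if confidence >= 80:
        return "High confidence: direct assertion"
    if confidence >= 50:
        return "Medium confidence: main judgment + key uncertainties"
    return "Low confidence: best guess + information needed to conclude"
```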

Self-Evolution Rules

After each use of this skill, proactively check for the following triggers and update immediately:

  1. Direction correction: "This sub-question isn't the priority", "Different angle" → Update the decomposition template, adding the new angle as an example

  2. Evidence standards: "This source isn't enough", "I want more official data", "Don't use X" → Update evidence grading criteria or preferred search paths

  3. Format preferences: "The conclusion is too long", "Give me a short conclusion first", "Output in X format" → Update the conclusion output template, add format options

  4. New scenarios: "Not just fact-checking, also data research", "Also X type" → Update trigger conditions and scenario type table

  5. Methodology improvements: "You should present a plan first before executing", "You missed X" → Update workflow description

  6. User repeatedly corrects the same type of issue (≥2 occurrences of the same mistake) → Add to "Common Pitfalls"

How to update: Use skill_manage(action='patch', name='scientific-inquiry', old_string=..., new_string=...) to patch directly. Never make the same mistake twice. After updating, record the change in the changelog at the bottom of this file for traceability.

Scenario Types (expand as needed)

| Scenario | Characteristics | Watch out for |
| --- | --- | --- |
| Fact-check | Verify the truth of a specific claim | Prioritize original sources; beware of second-hand retellings |
| Trend analysis | Assess the future direction of a metric | Distinguish short-term fluctuation from long-term trend; note data windows |
| Comparative analysis | Compare merits of two options | Ensure comprehensive comparison dimensions; avoid survivorship bias |
| Attribution analysis | Determine whether A caused B | Distinguish correlation from causation; watch for confounding variables |

Quality Checklist

  • Step 0 research plan output and user confirmation obtained?
  • Each sub-proposition backed by at least two independent evidence sources (per Step 3)?
  • Every piece of evidence labeled with a grade?
  • Contradictory evidence analyzed for possible causes?
  • Conclusion includes confidence level and uncertainties?
  • User corrections from this session have been applied to the skill?

Common Pitfalls

  • Never skip the confirmation step: Even if the path seems obvious, Step 0 is mandatory. A wrong direction wastes more time than a quick confirmation.
  • Don't only seek supporting evidence: Actively search for opposing views to avoid confirmation bias.
  • Distinguish "no evidence found" from "evidence disproves": Not finding something doesn't mean it doesn't exist. Label as "not found" rather than "disproven".
  • Mind data timeliness: Especially for prices, policies, and time-sensitive data — always note when the data was collected.
  • Report progress during long searches: If a search exceeds 5 steps, sync progress with the user after each step. Don't go silent.

Usage Examples

(To be expanded — add typical workflows here after each use for reference)


Changelog

| Date | Change | Reason |
| --- | --- | --- |
| 2026-05-09 | Initial creation | User requested systematic inquiry capability |
| 2026-05-09 | Added Step 0 (plan confirmation phase) | User feedback: "align direction before executing" |
| 2026-05-09 | Expanded trigger conditions: data research, industry research, etc. | User feedback: "not just fact-checking, also data research" |
| 2026-05-09 | Refined research plan template: research types, search paths, fallback methods | User requested clearer research methodology description |
| 2026-05-09 | Strengthened self-evolution rules: 6 trigger types + concrete update methods | User requested: "proactively evolve the skill with each session's iterative optimizations" |
