
Claims Extractor (peer review)

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.


Install the skill with this command: npx skills add willoscar/research-units-pipeline-skills/willoscar-research-units-pipeline-skills-claims-extractor


Goal: turn a manuscript into an auditable list of claims that downstream skills can check.

Inputs

Required:

  • output/PAPER.md (or equivalent plain-text manuscript)

Optional:

  • DECISIONS.md (review scope or constraints)

Outputs

  • output/CLAIMS.md

Output format (recommended)

For each claim, include at minimum:

  • Claim: one sentence

  • Type: empirical | conceptual

  • Scope: what the claim applies to / what it does not apply to

  • Source: a locatable pointer into output/PAPER.md (section + page/figure/table + a short quote)
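For instance, a single entry might look like this (the claim, numbers, and dataset are invented for illustration):

```
  • Claim: Pruning to 50% sparsity preserves top-1 accuracy within 0.5 points.
  • Type: empirical
  • Scope: image classifiers on CIFAR-10; not evaluated on NLP tasks
  • Source: Sec 4.3, Table 2, "accuracy drops by at most 0.4 points"
```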

Workflow

  • If DECISIONS.md exists, apply any review scope/format constraints.

  • Read the manuscript (output/PAPER.md) end-to-end (at least abstract + intro + method + experiments + limitations).

  • Extract:
      • primary contributions (what is new)
      • key claims (what is asserted)
      • assumptions (what must be true for the claims to hold)

  • Normalize each item into one sentence.

  • Attach a source pointer to every item.

  • Split into two sections:
      • Empirical claims (must be backed by experiments/data)
      • Conceptual claims (must be backed by argument/definition)
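The two-section split at the end of the workflow can be scaffolded before extraction begins. A minimal sketch (paths follow the Inputs/Outputs above; the heading text is a suggestion, not mandated by the skill):

```shell
# Scaffold output/CLAIMS.md with the two required sections,
# ready to be filled in during extraction.
mkdir -p output
cat > output/CLAIMS.md <<'EOF'
# Claims extracted from output/PAPER.md

## Empirical claims
(must be backed by experiments/data)

## Conceptual claims
(must be backed by argument/definition)
EOF
echo "wrote output/CLAIMS.md"
```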

Definition of Done

  • output/CLAIMS.md exists.

  • Every claim has a source pointer that can be located in output/PAPER.md.

  • Empirical vs conceptual claims are clearly separated.
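These conditions can be checked mechanically. A rough sketch, assuming the "Claim:" / "Source:" labels from the recommended format above — the sample file written here is invented purely so the check has something to run against:

```shell
# Verify output/CLAIMS.md exists and that claims and source
# pointers come in matched pairs. Demo file stands in for a real run.
mkdir -p output
cat > output/CLAIMS.md <<'EOF'
## Empirical claims
  • Claim: Method X improves F1 by 2.1 points on dataset D.
  • Source: Sec 5.2, Table 4, "F1 rises from 81.3 to 83.4"

## Conceptual claims
  • Claim: The proposed loss generalizes cross-entropy.
  • Source: Sec 3.1, Eq. 5
EOF

test -f output/CLAIMS.md || { echo "missing output/CLAIMS.md"; exit 1; }
claims=$(grep -c 'Claim:' output/CLAIMS.md)
sources=$(grep -c 'Source:' output/CLAIMS.md)
echo "claims=$claims sources=$sources"
[ "$claims" -eq "$sources" ] && echo "OK: every claim has a source pointer"
```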

Troubleshooting

Issue: the paper is only a PDF or HTML

Fix:

  • Convert/extract it into a plain-text output/PAPER.md first (even rough extraction is OK), then run claim extraction.
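One way to do the conversion, as a sketch: pdftotext (from poppler-utils) for PDFs, and a crude tag-stripping fallback for HTML. The file names paper.pdf and paper.html are placeholders for your actual source file:

```shell
mkdir -p output
# PDF route (requires poppler-utils):
#   pdftotext -layout paper.pdf output/PAPER.md
# Crude HTML route: strip tags with sed. Lossy, but rough
# extraction is enough to start claim extraction.
printf '<h1>Title</h1><p>We claim X improves Y.</p>\n' > paper.html
sed -e 's/<[^>]*>//g' paper.html > output/PAPER.md
cat output/PAPER.md
```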

Issue: claims are vague (“significant”, “better”, “state-of-the-art”)

Fix:

  • Rewrite each claim to include the measurable dimension (metric/dataset/baseline) or mark it as “underspecified” with a note.
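For example (the metric, dataset, and baseline names are invented):

```
Before:  "Our method performs significantly better."
After:   "Our method improves F1 by 2.1 points over BERT-base on SQuAD 1.1
         (Sec 5, Table 2)."
Or mark: "Our method performs significantly better."
         [underspecified: no metric, dataset, or baseline given]
```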

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Research (no summaries provided by the upstream source; each is marked "Needs Review"):

  • pdf-text-extractor

  • latex-compile-qa

  • draft-polisher