
Paper Audit Skill v2.0

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running.


Install skill "paper-audit" with this command: npx skills add bahayonghang/academic-writing-skills/bahayonghang-academic-writing-skills-paper-audit


Unified academic paper auditing across formats (LaTeX, Typst, PDF) and languages (English, Chinese). Runs automated checks, computes dimension scores, and optionally dispatches multi-perspective review agents.

Capability Summary

  • Run automated paper checks across .tex, .typ, and .pdf inputs.

  • Produce self-check, peer-review, gate, polish, and re-audit outputs with explicit severity and priority labels.

  • Combine script findings, venue-specific checklist items, and optional agent synthesis into one report flow.

  • Reuse sibling writing-skill scripts for LaTeX and Typst inputs instead of re-implementing duplicate checks.

Triggering

Use this skill when the user wants to:

  • run a pre-submission readiness audit

  • simulate a reviewer-style critique

  • make a pass/fail submission gate decision

  • compare a revised paper against a previous audit

  • audit a PDF when the source format is unavailable

Trigger it even when the user only says “check my paper”, “review this submission”, “is this ready to submit?”, or “re-audit against the old report”.

Do Not Use

  • fixing the paper source as a first step while the project still fails to compile

  • full literature research or survey drafting

  • writing a paper from scratch

  • template-specific LaTeX or Typst editing when the user wants direct source surgery instead of an audit report

Critical Rules

  • NEVER modify \cite{}, \ref{}, \label{}, math environments, or any content listed in $SKILL_DIR/references/FORBIDDEN_TERMS.md

  • NEVER fabricate bibliography entries; only verify existing .bib or .yml files

  • NEVER change domain terminology without explicit user confirmation

  • ALWAYS distinguish [Script] (automated) findings from [LLM] (agent judgment) assessments in output

  • All dimension scores from scripts are indicators, not definitive judgments

Mode Selection Guide

| Mode | When to Use | Output | Speed |
|---|---|---|---|
| self-check | Pre-submission readiness check | Scores + issues + checklist | ~30s |
| review | Simulate multi-perspective peer review | Agent review reports + synthesis + revision roadmap | ~2min |
| gate | CI/CD quality gate, binary pass/fail | PASS/FAIL verdict + blocking issues | ~15s |
| polish | Expression refinement via agents | Precheck JSON + Critic/Mentor agent dispatch | ~1min+ |
| re-audit | Verify revisions against prior report | Verification checklist + new issues + score delta | ~1min |

Mode Selection Logic

"Check my paper" -> self-check "Review my paper" / "peer review" -> review "Is this ready to submit?" -> gate "Polish the writing" -> polish "Did I fix the issues?" / "re-check" -> re-audit
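The mapping above can be sketched as a first-match keyword lookup. The keyword list and helper below are illustrative, not the script's actual inference logic:

```python
# Illustrative sketch of the mode-selection mapping; audit.py may
# infer modes differently.
MODE_KEYWORDS = [
    ("re-audit", "re-audit"),
    ("re-check", "re-audit"),
    ("fix the issues", "re-audit"),
    ("peer review", "review"),
    ("ready to submit", "gate"),
    ("polish", "polish"),
    ("review", "review"),
    ("check", "self-check"),
]

def infer_mode(request: str) -> str:
    """Return the first mode whose keyword appears in the request."""
    text = request.lower()
    for keyword, mode in MODE_KEYWORDS:
        if keyword in text:
            return mode
    return "self-check"  # default when no keyword matches
```

Order matters: more specific phrases ("peer review", "ready to submit") are checked before generic ones ("review", "check").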

Steps

All Modes (Common)

  • Parse $ARGUMENTS for file path and mode. If missing, ask the user for the target .tex, .typ, or .pdf file.

  • Read $SKILL_DIR/references/REVIEW_CRITERIA.md for scoring framework.

  • Read $SKILL_DIR/references/CHECKLIST.md for universal + venue-specific checklist items.

  • Run the orchestrator: uv run python -B "$SKILL_DIR/scripts/audit.py" $ARGUMENTS.

  • Present the Markdown report directly to the user.

Self-Check Mode

  • Review scores and highlight any Critical/P0 issues that block submission.

  • If --scholar-eval is present, read $SKILL_DIR/references/SCHOLAR_EVAL_GUIDE.md and formulate LLM assessments for Novelty, Significance, Ethics, and Reproducibility. Provide as --llm-json on a second run.

Review Mode (Multi-Perspective)

  • Read $SKILL_DIR/references/SCHOLAR_EVAL_GUIDE.md for LLM assessment dimensions.

  • Read $SKILL_DIR/references/quality_rubrics.md for scoring anchors and decision mapping.

  • Phase 0 (automated): The script output provides automated findings and scores.

  • Phase 1 (agents): For each agent in $SKILL_DIR/agents/:

  • Read the agent definition file for persona and protocol.

  • Dispatch a Task with: agent definition + paper content + Phase 0 results as context.

  • Agents: methodology_reviewer_agent.md, domain_reviewer_agent.md, critical_reviewer_agent.md.

  • Phase 2 (synthesis): Read $SKILL_DIR/agents/synthesis_agent.md and dispatch a Task to consolidate all reviews.

  • Input: Phase 0 automated results + Phase 1 agent reviews.

  • Output: Consensus classification, merged scores, final review report, revision roadmap.

  • Read $SKILL_DIR/templates/review_report_template.md for output structure.

  • Present synthesized report following the template format.

Gate Mode

  • Report PASS or FAIL based on: zero Critical issues AND all checklist items pass.

  • List blocking issues (Critical only) and failed checklist items.
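The gate rule reduces to a small predicate. The issue and checklist structures below are illustrative, not the script's real data model:

```python
from dataclasses import dataclass

@dataclass
class Issue:
    severity: str  # "Critical" | "Major" | "Minor"
    message: str

def gate_verdict(issues, checklist_results):
    """PASS only if there are zero Critical issues and every checklist item passed."""
    blocking = [i for i in issues if i.severity == "Critical"]
    failed = [name for name, ok in checklist_results.items() if not ok]
    verdict = "PASS" if not blocking and not failed else "FAIL"
    return verdict, blocking, failed
```

Note that Major and Minor issues never block the gate; only Critical issues and failed checklist items do.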

Polish Mode

  • Read .polish-state/precheck.json generated by the script.

  • If blockers detected, report them and ask user to resolve before polishing.

  • Read $SKILL_DIR/references/POLISH_GUIDE.md for style targets and critic protocol.

  • Spawn nested tasks for the Critic Agent and Mentor Agents as defined in the polish workflow.

Re-Audit Mode

  • Requires --previous-report PATH pointing to a prior audit report.

  • Script runs fresh checks and compares against previous findings.

  • Present verification checklist: each prior issue classified as FULLY_ADDRESSED / PARTIALLY_ADDRESSED / NOT_ADDRESSED.

  • Report any NEW issues introduced during revision.

  • Show score comparison (before vs after).

Required Inputs

  • A target .tex, .typ, or .pdf file.

  • An audit mode, or enough intent to infer one from the mode-selection guide.

  • Optional venue or journal context when the checklist should be venue-specific.

  • Optional --previous-report PATH for re-audit.

If the user omits the mode, infer it using the selection guide and state the assumption before running the audit.

Output Contract

  • Always return a report, not raw script output.

  • Keep [Script] and [LLM] findings visibly separated.

  • Include the selected mode, target file, and venue context near the top of the report.

  • For blocking failures, list the exact blocking issue(s) and failed checklist items first.

  • When a script or nested agent step fails, report the command, exit code, and what coverage was skipped.

  • Preserve the source; this skill audits and synthesizes, it does not rewrite the paper by default.

Venue-Specific Behavior

When --venue (or --journal) is specified, the audit adds venue-specific checks:

| Venue | Key Rules |
|---|---|
| neurips | 9-page limit, broader impact statement, paper checklist, double-blind |
| iclr | 10-page limit, reproducibility statement, double-blind |
| icml | 8-page limit, impact statement, 50MB supplementary limit |
| ieee | Abstract <=250 words, 3-5 keywords, >=300 DPI figures |
| acm | CCS concepts required, acmart class, rights management |
| thesis-zh | GB/T 7714-2015 bibliography, bilingual abstract, university template |

Without --venue , only universal checklist items apply.
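Page limits like those above are easy to encode as data. The dict restates values from the table, but the layout and helper are an illustrative sketch, not the skill's actual check implementation:

```python
# Page limits restated from the venue table above; the helper itself
# is hypothetical.
VENUE_PAGE_LIMITS = {"neurips": 9, "iclr": 10, "icml": 8}

def check_page_limit(venue: str, page_count: int):
    """Return a formatted finding if the venue has a page limit and it is exceeded."""
    limit = VENUE_PAGE_LIMITS.get(venue)
    if limit is not None and page_count > limit:
        return (f"[Severity: Critical] [Priority: P0]: "
                f"{page_count} pages exceeds the {venue} limit of {limit}")
    return None  # no limit defined for this venue, or within limit
```

Venues without a hard page limit in the table (e.g. ieee, acm) simply have no entry, so the check is skipped for them.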

Output Protocol

Issue Format

[Severity: Critical|Major|Minor] [Priority: P0|P1|P2]: message (Line N)
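Lines in this format are machine-parseable; here is a sketch of one way to do it (the skill's own report tooling may parse differently):

```python
import re

# Regex mirroring the issue format above; the trailing "(Line N)" is optional.
ISSUE_RE = re.compile(
    r"\[Severity: (?P<severity>Critical|Major|Minor)\] "
    r"\[Priority: (?P<priority>P0|P1|P2)\]: "
    r"(?P<message>.+?)(?: \(Line (?P<line>\d+)\))?$"
)

def parse_issue(text: str):
    """Parse one formatted issue line into its fields, or return None."""
    m = ISSUE_RE.match(text)
    if not m:
        return None
    fields = m.groupdict()
    fields["line"] = int(fields["line"]) if fields["line"] else None
    return fields
```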

Severity Definitions

| Severity | Impact | Score Deduction (4-dim) |
|---|---|---|
| Critical | Blocks submission | -1.5 per issue |
| Major | Significant quality concern | -0.75 per issue |
| Minor | Style/formatting improvement | -0.25 per issue |
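Taken literally, the deduction scheme above subtracts a fixed penalty per issue from the 6.0 base and clamps at the scale floor. A minimal sketch (the real scripts attribute issues to specific dimensions before deducting):

```python
# Per-issue deductions from the severity table above.
DEDUCTIONS = {"Critical": 1.5, "Major": 0.75, "Minor": 0.25}

def dimension_score(severities, base=6.0, floor=1.0):
    """Apply the deductions for each issue and clamp to the 1.0-6.0 scale."""
    score = base - sum(DEDUCTIONS[s] for s in severities)
    return max(floor, score)
```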

Source Labeling

  • [Script] — Automated check result (objective, reproducible)

  • [LLM] — Agent/LLM judgment (subjective, evidence-based)

Scoring Systems

4-Dimension Score (1.0-6.0, base 6.0 with deductions)

| Dimension | Weight | Primary Checks |
|---|---|---|
| Quality | 30% | logic, bib, gbt7714 |
| Clarity | 30% | format, grammar, sentences, consistency, references, visual, figures |
| Significance | 20% | logic, checklist |
| Originality | 20% | deai, checklist |

8-Dimension ScholarEval (1.0-10.0, optional via --scholar-eval )

| Dimension | Weight | Source |
|---|---|---|
| Soundness | 20% | Script |
| Clarity | 15% | Script |
| Presentation | 10% | Script |
| Novelty | 15% | LLM |
| Significance | 15% | LLM |
| Reproducibility | 10% | Mixed |
| Ethics | 5% | LLM |
| Overall | 10% | Computed |

See $SKILL_DIR/references/quality_rubrics.md for score-level descriptors and decision mapping.
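One plausible reading of the weights above is that the seven assessed dimensions are combined as a renormalized weighted average, which fills the computed Overall slot. A sketch under that assumption (scholar_eval.py may combine them differently):

```python
# Assumed combination rule, not documented behavior: average the seven
# assessed dimensions with the weights above, renormalized over the
# dimensions actually scored.
WEIGHTS = {
    "Soundness": 0.20, "Clarity": 0.15, "Presentation": 0.10,
    "Novelty": 0.15, "Significance": 0.15, "Reproducibility": 0.10,
    "Ethics": 0.05,
}

def scholar_overall(scores: dict) -> float:
    """Weighted average of the assessed dimensions on the 1.0-10.0 scale."""
    total_weight = sum(WEIGHTS[d] for d in scores)
    weighted = sum(WEIGHTS[d] * s for d, s in scores.items())
    return round(weighted / total_weight, 2)
```

Renormalizing over scored dimensions lets the computation degrade gracefully when LLM dimensions are still N/A (i.e. before --llm-json is supplied).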

Integration with Sibling Skills

Paper-audit reuses check scripts from sibling skills via format-based routing:

| Format | Script Source | Checks Available |
|---|---|---|
| .tex (English) | latex-paper-en/scripts/ | format, grammar, logic, sentences, deai, bib, figures |
| .tex (Chinese) | latex-thesis-zh/scripts/ (primary), latex-paper-en/scripts/ (fallback) | consistency, gbt7714 |
| .typ | typst-paper/scripts/ | format, grammar, logic, sentences, deai |
| .pdf | paper-audit/scripts/ only | visual, pdf_parser (no format/bib/figures checks) |

Scripts that live in paper-audit itself: audit.py , check_references.py , visual_check.py , pdf_parser.py , detect_language.py , parsers.py , report_generator.py , scholar_eval.py .
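The routing above keys off the file extension (plus detected language for .tex). The directory strings come from the table; the function and its is_chinese flag are hypothetical, standing in for detect_language.py:

```python
from pathlib import Path

def route_scripts(path: str, is_chinese: bool = False) -> str:
    """Pick the sibling-skill script directory for a given input file."""
    suffix = Path(path).suffix
    if suffix == ".tex":
        # Chinese theses route to latex-thesis-zh first, per the table above.
        return "latex-thesis-zh/scripts/" if is_chinese else "latex-paper-en/scripts/"
    if suffix == ".typ":
        return "typst-paper/scripts/"
    if suffix == ".pdf":
        return "paper-audit/scripts/"
    raise ValueError(f"unsupported input format: {suffix or path}")
```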

Agent References

| Agent | Definition File | Role |
|---|---|---|
| Methodology Reviewer | $SKILL_DIR/agents/methodology_reviewer_agent.md | Research design, statistical rigor, reproducibility |
| Domain Reviewer | $SKILL_DIR/agents/domain_reviewer_agent.md | Literature coverage, theoretical framework, contribution |
| Critical Reviewer | $SKILL_DIR/agents/critical_reviewer_agent.md | Core argument challenges, logical fallacies, overclaims |
| Synthesis Agent | $SKILL_DIR/agents/synthesis_agent.md | Consolidate reviews, consensus classification, revision roadmap |

Reference Files

| Reference | Purpose | Used By |
|---|---|---|
| references/REVIEW_CRITERIA.md | 4-dimension scoring framework | All modes |
| references/CHECKLIST.md | Universal + venue-specific checklists | self-check, gate |
| references/SCHOLAR_EVAL_GUIDE.md | 8-dimension ScholarEval scoring guide | review (with --scholar-eval) |
| references/quality_rubrics.md | Score-level descriptors and decision mapping | review, self-check |
| references/AUDIT_GUIDE.md | User guide for modes and report interpretation | Reference |
| references/POLISH_GUIDE.md | Style targets and critic/mentor protocol | polish |
| references/FORBIDDEN_TERMS.md | Protected content (citations, math, terminology) | All modes |
| references/QUICK_REFERENCE.md | Check support matrix and CLI quick reference | Reference |
| references/editorial_decision_standards.md | Consensus rules and decision matrix | review (synthesis) |

Example Requests

  • “Run a self-check on paper.tex and tell me what blocks submission.”

  • “Review this paper like a harsh reviewer and give me a revision roadmap.”

  • “Is paper.pdf ready to submit to IEEE, or does it fail the gate?”

  • “Re-audit this revision against my previous report and tell me which issues are still open.”

Templates

| Template | Purpose |
|---|---|
| templates/audit_report_template.md | Output structure for self-check/gate |
| templates/review_report_template.md | Output structure for multi-perspective review |
| templates/revision_roadmap_template.md | Prioritized revision action plan |

Quality Standards

| Dimension | Requirement |
|---|---|
| Evidence-based | Every weakness must cite specific text, line, or section from the paper |
| Specificity | Avoid vague comments; provide exact locations and concrete suggestions |
| Balance | Report both strengths and weaknesses; never only criticize |
| Actionability | Each issue must include a specific improvement suggestion |
| Source transparency | Always label findings as [Script] or [LLM] |
| Format consistency | All reports follow the corresponding template structure |
| Constructive tone | Professional and helpful; avoid dismissive language |

Examples

Self-Check

```shell
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.tex --mode self-check
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.tex --mode self-check --journal neurips
```

Review (Multi-Perspective)

```shell
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.tex --mode review --scholar-eval
```

Then follow the Review Mode steps above to dispatch the review agents and synthesize their reports.

Gate (CI/CD)

```shell
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.tex --mode gate --journal ieee --format json
```
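In CI, the --format json output can be consumed by a small wrapper that fails the build on a FAIL verdict. The field names below (verdict, blocking_issues) are assumptions about the JSON schema, not documented output:

```python
import json
import sys

def ci_gate(report_json: str) -> int:
    """Return a shell-style exit code from a gate report; field names are assumed."""
    report = json.loads(report_json)
    if report.get("verdict") == "PASS":
        return 0
    # Surface the blocking issues in the CI log before failing.
    for issue in report.get("blocking_issues", []):
        print(f"BLOCKING: {issue}", file=sys.stderr)
    return 1
```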

Polish

```shell
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.tex --mode polish
```

Re-Audit

```shell
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.tex --mode re-audit --previous-report report_v1.md
```

PDF Input

```shell
uv run python -B "$SKILL_DIR/scripts/audit.py" paper.pdf --mode self-check --pdf-mode enhanced
```

See $SKILL_DIR/examples/ for complete output examples.

Troubleshooting

| Problem | Solution |
|---|---|
| No file path provided | Ask user for a valid .tex, .typ, or .pdf file |
| Script execution fails | Report the command, exit code, and stderr output |
| Missing sibling skill scripts | Check that latex-paper-en/scripts/, latex-thesis-zh/scripts/, or typst-paper/scripts/ exist |
| PDF checks limited | PDF mode skips format/bib/figures checks; only visual and content analysis are available |
| --venue not recognized | Use one of: neurips, iclr, icml, ieee, acm, thesis-zh |
| ScholarEval LLM dimensions show N/A | Run with --scholar-eval, then provide LLM scores via --llm-json |
| Re-audit missing previous report | Provide --previous-report PATH pointing to the prior audit output |

Changelog

| Version | Date | Changes |
|---|---|---|
| 2.0 | 2026-03-11 | Full rewrite: venue filtering, multi-perspective review agents, re-audit mode, templates, examples, quality rubrics |
| 1.0 | 2026-03 | Initial version: 4 modes, script-based audit, 4-dim + 8-dim scoring |

