minimal-run-and-audit

Sub-skill for the execution-evidence and reporting phase of README-first AI repo reproduction. Use it when the task is specifically to capture or normalize evidence from the selected smoke test, documented inference command, or documented evaluation command, and to write standardized `repro_outputs/` files, including patch notes when repository files changed. Do not use it for initial repo intake, generic environment setup, paper lookup, target selection, or end-to-end orchestration on its own.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "minimal-run-and-audit" by sending this command to your AI assistant:

npx skills add lllllllama/ai-paper-reproduction-skill/lllllllama-ai-paper-reproduction-skill-minimal-run-and-audit


When to apply

  • After a reproduction target and setup plan exist.
  • When the main skill needs execution evidence and normalized outputs.
  • When a smoke test, documented inference run, documented evaluation run, or training startup verification is appropriate.
  • When the user already knows what command should be attempted and wants execution plus reporting only.

When not to apply

  • During initial repo scanning.
  • When the environment or required assets are still too undefined for execution to produce meaningful evidence.
  • When the task is a literature lookup rather than repository execution.
  • When the user is still deciding which reproduction target should count as the main run.

Clear boundaries

  • This skill owns normalized reporting for an attempted command.
  • It may receive execution evidence from the main skill or a thin helper.
  • It does not choose the overall target on its own.
  • It does not perform broad paper analysis.
  • It should not normalize risky code edits into acceptable practice.

Input expectations

  • selected reproduction goal
  • runnable commands or smoke commands
  • environment and asset assumptions
  • optional patch metadata
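As a rough sketch, the inputs above could be carried in a small Python structure. The field names below are illustrative assumptions, not the skill's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class RunRequest:
    # The reproduction goal selected by the main skill, e.g. "smoke test".
    goal: str
    # Runnable or smoke commands to attempt, in order.
    commands: list[str]
    # Environment and asset assumptions, e.g. {"python": "3.10"}.
    assumptions: dict[str, str] = field(default_factory=dict)
    # Optional metadata describing repository patches that were applied.
    patches: list[str] = field(default_factory=list)
```

A caller would build one of these from the main skill's plan and hand it to the reporting step; only `goal` and `commands` are required in this sketch.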

Output expectations

  • execution result summary
  • standardized repro_outputs/ files
  • clear distinction between verified, partial, and blocked states
  • PATCHES.md when repo files changed

Notes

Use references/reporting-policy.md and scripts/write_outputs.py.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. All are in the Research category; no upstream summary is provided for any of them, and each repository source is marked "Needs Review".

  • ai-paper-reproduction
  • env-and-assets-bootstrap
  • repo-intake-and-plan
  • paper-context-resolver