perf-benchmarker

Run sequential benchmarks with strict duration rules.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "perf-benchmarker" with this command: npx skills add avifenesh/agentsys/avifenesh-agentsys-perf-benchmarker

Follow docs/perf-requirements.md as the canonical contract.

Parse Arguments

const args = '$ARGUMENTS'.split(' ').filter(Boolean);
const command = args.find(a => !a.match(/^\d+$/)) || '';
const duration = parseInt(args.find(a => a.match(/^\d+$/)) || '60', 10);
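As a sketch of how the parsing above behaves: the first non-numeric token becomes the command and the first numeric token becomes the duration, defaulting to 60 when no number is supplied. Here the `'$ARGUMENTS'` placeholder is replaced with a hypothetical example string:

```javascript
// Hypothetical stand-in for the '$ARGUMENTS' template placeholder.
const ARGUMENTS = 'run 120';

const args = ARGUMENTS.split(' ').filter(Boolean);
const command = args.find(a => !a.match(/^\d+$/)) || '';                 // first non-numeric token
const duration = parseInt(args.find(a => a.match(/^\d+$/)) || '60', 10); // first numeric token, default 60

console.log(command, duration); // → run 120
```

Note that an input of only `'90'` yields an empty command with a 90-second duration, and an empty input yields the 60-second default.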

Required Rules

  • Benchmarks MUST run sequentially (never parallel).

  • Minimum duration: 60s per run (30s only for binary search).

  • Warmup: 10s minimum before measurement.

  • Re-run any anomalous results before recording them.
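The rules above can be sketched as a sequential runner. This is a hypothetical helper, not part of the skill itself: `runBenchmark` and the scenario list are assumed placeholders, and the 30s/60s minimums and 10s warmup come from the rules above.

```javascript
// Sketch of a sequential benchmark loop enforcing the duration rules.
const MIN_WARMUP_S = 10;

function minDuration(phase) {
  // 30s is permitted only during a binary-search phase; 60s otherwise.
  return phase === 'binary-search' ? 30 : 60;
}

async function runAll(scenarios, runBenchmark, phase = 'normal') {
  const results = [];
  for (const s of scenarios) {
    // Sequential by construction: await each run; never Promise.all.
    const duration = Math.max(s.duration ?? 0, minDuration(phase));
    const opts = { warmup: MIN_WARMUP_S, duration };
    let result = await runBenchmark(s.name, opts);
    if (result.anomaly) {
      // Re-run anomalous results once before recording.
      result = await runBenchmark(s.name, opts);
    }
    results.push(result);
  }
  return results;
}
```

The `for...of` loop with `await` (rather than `Promise.all`) is what makes the runs strictly sequential.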

Output Format

command: <benchmark command>
duration: <seconds>
warmup: <seconds>
results: <metrics summary>
notes: <anomalies or reruns>

Output Contract

Benchmarks MUST emit a JSON metrics block between markers:

PERF_METRICS_START
{"scenarios":{"low":{"latency_ms":120},"high":{"latency_ms":450}}}
PERF_METRICS_END
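A consumer of this contract can extract the JSON between the markers with a regex. This is a minimal sketch, assuming the block appears at most once in the captured output; `parseMetrics` is a hypothetical helper name:

```javascript
// Extract and parse the metrics JSON emitted between the markers.
function parseMetrics(output) {
  const m = output.match(/PERF_METRICS_START\s*([\s\S]*?)\s*PERF_METRICS_END/);
  if (!m) throw new Error('no PERF_METRICS block found in output');
  return JSON.parse(m[1]);
}
```

The non-greedy `[\s\S]*?` lets the JSON span multiple lines while stopping at the first `PERF_METRICS_END`.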

Constraints

  • No short runs unless binary-search phase.

  • Do not change code while benchmarking.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. Each of the following is an Automation skill from the same repository source; no summary is provided by the upstream source, and all are marked "Needs Review":

  • consult

  • debate

  • validate-delivery

  • enhance-skills