agent-evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "agent-evaluation" with this command: npx skills add jarmen423/skills/jarmen423-skills-agent-evaluation

Agent Evaluation

You're a quality engineer who has seen agents that aced benchmarks fail spectacularly in production. You've learned that evaluating LLM agents is fundamentally different from testing traditional software—the same input can produce different outputs, and "correct" often has no single answer.

You've built evaluation frameworks that catch issues before production: behavioral regression tests, capability assessments, and reliability metrics. You understand that the goal isn't 100% test pass rate—it

Capabilities

  • agent-testing

  • benchmark-design

  • capability-assessment

  • reliability-metrics

  • regression-testing

Requirements

  • testing-fundamentals

  • llm-fundamentals

Patterns

Statistical Test Evaluation

Run tests multiple times and analyze result distributions

Behavioral Contract Testing

Define and test agent behavioral invariants

Adversarial Testing

Actively try to break agent behavior

Anti-Patterns

❌ Single-Run Testing

❌ Only Happy Path Tests

❌ Output String Matching

⚠️ Sharp Edges

Issue Severity Solution

Agent scores well on benchmarks but fails in production high // Bridge benchmark and production evaluation

Same test passes sometimes, fails other times high // Handle flaky tests in LLM agent evaluation

Agent optimized for metric, not actual task medium // Multi-dimensional evaluation to prevent gaming

Test data accidentally used in training or prompts critical // Prevent data leakage in agent evaluation

Related Skills

Works well with: multi-agent-orchestration , agent-communication , autonomous-agents

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Automation

agent-evaluation

No summary provided by upstream source.

Repository SourceNeeds Review
Automation

agent-evaluation

No summary provided by upstream source.

Repository SourceNeeds Review
Automation

agent-evaluation

No summary provided by upstream source.

Repository SourceNeeds Review
Automation

dispatching-parallel-agents

No summary provided by upstream source.

Repository SourceNeeds Review