write-tests

Write failing tests that encode acceptance criteria.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "write-tests" with this command: npx skills add meta-pytorch/openenv/meta-pytorch-openenv-write-tests

/write-tests

Usage

/write-tests Add logout button to header

When to Use

  • After creating a todo that requires implementation

  • Before running /implement

  • When you have clear acceptance criteria

When NOT to Use

  • Implementation already exists (tests would pass immediately)

  • You're exploring or prototyping (not TDD mode)

  • Just adding to existing test coverage

What It Does

  • Analyzes the current todo/requirement

  • Reads existing tests to understand patterns

  • Writes test files that verify acceptance criteria

  • Verifies tests FAIL (proves they test something real)

  • Returns test file paths for /implement

Output

The tester agent will produce:

Tests Written

Files Created/Modified

  • tests/test_client.py

Tests Added

Test                                     Verifies
test_client_reset_returns_observation    Reset returns valid observation
test_client_step_advances_state          Step mutates state correctly
test_client_handles_invalid_action       Error handling for bad input

Verification

All tests FAIL as expected (no implementation yet).

Next Step

Run /implement to make these tests pass.

Rules

  • Read existing tests first to understand patterns and conventions

  • Test behavior, not implementation - write from user's perspective

  • Integration tests first, then unit tests if needed

  • Each test verifies ONE thing clearly

  • Run tests to verify they fail before returning
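The last rule can be checked mechanically. A minimal sketch, assuming pytest is installed and the new tests live at the illustrative path tests/test_client.py:

```python
# Minimal failure check: run the new tests and insist on a non-zero exit
# code. If pytest exits 0 here, the tests already pass without any
# implementation, which violates the rules above.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "tests/test_client.py", "-q"],
    capture_output=True,
    text=True,
)
assert result.returncode != 0, "tests must FAIL before /implement runs"
```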

Anti-patterns (NEVER do these)

  • Writing tests that pass without implementation

  • Testing implementation details instead of behavior

  • Writing overly complex test setups

  • Adding implementation code (that's /implement's job)

  • Writing tests that duplicate existing coverage

Completion Criteria

Before returning, verify:

  • Tests compile/run successfully (pytest can collect them)

  • Tests FAIL (no implementation yet)

  • Test names clearly describe what they verify

  • Tests follow existing project patterns (see tests/ for examples)
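The collection criterion can be scripted as well; a sketch, again assuming the hypothetical tests/test_client.py path:

```python
# Collection check: ask pytest to collect (but not run) the test file.
# Collection succeeds when the file parses and imports cleanly, even
# though the implementation under test is still missing.
import subprocess
import sys

result = subprocess.run(
    [sys.executable, "-m", "pytest", "--collect-only", "-q",
     "tests/test_client.py"],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```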
