implement

Make failing tests pass with minimal code.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and the repository's scripts before running anything.

Install the skill "implement" with this command:

npx skills add meta-pytorch/openenv/meta-pytorch-openenv-implement

/implement

Make failing tests pass with minimal code.

Usage

/implement

The implementer will automatically find failing tests from the most recent /write-tests run.

When to Use

  • After /write-tests has created failing tests

  • When you have specific tests that need implementation

  • Never before tests exist

When NOT to Use

  • No failing tests exist (run /write-tests first)

  • You want to add features not covered by tests

  • You want to refactor (use /simplify instead)

What It Does

  • Finds the failing tests from /write-tests

  • Reads tests to understand requirements

  • Writes the minimum code to make tests pass

  • Runs tests after each change

  • Stops when ALL tests pass
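
As a toy illustration of this loop (standalone Python with made-up names, not the actual openenv API), a failing test pins down the behavior, and the implementer writes only enough code to satisfy it:

```python
# The failing test (authored earlier by /write-tests) specifies the contract:
def test_client_reset_returns_observation():
    client = Client()
    assert client.reset() is not None

# Minimal implementation: just enough to make the assertion pass.
# No configuration, no logging, no methods the tests never call.
class Client:
    def reset(self):
        return {}  # placeholder observation; the test only checks non-None

# Re-running the test after the change confirms progress.
test_client_reset_returns_observation()
```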

Output

The implementer agent will produce a report like the following:

Implementation Complete

Tests Passed

  • test_client_reset_returns_observation
  • test_client_step_advances_state
  • test_client_handles_invalid_action

Changes Made

File                          Change
src/openenv/core/client.py    Added reset() method
src/openenv/core/client.py    Added step() method
src/openenv/core/client.py    Added input validation

Verification

PYTHONPATH=src:envs uv run pytest tests/test_client.py -v

All 3 tests passed.

Next Steps

  • Mark todo as complete
  • Consider /simplify if change was large
  • Move to next pending todo

Rules

  • Read the failing tests first to understand exactly what's needed

  • Write the MINIMUM code needed to pass tests

  • Run tests after each change to verify progress

  • Do NOT add extra features not covered by tests

  • Do NOT refactor existing code (that is the job of /simplify)

  • Stop when all tests pass
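
"Minimum code" is literal. If, say, a test only asserts that an invalid action raises ValueError, the minimal change is a single guard clause, not a validation framework. An illustrative sketch with hypothetical names:

```python
class Client:
    VALID_ACTIONS = {"reset", "step"}  # hypothetical action vocabulary

    def step(self, action):
        # The test only checks that bad input raises ValueError,
        # so one guard clause is the entire "validation layer".
        if action not in self.VALID_ACTIONS:
            raise ValueError(f"invalid action: {action!r}")
        return {"action": action}
```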

Anti-patterns (NEVER do these)

  • Adding features not covered by tests

  • Refactoring existing code

  • Writing additional tests (that is the job of /write-tests)

  • Over-engineering solutions

  • Adding comments or documentation beyond what's necessary

  • "Improving" code that already works

Completion Criteria

Before returning, verify:

  • ALL tests pass

  • No new test failures introduced

  • Implementation is minimal and focused

Philosophy

The implementer is a "code machine": it takes test specifications and produces the minimal code to satisfy them. This keeps implementations focused and prevents scope creep.

Think of it as the second phase of TDD's Red → Green → Refactor cycle. You are "Green": make tests pass, nothing more.
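
The two phases can be seen in miniature (standalone, illustrative Python; slugify is a made-up function, not part of any skill):

```python
# RED: this test fails while slugify is missing or stubbed out.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("Hello World") == "hello-world"

# GREEN: the minimal implementation that satisfies the test.
# No Unicode folding, no configurable separators; no test asks for them.
def slugify(text):
    return text.lower().replace(" ", "-")

test_slugify_lowercases_and_hyphenates()
```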

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • write-tests
  • pre-submit-pr
  • simplify

No summary is provided by the upstream source for these skills; each repository source needs review.