test-harness

Generates comprehensive pytest test suites: happy path, edge cases, error conditions, fixture scaffolding, mock strategy, and async patterns. Analyzes function signatures, dependency chains, and complexity hotspots to produce runnable, parametrized test files. Triggers on: "generate tests", "write tests for", "test this function", "test this class", "create test suite", "what tests should I write", "pytest for", "unit tests for", "mock strategy for", "how to test", "test coverage for", "write a test file". Use this skill when given a Python function, class, or module and asked to produce tests.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "test-harness" with this command: npx skills add mathews-tom/praxis-skills/mathews-tom-praxis-skills-test-harness

Test Harness

Systematic test suite generation that transforms source code into comprehensive, runnable pytest files. Analyzes function signatures, dependency graphs, and complexity hotspots to produce tests covering happy paths, boundary conditions, error states, and async flows — with properly scoped fixtures and focused mocks.

Reference Files

| File | Contents | Load When |
|---|---|---|
| references/pytest-patterns.md | Fixture scopes, parametrize, marks, conftest layout, built-in fixtures | Always |
| references/mock-strategies.md | Mock decision tree, patch boundaries, assertions, anti-patterns | Target has external dependencies |
| references/async-testing.md | pytest-asyncio modes, event loop fixtures, async mocking | Target contains async code |
| references/fixture-design.md | Factory fixtures, yield teardown, scope selection, composition | Test requires non-trivial setup |
| references/coverage-targets.md | Threshold table, branch vs line, pytest-cov config, exclusion patterns | Coverage assessment requested |

Prerequisites

  • pytest >= 7.0
  • Python >= 3.10
  • pytest-asyncio — required only when generating async tests
  • pytest-mock — optional, provides mocker fixture as alternative to unittest.mock

Workflow

Phase 1: Reconnaissance

Before writing a single test, build a model of the target code:

  1. Identify scope — What functions, classes, or modules need tests? If unspecified, check for recent modifications: git diff --name-only HEAD~5
  2. Read function signatures — Parameters, types, return types, defaults. Every parameter is a test dimension.
  3. Map dependencies — Which calls go to external systems (DB, API, filesystem, clock)? These are mock candidates.
  4. Detect complexity hotspots — Functions with high branch counts, deep nesting, or multiple return paths need more test cases.
  5. Check existing tests — If tests already exist, understand what they cover. Do not duplicate; extend.
  6. Read project conventions — Check CLAUDE.md, conftest.py, pytest.ini/pyproject.toml for fixtures, markers, and test organization patterns already in use.
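Step 2 of the reconnaissance can be automated with the standard library. A minimal sketch, using a hypothetical `price_after_discount` target to show how each parameter becomes a test dimension:

```python
import inspect

def price_after_discount(price: float, rate: float = 0.1, *, currency: str = "USD") -> float:
    """Hypothetical target function, used only to illustrate signature analysis."""
    return round(price * (1 - rate), 2)

# Each parameter is a test dimension: name, annotated type, and whether a
# default exists (defaulted parameters also need an omitted-argument case).
sig = inspect.signature(price_after_discount)
dimensions = [
    (name, param.annotation, param.default is not inspect.Parameter.empty)
    for name, param in sig.parameters.items()
]

for name, annotation, has_default in dimensions:
    print(f"{name}: {annotation} (has default: {has_default})")
```

Keyword-only parameters like `currency` appear in `sig.parameters` as well, so they are enumerated alongside positional ones.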

Phase 2: Test Case Enumeration

For each function under test, enumerate cases across four categories:

| Category | What to Test | Example |
|---|---|---|
| Happy path | Expected inputs produce expected outputs | add(2, 3) returns 5 |
| Boundary | Edge values at limits of valid input | Empty string, zero, max int, single element |
| Error | Invalid inputs trigger proper exceptions | None where str expected, negative index |
| State | State transitions produce correct side effects | Object moves from pending to active |

For each case, note:

  • Input values (concrete, not abstract)
  • Expected output or exception
  • Required setup (fixtures)
  • Required mocks (external calls to suppress)

Parametrize cases that share the same test logic but differ only in input/output values.
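The enumeration above maps directly onto a parametrized test. A sketch, using a hypothetical `clamp` function so the happy-path, boundary, and error cases are all concrete:

```python
import pytest

def clamp(value: int, low: int = 0, high: int = 10) -> int:
    """Hypothetical target: bound value to the range [low, high]."""
    if low > high:
        raise ValueError("low must not exceed high")
    return max(low, min(value, high))

# Happy-path and boundary cases share the same test logic, so they are
# parametrized; `ids` gives each case a readable name in the test report.
@pytest.mark.parametrize(
    "value, expected",
    [(5, 5), (0, 0), (10, 10), (-1, 0), (11, 10)],
    ids=["middle", "low-edge", "high-edge", "below", "above"],
)
def test_clamp(value, expected):
    assert clamp(value, 0, 10) == expected

# The error case has different logic (an exception), so it stays separate.
def test_clamp_invalid_range():
    with pytest.raises(ValueError, match="low must not exceed"):
        clamp(5, low=10, high=0)
```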

Phase 3: Fixture Design

  1. Identify shared setup — If 3+ tests need the same object, extract a fixture.

  2. Select scope — Use the narrowest scope that avoids redundant setup:

    | Scope | Use When | Example |
    |---|---|---|
    | function | Default. Each test gets fresh state | Most unit tests |
    | class | Tests within a class share expensive setup | DB connection per test class |
    | module | All tests in a file share setup | Loaded config file |
    | session | Entire test run shares setup | Docker container startup |
  3. Design teardown — Use yield fixtures when cleanup is needed. Never leave side effects (temp files, DB rows, monkey-patches) after a test.

  4. Identify conftest candidates — Fixtures used across multiple test files belong in conftest.py. Fixtures used in one file stay in that file.
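Steps 3 and 4 come together in the yield-fixture pattern. A minimal sketch (the fixture body is kept as a named generator so the teardown behavior is easy to demonstrate; the usual form is a plain `@pytest.fixture` decorator):

```python
import os
import shutil
import tempfile

import pytest

def make_workspace():
    """Yield fixture body: code before `yield` is setup, code after is teardown."""
    path = tempfile.mkdtemp(prefix="test_ws_")
    yield path                                # the test runs here
    shutil.rmtree(path, ignore_errors=True)   # cleanup: no temp dirs left behind

# Register the generator as a function-scoped fixture.
workspace = pytest.fixture(make_workspace)

def test_writes_report(workspace):
    """Uses the fixture; teardown removes the directory after the test."""
    target = os.path.join(workspace, "report.txt")
    with open(target, "w") as fh:
        fh.write("data")
    assert os.path.exists(target)
```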

Phase 4: Mock Strategy

  1. Decide what to mock — Mock external dependencies only:

    • Network calls (API, database, message queues)
    • Filesystem operations (when testing logic, not I/O)
    • Time-dependent behavior (datetime.now, time.sleep)
    • Random/non-deterministic behavior
  2. Decide what NOT to mock — Never mock:

    • The function under test
    • Pure functions called by the target (test them through the target)
    • Data structures and value objects
  3. Choose mock level — Patch at the import boundary of the module under test, not at the definition site. @patch('mymodule.requests.get'), not @patch('requests.get').

  4. Add mock assertions — Every mock should assert it was called with expected arguments and the expected number of times. Mocks without assertions are coverage holes.
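Steps 3 and 4 in one sketch. The `svc` module here is hypothetical and assembled inline so the example is self-contained; in practice it would be a real application module on disk:

```python
import sys
import types
from unittest.mock import patch

# Hypothetical application module with one external dependency (fetch)
# and one function under test (lookup).
svc = types.ModuleType("svc")

def _fetch(key):
    raise RuntimeError("real network call: disabled in tests")

def _lookup(key):
    # Function under test: reaches the dependency through its own module.
    return svc.fetch(key).upper()

svc.fetch = _fetch
svc.lookup = _lookup
sys.modules["svc"] = svc

# Patch at the import boundary of the module under test ("svc.fetch"),
# then assert both the call arguments and the call count.
with patch("svc.fetch", return_value="alice") as mock_fetch:
    result = svc.lookup("user:1")

assert result == "ALICE"
mock_fetch.assert_called_once_with("user:1")
```

Outside the `with` block the real `fetch` is restored automatically, so no monkey-patch leaks into other tests.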

Phase 5: Output

Generate the test file following this structure:

  1. Imports (pytest, mocks, target module)
  2. Constants and test data
  3. Fixtures (ordered by scope: session > module > class > function)
  4. Test classes or functions grouped by target function
  5. Parametrized tests where applicable

Output Format

# tests/test_{module}.py

import pytest
from unittest.mock import Mock, patch, MagicMock

from {module} import {target_function}, {TargetClass}


# ============================================================
# Fixtures
# ============================================================

@pytest.fixture
def valid_input():
    """Standard valid input for happy path tests."""
    return {concrete values}


@pytest.fixture
def mock_database():
    """Mock database connection."""
    with patch("{module}.db_connection") as mock_db:
        mock_db.query.return_value = [{expected data}]
        yield mock_db


# ============================================================
# {target_function} Tests
# ============================================================

class TestTargetFunction:
    """Tests for {target_function}."""

    def test_happy_path(self, valid_input):
        """Returns expected result for valid input."""
        result = target_function(valid_input)
        assert result == {expected}

    @pytest.mark.parametrize(
        "input_val, expected",
        [
            ({boundary_1}, {expected_1}),
            ({boundary_2}, {expected_2}),
            ({boundary_3}, {expected_3}),
        ],
        ids=["empty", "single", "maximum"],
    )
    def test_boundary_conditions(self, input_val, expected):
        """Handles boundary inputs correctly."""
        assert target_function(input_val) == expected

    def test_invalid_input_raises(self):
        """Raises TypeError for invalid input."""
        with pytest.raises(TypeError, match="expected str"):
            target_function(None)

    def test_external_call(self, mock_database):
        """Calls database with correct query."""
        target_function("lookup_key")
        mock_database.query.assert_called_once_with("SELECT * FROM t WHERE key = %s", ("lookup_key",))

Configuring Scope

| Mode | Scope | Depth | When to Use |
|---|---|---|---|
| quick | Single function | Happy path + 1 error case | Rapid iteration, TDD red-green cycle |
| standard | File or class | Happy + boundary + error + mocks | Default for most requests |
| comprehensive | Module or package | All categories + async + parametrized matrix | Pre-release, critical path code |

Calibration Rules

  1. Test isolation is non-negotiable. Every test must pass when run alone and in any order. No test may depend on the side effects of another test.
  2. Mock discipline. Mock external dependencies, not internal logic. Over-mocking produces tests that pass when the code is broken. Under-mocking produces tests that fail when the network is down.
  3. Concrete over abstract. Test data must be concrete values, not placeholders. "alice@example.com" not "test_email". 42 not "some_number". Concrete values catch type mismatches that abstract placeholders mask.
  4. One assertion focus per test. A test should verify one behavior. Multiple assertions are acceptable when they verify different aspects of the same behavior (e.g., return value AND side effect), but not when they verify unrelated behaviors.
  5. Parametrize, don't duplicate. If two tests differ only in input/output values, combine them with @pytest.mark.parametrize. Use ids for readable test names.
  6. Match project conventions. If the project uses conftest.py fixtures, class-based tests, or specific markers, follow those patterns. Do not introduce a conflicting test style.

Error Handling

| Problem | Resolution |
|---|---|
| Target function has no type hints | Infer types from usage patterns, default values, and docstrings. Note uncertainty in test docstring. |
| Target has deeply nested dependencies | Mock at the nearest boundary to the function under test. Do not mock transitive dependencies individually. |
| No existing test infrastructure (no conftest, no pytest config) | Generate a minimal conftest.py alongside the test file. Note the addition in output. |
| Target code is untestable (global state, hidden dependencies) | Flag the design issue in the output. Generate tests for what is testable. Suggest refactoring to improve testability. |
| Async code detected but pytest-asyncio not installed | Note the dependency requirement. Generate async test stubs with @pytest.mark.asyncio and instruct user to install. |
| Target module cannot be imported | Report the import error. Do not generate tests for unimportable code. |
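An async test stub looks like the following sketch, with a hypothetical `fetch_status` coroutine standing in for the target. The marker requires pytest-asyncio at run time; with `asyncio_mode = "auto"` configured, the marker can be omitted:

```python
import asyncio

import pytest

async def fetch_status(delay: float = 0.0) -> str:
    """Hypothetical async target."""
    await asyncio.sleep(delay)
    return "ok"

# Stub generated when async code is detected; pytest-asyncio must be
# installed for pytest to collect and run this coroutine test.
@pytest.mark.asyncio
async def test_fetch_status_returns_ok():
    assert await fetch_status() == "ok"
```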

When NOT to Generate Tests

Push back if:

  • The code is auto-generated (protobuf, OpenAPI client, ORM models) — test the generator or the schema, not the output
  • The request is for UI/E2E tests — this skill generates unit and integration tests only
  • The code has no clear behavior to test (pure configuration, constant definitions)
  • The user wants tests for third-party library code — test your usage of the library, not the library itself

