test-review


Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

To install this skill, run:

npx skills add athola/claude-night-market/athola-claude-night-market-test-review

Table of Contents

  • Quick Start

  • When to Use

  • Required TodoWrite Items

  • Progressive Loading

  • Workflow

  • Step 1: Detect Languages (test-review:languages-detected)

  • Step 2: Inventory Coverage (test-review:coverage-inventoried)

  • Step 3: Assess Scenario Quality (test-review:scenario-quality)

  • Step 4: Plan Remediation (test-review:gap-remediation)

  • Step 5: Log Evidence (test-review:evidence-logged)

  • Test Quality Checklist (Condensed)

  • Output Format

  • Summary

  • Framework Detection

  • Coverage Analysis

  • Quality Issues

  • Remediation Plan

  • Recommendation

  • Integration Notes

  • Exit Criteria

  • Troubleshooting

Test Review Workflow

Evaluate and improve test suites with TDD/BDD rigor.

Quick Start

/test-review

Verification: run the project's test suite (e.g. pytest -v for Python projects) to confirm a passing baseline before reviewing.

When to Use

  • Reviewing test suite quality

  • Analyzing coverage gaps

  • Before major releases

  • After test failures

  • Planning test improvements

When NOT To Use

  • Writing new tests - use parseltongue:python-testing

  • Updating existing tests - use sanctum:test-updates

Required TodoWrite Items

  • test-review:languages-detected

  • test-review:coverage-inventoried

  • test-review:scenario-quality

  • test-review:gap-remediation

  • test-review:evidence-logged

Progressive Loading

Load modules as needed based on review depth:

  • Basic review: Core workflow (this file)

  • Framework detection: Load modules/framework-detection.md

  • Coverage analysis: Load modules/coverage-analysis.md

  • Quality assessment: Load modules/scenario-quality.md

  • Remediation planning: Load modules/remediation-planning.md

Workflow

Step 1: Detect Languages (test-review:languages-detected)

Identify testing frameworks and version constraints. → See: modules/framework-detection.md

Quick check:

find . -maxdepth 2 -name "Cargo.toml" -o -name "pyproject.toml" -o -name "package.json" -o -name "go.mod"

Verification: run each detected framework's test runner with --help (e.g. pytest --help, cargo test --help) to confirm it is installed.
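The manifest scan above can be sketched in Python; the manifest-to-framework mapping below is illustrative, not exhaustive, and should be extended for your stack:

```python
from pathlib import Path

# Illustrative mapping from manifest file to the test runner it usually implies.
MANIFEST_FRAMEWORKS = {
    "Cargo.toml": "cargo test",
    "pyproject.toml": "pytest",
    "package.json": "jest / vitest",
    "go.mod": "go test",
}

def detect_frameworks(root: str, max_depth: int = 2) -> dict[str, str]:
    """Scan up to max_depth path components below root for known manifests."""
    found: dict[str, str] = {}
    root_path = Path(root)
    for manifest, framework in MANIFEST_FRAMEWORKS.items():
        for path in root_path.rglob(manifest):
            if len(path.relative_to(root_path).parts) <= max_depth:
                found[str(path)] = framework
    return found
```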

Step 2: Inventory Coverage (test-review:coverage-inventoried)

Run coverage tools and identify gaps. → See: modules/coverage-analysis.md

Quick check:

git diff --name-only | rg 'tests|spec|feature'

Verification: Run pytest -v to verify tests pass.
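A minimal sketch of gap inventory, assuming coverage.py's JSON report format (`coverage json` writes a `files` map with per-file summaries); the 80% threshold is an assumption, not a project standard:

```python
import json

def coverage_gaps(report_path: str, threshold: float = 80.0) -> list[tuple[str, float]]:
    """List files below the coverage threshold, worst first, from a
    coverage.py JSON report (produced by `coverage json`)."""
    with open(report_path) as fh:
        report = json.load(fh)
    gaps = [
        (path, data["summary"]["percent_covered"])
        for path, data in report.get("files", {}).items()
        if data["summary"]["percent_covered"] < threshold
    ]
    return sorted(gaps, key=lambda item: item[1])
```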

Step 3: Assess Scenario Quality (test-review:scenario-quality)

Evaluate test quality using BDD patterns and assertion checks. → See: modules/scenario-quality.md

Focus on:

  • Given/When/Then clarity

  • Assertion specificity

  • Anti-patterns (dead waits, mocking internals, repeated boilerplate)
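The points above can be illustrated with a hypothetical pytest-style test; the function under test and its numbers are invented for the example:

```python
def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_discount_applies_given_valid_percent():
    # Given a priced item and a valid discount
    price, percent = 200.0, 15.0
    # When the discount is applied
    total = apply_discount(price, percent)
    # Then the total reflects exactly that discount (a specific assertion
    # with context, not a bare `assert total`)
    assert total == 170.0, f"expected 170.0, got {total}"
```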

Step 4: Plan Remediation (test-review:gap-remediation)

Create concrete improvement plan with owners and dates. → See: modules/remediation-planning.md
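Plan items can be kept in a small structure so owners and dates are never omitted; a sketch with invented names:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RemediationItem:
    action: str
    owner: str   # every item needs a named owner
    due: date    # and a concrete date

def order_plan(items: list[RemediationItem]) -> list[RemediationItem]:
    """Earliest-due items first, so urgent gaps surface at the top."""
    return sorted(items, key=lambda item: item.due)
```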

Step 5: Log Evidence (test-review:evidence-logged)

Record executed commands, outputs, and recommendations. → See: imbue:proof-of-work
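A minimal sketch of evidence capture, assuming a JSON-lines log file; the path and entry schema are invented for illustration:

```python
import datetime
import json
import subprocess

def log_evidence(cmd: list[str], log_path: str = "evidence.jsonl") -> int:
    """Run a command, append command/output/exit code to a JSONL log,
    and return the exit code."""
    result = subprocess.run(cmd, capture_output=True, text=True)
    entry = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "command": " ".join(cmd),
        "exit_code": result.returncode,
        "stdout": result.stdout,
        "stderr": result.stderr,
    }
    with open(log_path, "a") as fh:
        fh.write(json.dumps(entry) + "\n")
    return result.returncode
```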

Test Quality Checklist (Condensed)

  • Clear test structure (Arrange-Act-Assert)

  • Critical paths covered (auth, validation, errors)

  • Specific assertions with context

  • No flaky tests (dead waits, order dependencies)

  • Reusable fixtures/factories
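The last two bullets can be sketched together: a reusable factory with sensible defaults, used inside an Arrange-Act-Assert test (the user shape and rule are invented):

```python
def make_user(**overrides):
    """Reusable factory: sensible defaults, per-test overrides."""
    user = {"name": "test-user", "active": True}
    user.update(overrides)
    return user

def test_inactive_user_is_flagged():
    # Arrange: build a user that is explicitly inactive
    user = make_user(active=False)
    # Act: evaluate the rule under test
    flagged = not user["active"]
    # Assert: the rule fires for inactive users
    assert flagged is True
```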

Output Format

Summary

[Brief assessment]

Framework Detection

  • Languages: [list] | Frameworks: [list] | Versions: [constraints]

Coverage Analysis

  • Overall: X% | Critical: X% | Gaps: [list]

Quality Issues

[Q1] [Issue] - Location - Fix

Remediation Plan

  1. [Action] - Owner - Date

Recommendation

Approve / Approve with actions / Block


Integration Notes

  • Use imbue:proof-of-work for reproducible evidence capture

  • Reference imbue:diff-analysis for risk assessment

  • Format output using imbue:structured-output patterns

Exit Criteria

  • Frameworks detected and documented

  • Coverage analyzed and gaps identified

  • Scenario quality assessed

  • Remediation plan created with owners and dates

  • Evidence logged with citations

Troubleshooting

Common Issues

Tests not discovered: ensure test files match the pattern test_*.py or *_test.py, then run pytest --collect-only to verify discovery.

Import errors: check that the module under test is on PYTHONPATH, or install the package in editable mode with pip install -e .

Async tests failing: install pytest-asyncio and decorate async test functions with @pytest.mark.asyncio.
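For the async case, a sketch: the fetch function below is a stand-in for real async I/O, and the commented marker shows how pytest-asyncio would pick the test up.

```python
import asyncio

async def fetch_payload() -> dict:
    """Stand-in for real async I/O."""
    await asyncio.sleep(0)
    return {"status": "ok"}

# With pytest-asyncio installed, pytest runs the coroutine directly:
#
#     @pytest.mark.asyncio
#     async def test_fetch_returns_payload(): ...
#
# Without the plugin, the same coroutine can be driven with asyncio.run().
async def test_fetch_returns_payload():
    payload = await fetch_payload()
    assert payload["status"] == "ok"
```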

