test-driven-development

Use this skill when writing new features, fixing bugs, or adding test coverage. Enforces Red-Green-Refactor — write the test first, then the code. Trigger on "add tests", "write tests first", "TDD", "test this feature", "fix this bug" (reproduce with a failing test first), or when starting any new implementation. Prevents testing anti-patterns like over-mocking, test-per-method, and tests that pass but verify nothing.


Install:

```
npx skills add carvalab/k-skills/carvalab-k-skills-test-driven-development
```

Test-Driven Development

Write tests before code. The test is the specification. If you can't write a test, you don't understand the requirement.

Related Skills:

  • kavak-documentation - Query for Kavak-specific testing patterns, kbroker event testing, STS mocking
  • Use kavak-platform/platform_docs_search MCP tool for testing best practices at Kavak

Quick Start

```
# 1. Write failing test first
# 2. Run to see it fail (RED)
# 3. Write minimal code to pass (GREEN)
# 4. Refactor while tests pass (REFACTOR)
# 5. Repeat
```

Test commands by language:

| Language | Run Tests | Watch Mode |
|----------|-----------|------------|
| Go | `go test ./...` | no built-in watch; use a watcher such as `gotestsum --watch` |
| Node/TS | `npm test` | `npm test -- --watch` |
| Python | `pytest` | `pytest-watch` |
| Java | `./mvnw test` | no built-in watch; `./mvnw test -Dtest=<Pattern>` filters tests |

The Red-Green-Refactor Cycle

1. RED: Write Failing Test

- Write ONE test for the next piece of behavior
- Test must fail for the RIGHT reason
- Use descriptive names: should_calculate_total_with_tax
- Follow Arrange-Act-Assert structure

2. GREEN: Make It Pass

- Write MINIMAL code to pass the test
- Don't optimize, don't refactor, don't add features
- "Fake it till you make it" is valid
- The goal is GREEN, not perfect
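Continuing the same hypothetical cart example, the GREEN step writes only enough code to pass:

```python
def cart_total(items, tax_rate):
    # Minimal code to pass the test: no validation, no extra features.
    return round(sum(items) * (1 + tax_rate), 2)

def test_should_calculate_total_with_tax():
    assert cart_total([10.0, 20.0], 0.10) == 33.0
```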

3. REFACTOR: Improve Design

- Clean up code while tests stay green
- Remove duplication
- Improve names
- Extract methods/functions
- Run tests after EVERY change
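A refactor-step sketch on the same hypothetical example: extract a helper while the test stays green.

```python
def subtotal(items):
    # Extracted helper: names improve, duplication goes away,
    # behavior stays identical.
    return sum(items)

def cart_total(items, tax_rate):
    return round(subtotal(items) * (1 + tax_rate), 2)

def test_should_calculate_total_with_tax():
    # Rerun after EVERY refactoring change.
    assert cart_total([10.0, 20.0], 0.10) == 33.0
```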

Common Rationalizations (Resist Them)

| Rationalization | Counter |
|-----------------|---------|
| "Let me just write one more method" | Stop. Test what exists first |
| "I'll add tests after" | You won't. Tests written after verify nothing |
| "It's too simple to test" | Simple now, complex later. Test it |
| "I'll refactor tests later" | Refactor production code, not test structure |
| "This is just scaffolding" | Scaffolding becomes foundation. Test it |

Anti-Patterns (What NOT to Do)

| Anti-Pattern | Problem | Fix |
|--------------|---------|-----|
| The Liar | Test passes but tests nothing | Assert actual behavior |
| The Mockery | Over-mocking hides real bugs | Mock boundaries only |
| Excessive Setup | 50 lines of setup, 2 lines of test | Simplify the SUT or use builders |
| The Slow Poke | Tests take minutes | Isolate; mock I/O |
| The Local Hero | Passes locally, fails in CI | No environment dependencies |
| Test-per-Method | 1:1 test-to-method mapping | Test behaviors, not methods |
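The Liar is the easiest of these to write by accident. A minimal illustration (hypothetical tests):

```python
# "The Liar": passes while verifying nothing.
def test_liar():
    result = sorted([3, 1, 2])
    assert result  # truthy check -- ANY non-empty list passes

# Fix: assert the actual behavior.
def test_sorts_ascending():
    assert sorted([3, 1, 2]) == [1, 2, 3]
```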

Verification Checklist

Before committing, verify your tests:

[ ] Test fails when behavior is removed?
[ ] Test name describes the behavior?
[ ] Arrange-Act-Assert structure clear?
[ ] No test-only code in production?
[ ] Mocks verify behavior, not implementation?
[ ] Edge cases covered?
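The first checklist item can be checked by hand with a crude mutation: strip the behavior and confirm the assertion no longer holds. A sketch, using a hypothetical `cart_total`:

```python
def cart_total(items, tax_rate):
    return round(sum(items) * (1 + tax_rate), 2)

def behavior_removed(items, tax_rate):
    # Crude manual "mutation": the tax behavior is stripped out.
    return round(sum(items), 2)

def total_check(fn):
    # The assertion used by the test, parameterized over the implementation.
    return fn([10.0, 20.0], 0.10) == 33.0

# total_check(cart_total) is True; total_check(behavior_removed) is False,
# so the test really does fail when the behavior is removed.
```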

When TDD Is Mandatory

  • New features (write test first)
  • Bug fixes (write failing test that reproduces bug)
  • Refactoring (tests protect behavior)
  • API changes (contract tests first)
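For the bug-fix case, the regression test is written first against the broken code and only passes once the fix lands. A sketch with a hypothetical cent-truncation bug:

```python
def apply_discount(price, pct):
    # Fixed implementation. The hypothetical bug truncated cents:
    #   return int(price * (1 - pct))   # gave 8, not 8.99, for (9.99, 0.10)
    return round(price * (1 - pct), 2)

def test_regression_discount_keeps_cents():
    # Written FIRST: it failed against the buggy version (RED),
    # passes after the fix, and now guards against regression.
    assert apply_discount(9.99, 0.10) == 8.99
```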

When to Adapt TDD

  • Exploratory/spike work (delete code after, then TDD)
  • UI prototyping (test logic, not layout)
  • Legacy code (add tests before changing)

Test Naming Convention

should_[expected_behavior]_when_[condition]

Examples:
- should_return_zero_when_cart_is_empty
- should_throw_error_when_user_not_found
- should_apply_discount_when_coupon_valid
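One practical note for pytest users: by default pytest only collects functions whose names start with `test_`, so the convention gets that prefix there. A sketch with a hypothetical `cart_total`:

```python
def cart_total(items, tax_rate):  # hypothetical system under test
    return round(sum(items) * (1 + tax_rate), 2)

# test_ prefix for pytest discovery, then the should/when convention:
def test_should_return_zero_when_cart_is_empty():
    assert cart_total([], 0.10) == 0.0
```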

References

| Reference | Purpose |
|-----------|---------|
| references/red-green-refactor.md | Detailed cycle walkthrough |
| references/anti-patterns.md | Full anti-pattern catalog |
| references/examples-go.md | Go TDD examples |
| references/examples-node.md | Node/TypeScript TDD examples |
| references/examples-python.md | Python TDD examples |
| references/examples-java.md | Java TDD examples |
| references/verification-checklist.md | Pre-commit verification |
| references/testing-boundaries.md | What to mock, what not to mock |

Best Practices

  1. One assertion per test - Multiple assertions hide failures
  2. Test behavior, not implementation - Tests survive refactoring
  3. Isolated tests - No shared state between tests
  4. Fast tests - Under 100ms per unit test
  5. Deterministic - Same result every run
  6. Self-documenting - Test name = specification

Principle: If you can't write a test for it, you don't understand what it should do. The test IS the specification.
