testing-patterns

Universal testing principles including TDD workflow, role-based test matrices, and seed data factory patterns. Use this skill when writing or reviewing tests to ensure consistent structure, meaningful coverage, and maintainable test suites regardless of language or framework.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "testing-patterns" with this command: npx skills add loomcrafthq/skills/loomcrafthq-skills-testing-patterns

Testing Patterns

TDD Workflow

Follow the Red-Green-Refactor cycle for every unit of behavior.

Red → Green → Refactor
  1. Red — Write a failing test that describes the expected behavior.
  2. Green — Write the minimum code to make the test pass.
  3. Refactor — Clean up the implementation without changing behavior. Tests stay green.

Never skip the Red step. If you write code before the test, you don't know if the test actually verifies anything.
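
As a minimal TypeScript sketch of the cycle (slugify is a hypothetical unit invented for illustration, not part of the skill):

```typescript
// Red: this test was written first and failed before slugify existed
function testSlugify(): void {
  const result = slugify("  Hello World  ");
  if (result !== "hello-world") throw new Error(`expected "hello-world", got "${result}"`);
}

// Green: the minimum implementation that makes the test pass
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, "-");
}

// Refactor: clean up the implementation while this keeps passing
testSlugify();
```

Because the test existed before the implementation, its initial failure proved it actually verifies something.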

Test Structure — Arrange, Act, Assert

Every test should have three clearly separated sections.

// Arrange — set up the preconditions
[create test data, configure mocks, initialize state]

// Act — execute the behavior under test
[call the function, trigger the action]

// Assert — verify the outcome
[check return values, verify side effects, assert state changes]

Rules

  • One Act per test. If you have multiple acts, you have multiple tests.
  • Keep Arrange minimal. Only set up what this specific test needs.
  • Assert outcomes, not implementation. Test what happened, not how it happened internally.
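
The template above, made concrete in a TypeScript sketch — calculateDiscount and its 10%-over-threshold rule are illustrative assumptions:

```typescript
// Hypothetical unit under test: 10% discount on orders of 100 or more
interface Order { total: number; }

function calculateDiscount(order: Order): number {
  return order.total >= 100 ? order.total / 10 : 0;
}

function testDiscountOverThreshold(): void {
  // Arrange — only what this specific test needs
  const order: Order = { total: 200 };

  // Act — exactly one action
  const discount = calculateDiscount(order);

  // Assert — the outcome, not the internals
  if (discount !== 20) throw new Error(`expected 20, got ${discount}`);
}
testDiscountOverThreshold();
```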

Role-Based Test Matrix

For any feature with access control, test every role explicitly.

| Scenario | PUBLIC (unauthenticated) | USER (authenticated) | ADMIN |
|---|---|---|---|
| Read own data | 401 Unauthorized | 200 OK | 200 OK |
| Read other's data | 401 Unauthorized | 403 Forbidden | 200 OK |
| Create resource | 401 Unauthorized | 201 Created | 201 Created |
| Update own resource | 401 Unauthorized | 200 OK | 200 OK |
| Update other's resource | 401 Unauthorized | 403 Forbidden | 200 OK |
| Delete resource | 401 Unauthorized | 403 Forbidden | 200 OK |

Adapt the matrix to your application's roles. The key principle is: every role-action combination is an explicit test case, not an assumption.
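
One row of the matrix ("Delete resource") driven as data, sketched in TypeScript — deleteResource and its status codes are a hypothetical stand-in for your access-control layer:

```typescript
type Role = "PUBLIC" | "USER" | "ADMIN";

// Hypothetical endpoint behavior matching the matrix row
function deleteResource(role: Role): number {
  if (role === "PUBLIC") return 401; // unauthenticated
  if (role === "USER") return 403;   // authenticated but not permitted
  return 200;                        // ADMIN may delete
}

// Every role-action combination is an explicit case, not an assumption
const expectations: Array<[Role, number]> = [
  ["PUBLIC", 401],
  ["USER", 403],
  ["ADMIN", 200],
];

for (const [role, expected] of expectations) {
  const status = deleteResource(role);
  if (status !== expected) {
    throw new Error(`${role}: expected ${expected}, got ${status}`);
  }
}
```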

What to Test

Do Test

  • Business logic and services — the core of your application
  • Edge cases — empty inputs, boundary values, nulls, maximum lengths
  • Error paths — what happens when things fail (invalid input, missing data, downstream errors)
  • Authorization rules — every role-action combination (see matrix above)
  • State transitions — valid transitions succeed, invalid ones are rejected
  • Data transformations — input-to-output mapping for pure functions

Don't Test

  • Framework internals (routing plumbing, ORM query building)
  • Third-party library behavior
  • Trivial getters/setters with no logic
  • Implementation details that could change without affecting behavior

Rule of thumb: Test the contract (inputs → outputs), not the wiring (which internal function called which).
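
For example, a contract-style test of a pure transformation — normalizeTags and its rules are hypothetical, shown here in TypeScript:

```typescript
// Contract: trim, lowercase, drop empties, dedupe — input to output, no wiring
function normalizeTags(tags: string[]): string[] {
  const cleaned = tags.map(t => t.trim().toLowerCase()).filter(t => t.length > 0);
  return [...new Set(cleaned)];
}

// Edge cases: empty input, whitespace-only entries, duplicates
if (normalizeTags([]).length !== 0) throw new Error("empty input should stay empty");

const result = normalizeTags([" API ", "api", "   ", "Db"]);
if (result.join(",") !== "api,db") throw new Error(`unexpected: ${result.join(",")}`);
```

The tests never ask which helper ran first; they only pin the input-to-output mapping.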

Seed Data Factory Pattern

Create reusable factory functions that produce valid test entities with sensible defaults. Override only what matters for each test.

Principles

  • A factory returns a valid, complete entity by default
  • Each test overrides only the fields relevant to its assertion
  • Factories compose: a factory for an Order can use a factory for a User
  • Factories do NOT touch the database — they produce plain objects. Persistence is a separate concern.

Conceptual Example

createUser(overrides)
  → merge(defaultUser, overrides)
  → return complete User object

// Test: email validation
user = createUser({ email: "invalid" })
result = validateUser(user)
assert result has error on "email"

// Test: admin permissions
admin = createUser({ role: "ADMIN" })
result = canDeleteResource(admin, someResource)
assert result is true

This pattern eliminates brittle test setup, makes tests self-documenting, and prevents coupling between unrelated tests.
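
A concrete factory sketch in TypeScript — the User and Order shapes and their defaults are assumptions for illustration:

```typescript
interface User { id: string; email: string; role: "USER" | "ADMIN"; }
interface Order { id: string; customer: User; total: number; }

function createUser(overrides: Partial<User> = {}): User {
  // Valid, complete entity by default; no database access
  return { id: "user-1", email: "test@example.com", role: "USER", ...overrides };
}

// Factories compose: the Order factory reuses the User factory
function createOrder(overrides: Partial<Order> = {}): Order {
  return { id: "order-1", customer: createUser(), total: 100, ...overrides };
}

// Each test overrides only what it asserts on; defaults stay valid
const admin = createUser({ role: "ADMIN" });
const order = createOrder({ total: 250 });
if (admin.role !== "ADMIN" || order.total !== 250) throw new Error("override failed");
```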

Test Naming

Use a consistent naming pattern that describes the scenario and expected outcome.

[unit under test] — [scenario] — [expected result]

Examples:

  • validateEmail — empty string — returns validation error
  • calculateDiscount — order over threshold — applies percentage discount
  • deleteUser — non-admin caller — returns forbidden

Good test names serve as living documentation. If a test fails, the name should tell you what broke without reading the test body.

Test Isolation

  • Each test must be independent. No test should depend on another test's execution or side effects.
  • Reset shared state between tests (database, in-memory stores, global variables).
  • Avoid shared mutable variables across tests — prefer fresh setup in each test.
  • Tests must pass when run individually, in any order, and in parallel.
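
A sketch of isolation via fresh setup — InMemoryStore is an assumed stand-in for any shared state:

```typescript
// Shared state is created fresh inside each test, never at module scope
class InMemoryStore {
  private items = new Map<string, string>();
  put(key: string, value: string): void { this.items.set(key, value); }
  size(): number { return this.items.size; }
}

function testPutStoresOneItem(): void {
  const store = new InMemoryStore(); // fresh instance
  store.put("a", "1");
  if (store.size() !== 1) throw new Error("expected one item");
}

function testStartsEmpty(): void {
  const store = new InMemoryStore(); // independent of any other test
  if (store.size() !== 0) throw new Error("expected empty store");
}

// Order does not matter — each passes individually, in any order
testPutStoresOneItem();
testStartsEmpty();
```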

Do / Don't

| Do | Don't |
|---|---|
| Write the test first (Red step) | Write tests after the fact to hit a coverage number |
| Test one behavior per test | Cram multiple assertions for different behaviors into one test |
| Assert on outcomes and outputs | Assert on internal method calls or execution order |
| Use factory functions for test data | Copy-paste setup blocks across tests |
| Name tests as scenario → expected outcome | Name tests test1, test2, or "should work" |
| Test edge cases and error paths explicitly | Only test the happy path |
| Keep tests fast (milliseconds, not seconds) | Let slow I/O or network calls into unit tests |
| Use mocks/stubs for external dependencies | Mock the unit under test itself |
| Clean up state between tests | Let tests depend on execution order |

Anti-Patterns

| Anti-Pattern | Why It Hurts | Fix |
|---|---|---|
| Ice cream cone (lots of E2E, few unit tests) | Slow feedback, flaky suite, hard to diagnose failures | Invert the pyramid: many unit tests, fewer integration, minimal E2E |
| Test the mock | Test passes but verifies nothing real | Assert on outputs and observable side effects, not mock internals |
| Invisible arrangement | Shared setup in a distant beforeAll makes tests unreadable | Inline setup or use factories; each test should be readable on its own |
| Flaky by design | Tests depend on timing, network, or random data | Eliminate non-determinism; use fixed seeds, stubs, and controlled clocks |
| Coverage theater | 100% line coverage with no meaningful assertions | Focus on behavioral coverage; every test should be able to fail meaningfully |
| Copy-paste tests | Maintenance nightmare when the contract changes | Extract shared setup into factories; parameterize similar tests |
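
The "test the mock" fix, sketched in TypeScript — convert and RateFetcher are hypothetical names for a unit and its external dependency:

```typescript
// The unit under test receives its external dependency as a parameter
type RateFetcher = (currency: string) => number;

function convert(amount: number, currency: string, fetchRate: RateFetcher): number {
  return amount * fetchRate(currency);
}

// Stub the dependency with a fixed, deterministic value
const stubRate: RateFetcher = () => 2;

// Assert the observable output — not that the stub was called
const result = convert(10, "EUR", stubRate);
if (result !== 20) throw new Error(`expected 20, got ${result}`);
```

The stub keeps the test deterministic, but the assertion still exercises real logic in convert.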
