Testing Anti-Patterns

Overview

Tests must verify real behavior, not mock behavior. Mocks are a means to isolate, not the thing being tested.

Core principle: Test what the code does, not what the mocks do.

Following strict TDD prevents these anti-patterns.

The Iron Laws

  1. NEVER test mock behavior
  2. NEVER add test-only methods to production classes
  3. NEVER mock without understanding dependencies

Anti-Pattern 1: Testing Mock Behavior

The violation:

// ❌ BAD: Testing that the mock exists
test('renders sidebar', () => {
  render(<Page />);
  expect(screen.getByTestId('sidebar-mock')).toBeInTheDocument();
});

Why this is wrong:

  • You're verifying the mock works, not that the component works

  • Test passes when mock is present, fails when it's not

  • Tells you nothing about real behavior

Your human partner's correction: "Are we testing the behavior of a mock?"

The fix:

// ✅ GOOD: Test real component or don't mock it
test('renders sidebar', () => {
  render(<Page />); // Don't mock sidebar
  expect(screen.getByRole('navigation')).toBeInTheDocument();
});

// OR if sidebar must be mocked for isolation:
// don't assert on the mock - test Page's behavior with sidebar present
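A sketch of that second option (the module path, component names, and heading text here are hypothetical, not from the original example):

import { render, screen } from '@testing-library/react';
import { vi, test, expect } from 'vitest';
import { Page } from './Page';

// Replace the sidebar with an inert stand-in, but never assert on it
vi.mock('./Sidebar', () => ({ Sidebar: () => null }));

test('page renders its own heading with sidebar mocked', () => {
  render(<Page />);
  // The assertion targets Page's real output, not the mock
  expect(screen.getByRole('heading', { name: /dashboard/i })).toBeInTheDocument();
});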

Gate Function

BEFORE asserting on any mock element:
  Ask: "Am I testing real component behavior or just mock existence?"

  IF testing mock existence:
    STOP - Delete the assertion or unmock the component
    Test real behavior instead

Anti-Pattern 2: Test-Only Methods in Production

The violation:

// ❌ BAD: destroy() only used in tests
class Session {
  async destroy() { // Looks like production API!
    await this._workspaceManager?.destroyWorkspace(this.id);
    // ... cleanup
  }
}

// In tests
afterEach(() => session.destroy());

Why this is wrong:

  • Production class polluted with test-only code

  • Dangerous if accidentally called in production

  • Violates YAGNI and separation of concerns

  • Confuses object lifecycle with entity lifecycle

The fix:

// ✅ GOOD: Test utilities handle test cleanup
// Session has no destroy() - it's stateless in production

// In test-utils/
export async function cleanupSession(session: Session) {
  const workspace = session.getWorkspaceInfo();
  if (workspace) {
    await workspaceManager.destroyWorkspace(workspace.id);
  }
}

// In tests
afterEach(() => cleanupSession(session));

Gate Function

BEFORE adding any method to a production class:
  Ask: "Is this only used by tests?"

  IF yes:
    STOP - Don't add it
    Put it in test utilities instead

  Ask: "Does this class own this resource's lifecycle?"

  IF no:
    STOP - Wrong class for this method

Anti-Pattern 3: Mocking Without Understanding

The violation:

// ❌ BAD: Mock breaks test logic
test('detects duplicate server', async () => {
  // Mock prevents config write that test depends on!
  vi.mock('ToolCatalog', () => ({
    discoverAndCacheTools: vi.fn().mockResolvedValue(undefined)
  }));

  await addServer(config);
  await addServer(config); // Should throw - but won't!
});

Why this is wrong:

  • Mocked method had side effect test depended on (writing config)

  • Over-mocking to "be safe" breaks actual behavior

  • Test passes for wrong reason or fails mysteriously

The fix:

// ✅ GOOD: Mock at correct level
test('detects duplicate server', async () => {
  // Mock the slow part, preserve the behavior the test needs
  vi.mock('MCPServerManager'); // Just mock slow server startup

  await addServer(config); // Config written
  await addServer(config); // Duplicate detected ✓
});

Gate Function

BEFORE mocking any method: STOP - Don't mock yet

  1. Ask: "What side effects does the real method have?"
  2. Ask: "Does this test depend on any of those side effects?"
  3. Ask: "Do I fully understand what this test needs?"

IF the test depends on those side effects:
  Mock at a lower level (the actual slow/external operation)
  OR use test doubles that preserve the necessary behavior (see the sketch after this list)
  NOT the high-level method the test depends on

IF unsure what the test depends on:
  Run the test with the real implementation FIRST
  Observe what actually needs to happen
  THEN add minimal mocking at the right level

Red flags:

  • "I'll mock this to be safe"

  • "This might be slow, better mock it"

  • Mocking without understanding the dependency chain
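A sketch of such a test double, continuing the duplicate-server example above (writeServerConfig and its ./config module are hypothetical helpers standing in for whatever the real method's config-writing side effect is):

import { vi } from 'vitest';
import { writeServerConfig } from './config';

// Keep the side effect the test depends on (writing config),
// skip only the slow part (network tool discovery)
vi.mock('ToolCatalog', () => ({
  discoverAndCacheTools: vi.fn(async (config) => {
    await writeServerConfig(config); // preserved behavior
    // slow discovery deliberately skipped
  })
}));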

Anti-Pattern 4: Incomplete Mocks

The violation:

// ❌ BAD: Partial mock - only fields you think you need
const mockResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' }
  // Missing: metadata that downstream code uses
};

// Later: breaks when code accesses response.metadata.requestId

Why this is wrong:

  • Partial mocks hide structural assumptions - You only mocked fields you know about

  • Downstream code may depend on fields you didn't include - Silent failures

  • Tests pass but integration fails - Mock incomplete, real API complete

  • False confidence - Test proves nothing about real behavior

The Iron Rule: Mock the COMPLETE data structure as it exists in reality, not just fields your immediate test uses.

The fix:

// ✅ GOOD: Mirror real API completeness
const mockResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' },
  metadata: { requestId: 'req-789', timestamp: 1234567890 }
  // All fields the real API returns
};

Gate Function

BEFORE creating mock responses:
  Check: "What fields does the real API response contain?"

Actions:

  1. Examine the actual API response from docs/examples
  2. Include ALL fields the system might consume downstream
  3. Verify the mock matches the real response schema completely

Critical: If you're creating a mock, you must understand the ENTIRE structure. Partial mocks fail silently when code depends on omitted fields.

If uncertain: Include all documented fields. One way to make omissions fail loudly is sketched below.
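A minimal TypeScript sketch, assuming a hypothetical ApiResponse type that mirrors the documented schema:

interface ApiResponse {
  status: string;
  data: { userId: string; name: string };
  metadata: { requestId: string; timestamp: number };
}

// Annotating the mock with the real response type turns an incomplete
// mock into a compile-time error instead of a silent runtime failure
const mockResponse: ApiResponse = {
  status: 'success',
  data: { userId: '123', name: 'Alice' },
  metadata: { requestId: 'req-789', timestamp: 1234567890 }
};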

Anti-Pattern 5: Integration Tests as Afterthought

The violation:

✅ Implementation complete
❌ No tests written
"Ready for testing"

Why this is wrong:

  • Testing is part of implementation, not optional follow-up

  • TDD would have caught this

  • Can't claim complete without tests

The fix:

TDD cycle:

  1. Write failing test
  2. Implement to pass
  3. Refactor
  4. THEN claim complete
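A minimal illustration of steps 1-2, using a hypothetical slugify function (not from this skill):

import { test, expect } from 'vitest';

// Step 1: the test is written first and fails - slugify doesn't exist yet
test('slugify lowercases and hyphenates', () => {
  expect(slugify('Hello World')).toBe('hello-world');
});

// Step 2: implement just enough to make it pass
function slugify(input: string): string {
  return input.toLowerCase().replace(/\s+/g, '-');
}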

When Mocks Become Too Complex

Warning signs:

  • Mock setup longer than test logic

  • Mocking everything to make test pass

  • Mocks missing methods real components have

  • Test breaks when mock changes

Your human partner's question: "Do we need to be using a mock here?"

Consider: Integration tests with real components are often simpler than complex mocks, as the sketch below illustrates.
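A hedged sketch of the warning sign (all module names and components here are hypothetical):

import { render, screen } from '@testing-library/react';
import { vi, test, expect } from 'vitest';
import { Profile } from './Profile';

// Warning sign: mock scaffolding dwarfs the actual test logic
vi.mock('./UserService', () => ({ getUser: vi.fn().mockResolvedValue({ id: '1', name: 'Alice' }) }));
vi.mock('./AuthService', () => ({ isLoggedIn: vi.fn().mockReturnValue(true) }));
vi.mock('./ThemeProvider', () => ({ useTheme: vi.fn().mockReturnValue('dark') }));

test('shows user name', async () => {
  render(<Profile />);
  expect(await screen.findByText('Alice')).toBeInTheDocument();
});

// Often simpler: render the real providers and fake only the true
// external boundary (e.g. the network), then assert the same behavior.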

TDD Prevents These Anti-Patterns

Why TDD helps:

  • Write test first → Forces you to think about what you're actually testing

  • Watch it fail → Confirms the test exercises real behavior, not mocks

  • Minimal implementation → No test-only methods creep in

  • Real dependencies → You see what the test actually needs before mocking

If you're testing mock behavior, you violated TDD - you added mocks without watching the test fail against real code first.

Quick Reference

Anti-Pattern                       Fix
Assert on mock elements            Test real component or unmock it
Test-only methods in production    Move to test utilities
Mock without understanding         Understand dependencies first, mock minimally
Incomplete mocks                   Mirror real API completely
Tests as afterthought              TDD - tests first
Over-complex mocks                 Consider integration tests

Red Flags

  • Assertion checks for *-mock test IDs

  • Methods only called in test files

  • Mock setup is >50% of test

  • Test fails when you remove mock

  • Can't explain why mock is needed

  • Mocking "just to be safe"

The Bottom Line

Mocks are tools to isolate, not things to test.

If TDD reveals you're testing mock behavior, you've gone wrong.

Fix: Test real behavior or question why you're mocking at all.

Related Skills

  • systematic-debugging - Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes

  • executing-plans - Use when you have a written implementation plan to execute in a separate session with review checkpoints

  • verification-before-completion - Use when about to claim work is complete, fixed, or passing, before committing or creating PRs - requires running verification commands and confirming output before making any success claims; evidence before assertions always