# frontend-testing

This skill enables Claude to generate high-quality, comprehensive frontend tests following established conventions and best practices.


## When to Apply This Skill

Apply this skill when the user:

  • Asks to write tests for a component, hook, or utility

  • Asks to review existing tests for completeness

  • Mentions Vitest, React Testing Library, RTL, or spec files

  • Requests test coverage improvement

  • Mentions testing, unit tests, or integration tests for frontend code

  • Wants to understand testing patterns in the frontend codebase

Do NOT apply when:

  • User is asking about E2E tests (Playwright)

  • User is only asking conceptual questions without code context

## Quick Reference

### Tech Stack

| Tool | Version | Purpose |
| --- | --- | --- |
| Vitest | 4+ | Test runner |
| React Testing Library | 16+ | Component testing |
| jsdom | — | Test environment |
| TypeScript | 5+ | Type safety |

Note: keep this list up to date with the project's dependencies.

### Key Commands

Always prefer running specific tests over the entire suite for faster feedback:

```bash
# Run ALL tests (avoid during development)
npx vitest run

# ✅ PREFERRED: Run a specific file
npx vitest run src/components/Button.spec.tsx

# ✅ PREFERRED: Run tests whose names match a pattern
# (Vitest filters by test name with -t / --testNamePattern)
npx vitest run -t "Button"
npx vitest run -t "should render"

# ✅ PREFERRED: Run a specific describe block
npx vitest run -t "Button > Rendering"

# ✅ Run tests in a directory
npx vitest run src/components/

# ✅ Run a single test by name
npx vitest run -t "should disable button when loading"
```

### Watch Mode (Background Testing)

Use watch mode for efficient iterative development:

```bash
# ✅ Watch mode - reruns on file changes
npx vitest

# ✅ Watch a specific file
npx vitest src/components/Button.spec.tsx

# ✅ Watch tests matching a name pattern
npx vitest -t "Button"
```

### File Naming

  • Test files: `ComponentName.spec.tsx` (same directory as the component)
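This convention can be expressed as a small helper (a sketch — `specPath` is illustrative and not part of the project):

```typescript
// Hypothetical helper: derive the spec filename from a component path.
// Spec files live next to their source file with a .spec suffix.
const specPath = (sourcePath: string): string =>
  sourcePath.replace(/\.(tsx|ts)$/, '.spec.$1')

specPath('src/components/Button.tsx') // → 'src/components/Button.spec.tsx'
specPath('src/utils/format.ts')       // → 'src/utils/format.spec.ts'
```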

## Test Structure Template

```tsx
import { render, screen, fireEvent, waitFor } from '@testing-library/react'
import Component from './index'

// ✅ Import real project components (DO NOT mock these)
// import Loading from '@/app/components/base/loading'
// import { ChildComponent } from './child-component'

// ✅ Mock external dependencies only
vi.mock('@/service/api')
vi.mock('next/navigation', () => ({
  useRouter: () => ({ push: vi.fn() }),
  usePathname: () => '/test',
}))

// Shared state for mocks (if needed)
let mockSharedState = false

describe('ComponentName', () => {
  beforeEach(() => {
    vi.clearAllMocks()      // ✅ Reset mocks BEFORE each test
    mockSharedState = false // ✅ Reset shared state
  })

  // Rendering tests (REQUIRED)
  describe('Rendering', () => {
    it('should render without crashing', () => {
      // Arrange
      const props = { title: 'Test' }

      // Act
      render(<Component {...props} />)

      // Assert
      expect(screen.getByText('Test')).toBeInTheDocument()
    })
  })

  // Props tests (REQUIRED)
  describe('Props', () => {
    it('should apply custom className', () => {
      render(<Component className="custom" />)
      expect(screen.getByRole('button')).toHaveClass('custom')
    })
  })

  // User Interactions
  describe('User Interactions', () => {
    it('should handle click events', () => {
      const handleClick = vi.fn()
      render(<Component onClick={handleClick} />)

      fireEvent.click(screen.getByRole('button'))

      expect(handleClick).toHaveBeenCalledTimes(1)
    })
  })

  // Edge Cases (REQUIRED)
  describe('Edge Cases', () => {
    it('should handle null data', () => {
      render(<Component data={null} />)
      expect(screen.getByText(/no data/i)).toBeInTheDocument()
    })

    it('should handle empty array', () => {
      render(<Component items={[]} />)
      expect(screen.getByText(/empty/i)).toBeInTheDocument()
    })
  })
})
```

## Testing Workflow (CRITICAL)

### ⚠️ Incremental Approach Required

NEVER generate all test files at once. For complex components or multi-file directories:

  1. Analyze & Plan: List all files, order by complexity (simple → complex)

  2. Process ONE at a time: Write test → Run test → Fix if needed → Next

  3. Verify before proceeding: Do NOT continue to the next file until the current one passes

For each file:

```
┌────────────────────────────────────────────────┐
│ 1. Write test                                  │
│ 2. Run: npm run test-unit <file>.spec.tsx      │
│ 3. PASS? → Mark complete, next file            │
│    FAIL? → Fix first, then continue            │
└────────────────────────────────────────────────┘
```

### Complexity-Based Order

Process in this order for multi-file testing:

  • 🟢 Utility functions (simplest)

  • 🟢 Custom hooks

  • 🟡 Simple components (presentational)

  • 🟡 Medium components (state, effects)

  • 🔴 Complex components (API, routing)

  • 🔴 Integration tests (index files - last)
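One way to sketch this ordering is a rough complexity rank (the heuristics and file names below are illustrative assumptions, not project conventions):

```typescript
// Hypothetical heuristic: lower rank = test first.
const rank = (file: string): number => {
  if (/utils|helpers/.test(file)) return 0 // utility functions (simplest)
  if (/\buse[A-Z]/.test(file)) return 1    // custom hooks (useX naming)
  if (/index\./.test(file)) return 3       // integration tests on index files (last)
  return 2                                 // components in between
}

const files = ['index.tsx', 'Button.tsx', 'useToggle.ts', 'utils.ts']
const ordered = [...files].sort((a, b) => rank(a) - rank(b))
// ordered: ['utils.ts', 'useToggle.ts', 'Button.tsx', 'index.tsx']
```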

### When to Refactor First

  • Medium Complexity: Break into smaller pieces before testing

  • 500+ lines: Consider splitting before testing

  • Many dependencies: Extract logic into hooks first

## Testing Strategy

### Path-Level Testing (Directory Testing)

When assigned to test a directory/path, test ALL content within that path:

  • Test all components, hooks, utilities in the directory (not just index file)

  • Use incremental approach: one file at a time, verify each before proceeding

  • Goal: 100% coverage of ALL files in the directory

### Integration Testing First

Prefer integration testing when writing tests for a directory:

  • ✅ Import real project components directly (including base components and siblings)

  • ✅ Only mock: API services (`@/service/*`), `next/navigation`, complex context providers

  • ❌ DO NOT mock base components (`@/app/components/base/*`)

  • ❌ DO NOT mock sibling/child components in the same directory
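The mock/don't-mock rule can be summarized as a predicate (a sketch — the path prefixes are assumptions based on the examples above, and "complex context providers" still needs case-by-case judgment):

```typescript
// Hypothetical predicate encoding the rule: mock API services and
// next/navigation; keep base, sibling, and child components real.
const shouldMock = (specifier: string): boolean =>
  specifier.startsWith('@/service/') || specifier === 'next/navigation'

shouldMock('@/service/api')                 // → true  (API service)
shouldMock('next/navigation')               // → true  (router)
shouldMock('@/app/components/base/loading') // → false (real base component)
shouldMock('./child-component')             // → false (real sibling)
```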

## Core Principles

  1. AAA Pattern (Arrange-Act-Assert)

Every test should clearly separate:

  • Arrange: Setup test data and render component

  • Act: Perform user actions

  • Assert: Verify expected outcomes
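The same three phases apply to plain utilities, where no rendering is involved (the `formatPrice` helper below is a made-up example):

```typescript
// Hypothetical utility under test.
const formatPrice = (cents: number): string => `$${(cents / 100).toFixed(2)}`

// Arrange: set up the input
const input = 1999
// Act: call the unit under test
const result = formatPrice(input)
// Assert: verify the observable output
// result === '$19.99'
```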

  2. Black-Box Testing

  • Test observable behavior, not implementation details

  • Use semantic queries (getByRole, getByLabelText)

  • Avoid testing internal state directly

  • Prefer pattern matching over hardcoded strings in assertions:

```ts
// ❌ Avoid: hardcoded text assertions
expect(screen.getByText('Loading...')).toBeInTheDocument()

// ✅ Better: role-based queries
expect(screen.getByRole('status')).toBeInTheDocument()

// ✅ Better: pattern matching
expect(screen.getByText(/loading/i)).toBeInTheDocument()
```

  3. Single Behavior Per Test

Each test verifies ONE user-observable behavior:

```tsx
// ✅ Good: One behavior
it('should disable button when loading', () => {
  render(<Button loading />)
  expect(screen.getByRole('button')).toBeDisabled()
})

// ❌ Bad: Multiple behaviors
it('should handle loading state', () => {
  render(<Button loading />)
  expect(screen.getByRole('button')).toBeDisabled()
  expect(screen.getByText('Loading...')).toBeInTheDocument()
  expect(screen.getByRole('button')).toHaveClass('loading')
})
```

  4. Semantic Naming

Use `should <behavior> when <condition>`:

```ts
it('should show error message when validation fails')
it('should call onSubmit when form is valid')
it('should disable input when isReadOnly is true')
```

## Required Test Scenarios

### Always Required (All Components)

  • Rendering: Component renders without crashing

  • Props: Required props, optional props, default values

  • Edge Cases: null, undefined, empty values, boundary conditions
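For a plain helper, those required edge cases look like this (the `itemsLabel` function is illustrative):

```typescript
// Hypothetical helper demonstrating null/undefined/empty handling.
const itemsLabel = (items: string[] | null | undefined): string => {
  if (items == null) return 'no data' // covers both null and undefined
  if (items.length === 0) return 'empty'
  return items.join(', ')
}

itemsLabel(null)       // → 'no data'
itemsLabel(undefined)  // → 'no data'
itemsLabel([])         // → 'empty'
itemsLabel(['a', 'b']) // → 'a, b'
```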

### Conditional (When Present)

| Feature | Test Focus |
| --- | --- |
| `useState` | Initial state, transitions, cleanup |
| `useEffect` | Execution, dependencies, cleanup |
| Event handlers | All onClick, onChange, onSubmit, keyboard |
| API calls | Loading, success, error states |
| Routing | Navigation, params, query strings |
| `useCallback`/`useMemo` | Referential equality |
| Context | Provider values, consumer behavior |
| Forms | Validation, submission, error display |

## Coverage Goals (Per File)

For each test file generated, aim for:

  • ✅ 100% function coverage

  • ✅ 100% statement coverage

  • ✅ >95% branch coverage

  • ✅ >95% line coverage

Note: For multi-file directories, process one file at a time with full coverage each. See `references/workflow.md`.
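These goals can be enforced in the Vitest config (a sketch assuming Vitest's `coverage.thresholds` option and the v8 provider; adjust to the project's actual setup):

```typescript
// vitest.config.ts (fragment) - hypothetical thresholds matching the goals above
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    environment: 'jsdom',
    coverage: {
      provider: 'v8',
      thresholds: {
        functions: 100,  // 100% function coverage
        statements: 100, // 100% statement coverage
        branches: 95,    // >95% branch coverage
        lines: 95,       // >95% line coverage
      },
    },
  },
})
```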

## Detailed Guides

For more detailed information, refer to:

  • `references/workflow.md` – Incremental testing workflow (MUST READ for multi-file testing)

  • `references/mocking.md` – Mock patterns and best practices

  • `references/async-testing.md` – Async operations and API calls

  • `references/common-patterns.md` – Frequently used testing patterns

  • `references/checklist.md` – Test generation checklist and validation steps

## Project Configuration

  • `vitest.config.ts` – Vitest configuration

  • `vitest.setup.ts` – Test environment setup

  • Modules are not mocked automatically. Global mocks live in `vitest.setup.ts` (for example `react-i18next`, `next/image`); mock other modules like `ky` or `mime` locally in test files.
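A global mock in `vitest.setup.ts` might look like this (a sketch — the exact `react-i18next` mock shape depends on how the project uses it):

```typescript
// vitest.setup.ts (fragment) - hypothetical global mock
import { vi } from 'vitest'

vi.mock('react-i18next', () => ({
  // Return the key itself so assertions can match on translation keys.
  useTranslation: () => ({ t: (key: string) => key }),
}))
```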
