
Integration Test Runner

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "running-integration-tests" with this command: npx skills add jeremylongshore/claude-code-plugins-plus-skills/jeremylongshore-claude-code-plugins-plus-skills-running-integration-tests


Overview

Execute integration tests that validate interactions between multiple components, services, and external systems. These tests exercise real database queries, cross-service API calls, message queue publishing and consuming, and file system operations without mocking the integration boundary.

Prerequisites

  • Integration test framework installed (Jest + Supertest, pytest, JUnit 5, or Go testing)

  • External services running (database, cache, message queue) via Docker Compose or Testcontainers

  • Database migrations applied and seed data loaded

  • Test configuration with connection strings pointing to test instances (not production)

  • Sufficient timeout settings (integration tests are slower than unit tests)

Instructions

  • Identify integration boundaries to test:

      • API routes with database queries (controller-to-repository flow).

      • Service-to-service HTTP communication.

      • Message queue producers and consumers.

      • File upload/download with storage services.

      • Cache read/write operations (Redis, Memcached).

  • Set up test infrastructure:

      • Start required services using docker-compose -f docker-compose.test.yml up -d.

      • Or use Testcontainers to programmatically start/stop containers per test suite.

      • Run database migrations against the test database.

      • Seed baseline data required by the test suite.
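As an illustrative sketch of the first option (service names, image tags, ports, and credentials here are assumptions, not taken from this skill), a docker-compose.test.yml for the dependencies above might look like:

```yaml
# docker-compose.test.yml (sketch; adjust images and credentials to your stack)
services:
  postgres:
    image: postgres:16            # pin versions to avoid local/CI drift
    environment:
      POSTGRES_DB: app_test
      POSTGRES_USER: test
      POSTGRES_PASSWORD: test
    ports:
      - "5433:5432"               # non-default host port keeps tests off any local instance
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U test"]
      interval: 2s
      retries: 15
  redis:
    image: redis:7
    ports:
      - "6380:6379"
```

The healthcheck lets CI (or a wait script) block until the database is actually accepting connections, not merely started.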

  • Write integration tests following these patterns:

      • API integration: Send HTTP requests via Supertest and assert responses including headers, status, and body.

      • Database integration: Execute the service method and verify database state with direct queries.

      • Event integration: Publish a message and verify the consumer processes it correctly.

      • Use real implementations, not mocks, at the integration boundary.
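The event-integration pattern can be sketched in miniature with Python's standard library, using an in-process queue as a stand-in for a real broker (RabbitMQ, Kafka, etc.) — the shape is the same: publish, then verify the consumer's observable effect rather than its internals.

```python
# Sketch of the publish-then-verify pattern; queue.Queue stands in for a
# real message broker, and the `processed` list stands in for the
# consumer's side effect (e.g. a database write).
import queue
import threading

bus = queue.Queue()
processed = []

def consumer():
    # A real consumer would deserialize the message and write to a store.
    msg = bus.get(timeout=5)
    processed.append(msg)

worker = threading.Thread(target=consumer)
worker.start()

bus.put({"event": "user.created", "id": 1})   # act: publish the event
worker.join(timeout=5)                        # wait for processing to finish

assert processed == [{"event": "user.created", "id": 1}]
```

Against a real broker, replace the `join` with polling on the consumer's side effect, since you cannot join the broker's delivery thread.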

  • Manage test data isolation:

      • Wrap each test in a database transaction and roll back after the assertions.

      • Or truncate tables in beforeEach and re-seed the minimum required data.

      • Use unique identifiers per test to avoid collisions in shared databases.
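One minimal sketch of the unique-identifier approach (the helper names and domain are illustrative assumptions): derive a fresh suffix per call so concurrent tests against a shared database never collide on unique columns.

```python
# Sketch: collision-free identifiers for tests sharing one database.
import uuid

def unique_email(prefix: str = "user") -> str:
    """Return an email address unique to this test invocation."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}@test.example"

def unique_name(prefix: str = "fixture") -> str:
    """Return a unique name suitable for seeded rows."""
    return f"{prefix}-{uuid.uuid4().hex[:8]}"
```

This also makes leftover rows from a crashed run harmless, since later runs never reuse the same values.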

  • Handle asynchronous operations:

      • Poll for expected state changes with a timeout (e.g., wait for the queue consumer to process).

      • Use event listeners or callbacks to signal completion.

      • Set generous timeouts (10-30 seconds) for external service interactions.
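The poll-with-timeout idea above can be captured in a small helper — a sketch, not part of this skill's code; re-checking the expected state is what makes async assertions deterministic, where a fixed sleep is either too short (flaky) or too long (slow).

```python
# Sketch: poll until a condition holds or a deadline passes.
import time

def wait_for(predicate, timeout: float = 10.0, interval: float = 0.1) -> bool:
    """Repeatedly call predicate() until it returns truthy or timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False

# Usage (queue_depth is a hypothetical function of your system under test):
#   assert wait_for(lambda: queue_depth() == 0, timeout=30)
```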

  • Run integration tests separately from unit tests:

      • Tag with @integration or place in a separate directory (tests/integration/).

      • Configure CI to run integration tests in a dedicated job with service containers.

      • Generate test results in JUnit XML format for CI reporting.
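For pytest projects, the tagging and reporting steps above might be wired up as follows (the marker name and paths are assumptions; --junitxml is pytest's built-in JUnit XML option):

```ini
# pytest.ini (sketch)
[pytest]
markers =
    integration: tests that exercise real services; run in a dedicated CI job

# CI job invocation (sketch):
#   pytest -m integration --junitxml=reports/integration-junit.xml tests/integration/
```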

Output

  • Integration test files in tests/integration/ organized by feature

  • Docker Compose test configuration for service dependencies

  • Database seed and teardown scripts

  • JUnit XML test results for CI consumption

  • Integration test coverage report showing tested integration points

Error Handling

  • Error: Connection refused to database. Cause: database container not yet ready. Solution: add wait-for-it.sh or health-check polling before running tests; increase the startup timeout.

  • Error: Foreign key constraint violation. Cause: test data inserted in the wrong order, or incomplete cleanup. Solution: seed data in dependency order; use cascading deletes in teardown; wrap tests in transactions.

  • Error: Flaky test due to a race condition. Cause: async consumer has not processed the message yet. Solution: use polling with a timeout instead of a fixed sleep; add event completion callbacks.

  • Error: Test passes locally, fails in CI. Cause: CI uses different service versions or network config. Solution: pin Docker image versions; verify environment variables match; check CI service container logs.

  • Error: Slow test suite (>5 minutes). Cause: too many integration tests or insufficient parallelization. Solution: run independent test suites in parallel CI jobs; use Testcontainers reuse mode; limit seed data.
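For the "connection refused" case, one hedged sketch of health-check polling in Python (an alternative to wait-for-it.sh; host and port are whatever your compose file exposes):

```python
# Sketch: retry a TCP connect until the service's port accepts connections,
# instead of assuming the container is ready when `docker-compose up` returns.
import socket
import time

def wait_for_port(host: str, port: int, timeout: float = 30.0) -> bool:
    """Return True once host:port accepts TCP connections, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            time.sleep(0.5)
    return False
```

Call it once in the suite's global setup, and fail fast with a clear message if it returns False.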

Examples

Supertest API integration test:

import request from 'supertest';
import { app } from '../src/app';
import { db } from '../src/database';

describe('POST /api/users', () => {
  beforeEach(async () => {
    await db.query('DELETE FROM users');
  });

  afterAll(async () => {
    await db.end();
  });

  it('creates a user and persists to database', async () => {
    const response = await request(app)
      .post('/api/users')
      .send({ name: 'Alice', email: 'alice@example.com' })
      .expect(201); // HTTP 201 Created

    expect(response.body).toMatchObject({ name: 'Alice' });

    const row = await db.query('SELECT * FROM users WHERE email = $1', ['alice@example.com']);
    expect(row.rows).toHaveLength(1);
  });
});

pytest with database transaction rollback:

import pytest

from myapp.models import User  # assumed import path; User was referenced but not imported
from myapp.services import UserService


@pytest.fixture
def db_session(test_database):
    session = test_database.begin_nested()
    yield session
    session.rollback()


def test_create_user_persists_to_db(db_session):
    service = UserService(db_session)
    user = service.create(name="Alice", email="alice@test.com")
    assert user.id is not None
    found = db_session.query(User).filter_by(email="alice@test.com").one()
    assert found.name == "Alice"
