sdlc-testing

Generate Testing & Quality documentation for SDLC projects. Compliant with BS ISO/IEC/IEEE 29119-3:2013 (supersedes IEEE 829:2008 and BS 7925-2:1998). Covers Software Test Plan, Test Case Specifications (with normative 29119-3 fields), V&V Plan, Validation Test Report, Incident Report, Test Completion Report, and Peer Review Reports. Use when establishing testing strategy, creating test documentation, or conducting quality validation.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "sdlc-testing" with this command: npx skills add peterbamuhigire/skills-web-dev/peterbamuhigire-skills-web-dev-sdlc-testing

Required Plugins

Superpowers plugin: MUST be active for all work using this skill. Use throughout the entire build pipeline — design decisions, code generation, debugging, quality checks, and any task where it offers enhanced capabilities. If superpowers provides a better way to accomplish something, prefer it over the default approach.

SDLC Testing Skill

Generate a complete Testing & Quality documentation suite for software development projects. This skill produces 7 documents that establish the testing baseline, define test cases, verify and validate the system, record incidents, report results and completion, and standardize peer reviews.

When to Use

  • Establishing a testing strategy for a SaaS project
  • Writing formal test plans aligned with BS ISO/IEC/IEEE 29119-3
  • Creating test case specifications with traceability to requirements
  • Planning verification and validation activities (V&V)
  • Documenting validation test results for release decisions
  • Standardizing peer review and code inspection processes
  • Preparing for phase gate reviews that require test evidence

When NOT to Use

  • Writing actual test code (unit tests, integration tests) -- use android-tdd skill instead
  • Validating AI-generated code -- use ai-error-handling skill (5-layer validation)
  • Planning security testing only -- use vibe-security-skill for security patterns
  • Planning a single feature -- use feature-planning skill (includes testing strategy)
  • Creating project plans -- use sdlc-planning skill (SDP, QA Plan, SRS)
  • Designing architecture -- use sdlc-design skill

Document Inventory

| # | Document | File | Purpose | Audience | Phase |
|---|----------|------|---------|----------|-------|
| 1 | Software Test Plan | templates/software-test-plan.md | Testing strategy, tools, environments, schedule, completion criteria | QA leads, PMs, devs | After SRS + SDD |
| 2 | Test Case Specifications | templates/test-case-specifications.md | Normative 29119-3 test cases: ID, objective, priority, traceability, preconditions, input, expected result | Test engineers, devs | During development |
| 3 | Validation & Verification Plan | templates/validation-verification-plan.md | V&V approach (built right + right product) | QA mgrs, PMs, compliance | After SRS + SDD |
| 4 | Validation Test Report | templates/validation-test-report.md | Test execution results and Go/No-Go release decision | PMs, stakeholders, QA | Before release |
| 5 | Peer Review Report | templates/peer-review-report.md | Code, design, and document review findings | Dev team, tech leads | Throughout SDLC |
| 6 | Incident Report | templates/incident-report.md | Anomaly record: ID, timing, context, description, impact, urgency, status | QA, dev leads | During execution |
| 7 | Test Completion Report | templates/test-completion-report.md | Test summary, deviations, completion criteria met, residual risks, lessons learned | PMs, stakeholders, compliance | End of test phase |

Standards Basis

This skill generates documentation compliant with BS ISO/IEC/IEEE 29119-3:2013, the current international standard for software test documentation. It supersedes IEEE 829:2008 and BS 7925-2:1998. Key structural difference: 29119-3 defines a strict document hierarchy — Organizational Test Strategy → Project Test Plan → Sub-process Test Plans → Test Design Specification → Test Case Specification → Incident Report → Test Completion Report. Annex T provides clause-level cross-walks from legacy standards for migration.

Normative Test Case Fields (BS 29119-3 §7.3): Every test case must include: (1) Unique ID, (2) Objective/Purpose, (3) Priority (H/M/L), (4) Traceability to requirement ID, (5) Preconditions (exact system state), (6) Input (exact stimulus), (7) Expected Result (deterministic pass/fail — the test oracle), (8) Actual Result (populated during execution), (9) Test Result (Pass / Incident Report number).

Test Oracle Rule: Every expected result must be specific enough to yield an unambiguous Pass or Fail without interpretation. If the expected result depends on judgment ("the response looks reasonable"), the requirement has not been adequately specified — flag [VERIFIABILITY-FAIL] and return to Skill 05 for clarification.
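The nine normative fields and the oracle rule can be sketched together. This is an illustrative Python sketch, not part of the standard: the `TestCase` field names and the vague-wording pattern are assumptions chosen for the example.

```python
from dataclasses import dataclass
import re

@dataclass
class TestCase:
    tc_id: str             # (1) Unique ID, e.g. TC-AUTH-FUNC-001
    objective: str         # (2) Objective / purpose
    priority: str          # (3) H / M / L
    traces_to: str         # (4) Traceability to a requirement ID, e.g. FR-012
    preconditions: str     # (5) Exact system state before execution
    test_input: str        # (6) Exact stimulus
    expected_result: str   # (7) Deterministic pass/fail oracle
    actual_result: str = ""  # (8) Populated during execution
    test_result: str = ""    # (9) "Pass" or an Incident Report number

# Heuristic for oracles that need human judgment (illustrative word list)
VAGUE_ORACLE = re.compile(r"looks|seems|reasonable|appropriate", re.I)

def check_oracle(tc: TestCase) -> list[str]:
    """Flag expected results that cannot yield an unambiguous Pass/Fail."""
    if VAGUE_ORACLE.search(tc.expected_result):
        return [f"[VERIFIABILITY-FAIL] {tc.tc_id}: oracle requires interpretation"]
    return []

tc = TestCase("TC-AUTH-FUNC-001", "Verify login rejects bad password", "H",
              "FR-012", "User alice exists; account unlocked",
              "POST /login with wrong password",
              "HTTP 401 with error code AUTH-003 within 500 ms")
print(check_oracle(tc))  # → []
```

A deterministic oracle like "HTTP 401 with error code AUTH-003" passes the check; "the response looks reasonable" would be flagged for return to requirements clarification.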

Testing Philosophy

TDD-First Development

All production code follows the Red-Green-Refactor cycle. Tests are written before implementation. Reference the android-tdd skill for the complete TDD workflow.

Test Pyramid (70/20/10)

        /  UI/E2E  \       10% - Compose Testing, Espresso, browser E2E
       /------------\
      / Integration  \     20% - Room+MockWebServer, API with real DB
     /----------------\
    /   Unit Tests     \   70% - JUnit 5, MockK, PHPUnit (fast, isolated)
   /====================\

Shift-Left Testing

Move testing activities as early as possible in the SDLC:

| Activity | Traditional Phase | Shift-Left Phase |
|----------|-------------------|------------------|
| Unit testing | After coding | During coding (TDD) |
| Security testing | Pre-release | During design + development |
| Performance testing | Pre-release | During development (benchmarks) |
| Code review | After feature complete | Per-commit (PR-based) |
| Requirements review | After SRS finalized | During requirements gathering |

Generation Workflow

Generate documents in this order. Each builds on the previous.

Prerequisite: SRS from sdlc-planning + SDD from sdlc-design
    |
Step 1: Software Test Plan (overall strategy)
    |
Step 2: Validation & Verification Plan (V&V approach)
    |
Step 3: Test Case Specifications (detailed test cases)
    |
    [--- Development & testing execution happens here ---]
    |
Step 4: Peer Review Reports (during development)
    |
Step 5: Incident Reports (as anomalies are detected during execution)
    |
Step 6: Validation Test Report (before release)
    |
Step 7: Test Completion Report (at phase close)

Prerequisites

| Input | Source | Required? |
|-------|--------|-----------|
| Software Requirements Spec (SRS) | sdlc-planning output | Yes |
| Software Design Document (SDD) | sdlc-design output | Recommended |
| Quality Assurance Plan | sdlc-planning output | Recommended |
| Risk Management Plan | sdlc-planning output | Recommended |
| Feature specifications | feature-planning output | If feature-level |

Testing Layers Overview

Unit Testing

| Platform | Framework | Scope | Speed |
|----------|-----------|-------|-------|
| Android | JUnit 5 + MockK + Turbine | ViewModels, UseCases, Repositories, Mappers | <1ms each |
| PHP | PHPUnit | Services, validators, business logic, helpers | <10ms each |

Integration Testing

| Platform | Framework | Scope |
|----------|-----------|-------|
| Android | Room in-memory DB + MockWebServer | DAO queries, API contracts, auth flows |
| PHP | PHPUnit + real MySQL test DB | API endpoints, stored procedures, multi-table ops |

UI / E2E Testing

| Platform | Framework | Scope |
|----------|-----------|-------|
| Android | Compose Testing + Espresso | Screen rendering, navigation, user flows |
| Web | Browser testing (manual + automated) | CRUD workflows, form validation, responsive layout |

User Acceptance Testing (UAT)

Distinguish Alpha and Beta testing explicitly in the Test Plan:

| Stage | Location | Testers | Focus |
|-------|----------|---------|-------|
| Alpha | Controlled environment (internal/QA lab) | Internal testers, stakeholders | Functional correctness, defect discovery |
| Beta | Real-world environment (staging/limited production) | Selected external users | Real-world usability, edge-case discovery, performance under real load |

Seven-Step Defect Resolution Protocol (Rex Black, 2009)

Every bug report triggers this handoff sequence. The boundary between Step 3 and Step 4 is the critical management line — testers own isolation; developers own debugging.

| Step | Owner | Action |
|------|-------|--------|
| 1 | Tester | Reproduce — determine the exact minimal sequence; check for intermittence |
| 2 | Tester | Discriminate — is the anomaly a defect in the test itself or in the system under test? |
| 3 | Tester | Isolate — identify external factors (config, data, workflows) affecting symptoms |
| 4 | Developer | Root cause — find the cause in code, hardware, network, or environment |
| 5 | Developer | Repair — fix without introducing new problems |
| 6 | Developer | Verify fix — confirm the fix is clean before handoff |
| 7 | Tester | Confirm + Regression — does it pass the failing test? Does everything else still work? |

If the fix fails Step 7: reopen the bug report. If it passes but breaks a different test: open a new bug report.
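The handoff boundary and the Step-7 closing rule can be captured in a few lines. This is a minimal sketch; the function names and return strings are illustrative, not part of Black's protocol.

```python
# Step -> (owner, action), following the table above
PROTOCOL = {
    1: ("Tester", "Reproduce"),
    2: ("Tester", "Discriminate"),
    3: ("Tester", "Isolate"),
    4: ("Developer", "Root cause"),
    5: ("Developer", "Repair"),
    6: ("Developer", "Verify fix"),
    7: ("Tester", "Confirm + Regression"),
}

def owner(step: int) -> str:
    return PROTOCOL[step][0]

def step7_outcome(original_test_passes: bool, regression_passes: bool) -> str:
    """Apply the closing rule: reopen vs. new bug report vs. close."""
    if not original_test_passes:
        return "reopen original bug report"
    if not regression_passes:
        return "close original; open NEW bug report for the regression"
    return "close bug report"

# The critical management line sits between steps 3 and 4:
assert owner(3) == "Tester" and owner(4) == "Developer"
print(step7_outcome(True, False))
```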

Regression Testing

Regression testing is a first-class test type and must be documented separately in the Test Plan. Define: the regression suite scope (which previously-passing test cases are re-run), the trigger conditions (every PR merge, every release candidate), and the acceptable pass rate before proceeding.
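A documented pass-rate threshold makes the regression gate mechanical. A hedged sketch follows; the 98% threshold is an example value the Test Plan would set, not a prescription from the standard.

```python
def regression_gate(results: dict[str, bool], threshold: float = 0.98) -> bool:
    """results maps previously-passing test case IDs to this run's pass/fail.

    Returns True when the pass rate meets the documented threshold.
    """
    rate = sum(results.values()) / len(results)
    print(f"regression pass rate: {rate:.1%} (threshold {threshold:.0%})")
    return rate >= threshold

# Example regression suite run after a release-candidate build
suite = {"TC-AUTH-FUNC-001": True, "TC-AUTH-FUNC-002": True,
         "TC-PAY-FUNC-007": False, "TC-RPT-FUNC-003": True}
print(regression_gate(suite))  # 3/4 = 75% → False: do not proceed
```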

Test Data Management

The Test Plan must include a ## Test Data Management section covering: how test fixtures are created, how tenant isolation is maintained in test data (separate franchise_id values per test scenario), how sensitive data is anonymized in non-production environments, and who owns test data lifecycle.
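A fixture factory can enforce two of these rules directly: distinct `franchise_id` per scenario and anonymised PII. This is an illustrative sketch; the field names and the hashing-based anonymisation rule are assumptions, not the skill's prescribed implementation.

```python
import hashlib
import itertools

_franchise_seq = itertools.count(1)  # each scenario gets the next tenant ID

def make_tenant_fixture(scenario: str, customer_email: str) -> dict:
    """Create test data isolated to its own tenant, with PII anonymised."""
    fid = next(_franchise_seq)
    # Anonymise: deterministic pseudonym, never the real address
    pseudonym = hashlib.sha256(customer_email.encode()).hexdigest()[:8]
    return {
        "scenario": scenario,
        "franchise_id": fid,
        "customer_email": f"user-{pseudonym}@test.invalid",
    }

normal = make_tenant_fixture("normal", "alice@example.com")
boundary = make_tenant_fixture("boundary", "bob@example.com")
assert normal["franchise_id"] != boundary["franchise_id"]  # isolation holds
assert "alice" not in normal["customer_email"]             # PII anonymised
```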

Test Data Readiness Report (B-08)

Before test execution begins, confirm:

  • Test fixtures created for all scenarios (normal, boundary, error)
  • Tenant isolation verified — each test scenario uses a distinct franchise_id (or equivalent)
  • Sensitive production data anonymised in all non-production environments
  • Test data owner identified and data lifecycle documented
  • Rollback/reset procedure defined for test data after execution

Test Environment Readiness Report (B-08)

Before test execution begins, confirm:

  • All hardware/software components match production specifications
  • All software components under formal CM control with release notes
  • Test environment access provisioned for all testers
  • Monitoring and logging enabled in test environment
  • Smoke test completed successfully (entry criterion for system test phase)
  • Config ID documented (coded identifier for the exact environment composition)

Security Testing

| Area | Method | Reference |
|------|--------|-----------|
| Tenant isolation | Automated + manual: cross-tenant access denied | vibe-security-skill |
| Auth bypass | Token manipulation, session hijacking attempts | dual-auth-rbac |
| Injection | SQL injection, XSS, CSRF payloads | OWASP Top 10 |
| Data exposure | API response auditing, error message review | vibe-security-skill |

Performance Testing

| Metric | Target | Tool |
|--------|--------|------|
| API response time | < 500ms (P95) | curl timing, load test tools |
| Web page load | < 2s (first contentful paint) | Browser DevTools, Lighthouse |
| Android cold start | < 3s | Android Profiler |
| Database query | < 100ms (P95) | MySQL slow query log, EXPLAIN |

Cross-References to Existing Skills

Upstream Skills (use BEFORE this skill)

| Skill | Relationship |
|-------|--------------|
| sdlc-planning | Provides SRS (requirement IDs for traceability), QA Plan (quality standards), Risk Plan (test-specific risks). |
| sdlc-design | Provides SDD (architecture to verify), database design (schema to test), API design (contracts to validate). |
| project-requirements | Raw requirements gathered via interview. Feed into SRS before creating test docs. |

Parallel Skills (use ALONGSIDE this skill)

| Skill | Relationship |
|-------|--------------|
| android-tdd | Actual TDD implementation patterns (Red-Green-Refactor, layer-specific tests). This skill documents; android-tdd implements. |
| ai-error-handling | 5-layer validation stack for AI-generated code. Complements this skill's formal V&V processes. |
| ai-error-prevention | "Trust but verify" patterns. Use alongside peer review processes. |
| vibe-security-skill | Security testing patterns, OWASP mapping. Reference in test plans and security test cases. |
| feature-planning | Feature-level testing strategy. This skill covers project-level testing. |

Downstream Skills (use AFTER this skill)

| Skill | Relationship |
|-------|--------------|
| google-play-store-review | Play Store compliance testing. Uses test results from this skill's reports. |

Sibling SDLC Skills

| Skill | Phase | Status |
|-------|-------|--------|
| sdlc-planning | Planning & Management | Available |
| sdlc-design | Design & Architecture | Available |
| sdlc-testing | Testing & Quality | This Skill |
| sdlc-user-deploy | Delivery & Deployment | Available |

Adaptation Rules

SaaS vs Standalone

| Aspect | Multi-Tenant SaaS | Standalone App |
|--------|-------------------|----------------|
| Tenant isolation tests | Required (franchise_id in every query) | Not applicable |
| RBAC test cases | Full matrix (super_admin, owner, staff) | Simple role tests |
| Cross-tenant security | Dedicated test suite | Omit |
| Test data setup | Per-tenant fixtures (franchise_id = 1, 2) | Single dataset |

Android + Web vs Web-Only

| Aspect | Android + Web | Web-Only |
|--------|---------------|----------|
| Test frameworks | JUnit 5, MockK, Compose Testing + PHPUnit | PHPUnit only |
| UI testing | Compose tests + browser tests | Browser tests only |
| Offline testing | Room caching, network error scenarios | N/A |
| Device matrix | API levels 26-35, multiple screen sizes | Browser matrix |

CI/CD Integration

| Stage | Trigger | Tests Run |
|-------|---------|-----------|
| Pre-commit | Local commit | Unit tests (fast) |
| PR validation | PR created/updated | Unit + integration tests |
| Merge to develop | PR merged | Full test suite + coverage |
| Release candidate | Tag created | Full suite + security + performance |
| Post-deploy | Production deploy | Smoke tests only |

Quality Checklist

  • All 7 documents generated (or justified why one was skipped)
  • Each document stays under 500 lines
  • Test Plan references SRS requirement IDs for traceability
  • Test Plan includes: risk register, completion criteria, suspension/resumption criteria, communication plan, environment requirements, roles (per 29119-3 §6.2)
  • Test cases use naming convention TC-[MODULE]-[TYPE]-[###]
  • Every test case has the 9 normative 29119-3 fields (ID, objective, priority, traceability, preconditions, input, expected result, actual result, test result)
  • Every expected result is a deterministic test oracle — no judgment calls
  • Every SRS requirement ID (FR-xxx, NFR-xxx) is traced to at least one test case
  • V&V Plan covers both verification (built right) and validation (right product)
  • Test Plan distinguishes Alpha UAT (internal controlled) from Beta UAT (real-world users)
  • Regression testing section covers: suite scope, trigger conditions, pass rate threshold
  • Test Data Management section covers: fixture creation, tenant isolation, data anonymization
  • Test Data Readiness Report completed before execution begins
  • Test Environment Readiness Report completed before execution begins
  • Test Report includes pass rates, coverage, defect resolution protocol, and Go/No-Go recommendation
  • Incident Report template populated for every detected anomaly
  • Test Completion Report produced at phase close: summary, deviations, residual risks, lessons learned
  • Peer Review Report includes tech-stack-specific checklists
  • Multi-tenant isolation addressed in test cases and V&V plan
  • Test environments match deployment environments (Windows/Ubuntu/Debian)
  • Security test cases reference vibe-security-skill OWASP mapping
  • Performance benchmarks have numeric targets (not vague language)
  • Documents cross-reference each other and upstream SRS/SDD

Anti-Patterns

| Anti-Pattern | Why It Fails | Do This Instead |
|--------------|--------------|-----------------|
| No formal test plan | Ad-hoc testing misses critical paths | Write STP before testing begins |
| Test cases without expected results | Can't determine pass/fail | Every TC has explicit expected results (test oracle) |
| Vague expected results ("response looks OK") | Not a test oracle; tester must interpret | State exact output: value, format, timing, error code |
| No traceability to requirements | Can't prove coverage | Map every TC to FR-xxx or NFR-xxx |
| Testing only happy paths | Edge cases cause production failures | Include negative, boundary, and error cases |
| No test data strategy | Inconsistent, flaky tests | Define fixtures, factories, seed data with tenant isolation |
| Skipping security testing | Vulnerabilities ship to production | Include security test suite in every release |
| No peer review process | Bugs caught late, inconsistent code | Standardize reviews with checklists |
| Rubber-stamp reviews | Reviews provide no value | Require findings documented, metrics tracked |
| Testing in production only | Users find bugs, not testers | Test in staging first, smoke test prod |
| No regression suite | Passing tests break silently between releases | Define regression suite, trigger on every RC |
| No incident tracking | Anomalies lose context | Open an Incident Report for every anomaly during execution |
| No Test Completion Report | Phase never formally closes | Produce TCR before passing to next lifecycle phase |

Template Files

Each template provides the complete structure, section-by-section guidance, examples tailored to the tech stack, anti-patterns, and a quality checklist.

  1. Software Test Plan
  2. Test Case Specifications
  3. Validation & Verification Plan
  4. Validation Test Report
  5. Peer Review / Inspection Report
  6. Incident Report
  7. Test Completion Report

Related: sdlc-planning | android-tdd | vibe-security-skill | ai-error-handling

Last Updated: 2026-03-15 (upgraded to BS ISO/IEC/IEEE 29119-3:2013 per Winston, BS Standards; strengthened per Adjei 2023, Splunk Product is Docs)
