Test Strategy Planning

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Install skill "test-strategy-planning" with this command: npx skills add melodic-software/claude-code-plugins/melodic-software-claude-code-plugins-test-strategy-planning

When to Use This Skill

Use this skill when:

  • Test Strategy Planning tasks - Creating comprehensive test strategy documents that follow the IEEE 829 structure, and planning the test approach, scope, resources, and success criteria for software projects

  • Planning or design - You need guidance on test strategy planning approaches

  • Best practices - You want to follow established patterns and standards

Overview

A test strategy defines the overall approach to testing a software system. It establishes the scope, objectives, resources, schedule, and success criteria for testing activities before development begins.

IEEE 829 Test Documentation Structure

| Document | Purpose |
| --- | --- |
| Test Plan | Master document defining test approach |
| Test Design Specification | Refinement of test approach for features |
| Test Case Specification | Individual test case details |
| Test Procedure Specification | Step-by-step execution procedures |
| Test Item Transmittal Report | Test deliverables handoff |
| Test Log | Chronological record of test execution |
| Test Incident Report | Documentation of anomalies |
| Test Summary Report | Overall test results and metrics |

Test Strategy Template

Test Strategy: [Project Name]

1. Introduction

1.1 Purpose

[Why this test strategy exists and what it covers]

1.2 Scope

In Scope:

  • [Feature/component 1]
  • [Feature/component 2]

Out of Scope:

  • [Feature/component X]
  • [Third-party integrations (unless specified)]

1.3 References

  • [Requirements document]
  • [Architecture document]
  • [Related standards]

2. Test Objectives

2.1 Business Objectives

  • [Business goal 1 → Testing coverage]
  • [Business goal 2 → Testing coverage]

2.2 Quality Objectives

| Quality Attribute | Target | Measurement |
| --- | --- | --- |
| Functional Correctness | 100% critical paths | All acceptance tests pass |
| Performance | < 200ms p95 response | Load test results |
| Security | No critical vulnerabilities | Security scan results |
| Reliability | 99.9% uptime | Chaos testing results |

3. Test Approach

3.1 Test Levels

| Level | Scope | Responsibility | Tools |
| --- | --- | --- | --- |
| Unit | Individual methods/classes | Developers | xUnit |
| Integration | Component interactions | Developers | xUnit + Testcontainers |
| System | End-to-end workflows | QA | Playwright |
| Acceptance | Business requirements | QA + PO | SpecFlow |

3.2 Test Types

| Type | Coverage | Approach |
| --- | --- | --- |
| Functional | All requirements | Requirement-based |
| Performance | Critical paths | Load/stress testing |
| Security | OWASP Top 10 | SAST + DAST + pentest |
| Usability | Key user journeys | Heuristic evaluation |
| Accessibility | WCAG 2.2 AA | Automated + manual |

3.3 Risk-Based Prioritization

| Risk Area | Likelihood | Impact | Test Priority |
| --- | --- | --- | --- |
| [Payment processing] | Medium | High | P1 - Extensive |
| [User authentication] | Low | High | P1 - Extensive |
| [Reporting] | Low | Medium | P2 - Standard |
| [Admin settings] | Low | Low | P3 - Basic |

4. Test Environment

4.1 Environment Strategy

| Environment | Purpose | Data | Refresh Cycle |
| --- | --- | --- | --- |
| Dev | Developer testing | Synthetic | On-demand |
| QA | Functional testing | Masked production | Weekly |
| Staging | Pre-prod validation | Production clone | Before release |
| Performance | Load testing | Scaled synthetic | Before release |

4.2 Infrastructure Requirements

  • [Server specifications]
  • [Network requirements]
  • [Third-party service access]

5. Test Data

5.1 Data Strategy

  • Synthetic data: Generated for unit/integration tests
  • Masked production data: For realistic QA testing
  • Performance data: Scaled to production volumes

5.2 Data Privacy

  • [Anonymization requirements]
  • [PII handling procedures]
  • [Data retention policies]

6. Entry and Exit Criteria

6.1 Entry Criteria

  • Requirements reviewed and approved
  • Test environment available
  • Test data prepared
  • Test cases reviewed
  • Build deployed to test environment

6.2 Exit Criteria

  • All P1 test cases executed
  • No critical defects open
  • Code coverage ≥ 80%
  • Performance targets met
  • Security scan passed
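Exit criteria like these can be enforced as an automated gate in CI rather than checked by hand. A hedged sketch, using the thresholds from this template; the field and function names are illustrative assumptions, not a standard API:

```python
from dataclasses import dataclass

@dataclass
class TestRunResults:
    p1_executed: int
    p1_total: int
    critical_defects_open: int
    line_coverage: float       # fraction, 0.0 - 1.0
    perf_targets_met: bool
    security_scan_passed: bool

def exit_criteria_met(r: TestRunResults) -> list[str]:
    """Return the list of unmet exit criteria (empty list means ship-ready)."""
    failures = []
    if r.p1_executed < r.p1_total:
        failures.append("not all P1 test cases executed")
    if r.critical_defects_open > 0:
        failures.append("critical defects open")
    if r.line_coverage < 0.80:
        failures.append("code coverage below 80%")
    if not r.perf_targets_met:
        failures.append("performance targets not met")
    if not r.security_scan_passed:
        failures.append("security scan failed")
    return failures
```

Returning the list of unmet criteria (rather than a bare boolean) lets the pipeline report exactly which gate blocked the release.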

7. Defect Management

7.1 Severity Levels

| Severity | Description | Resolution Time |
| --- | --- | --- |
| Critical | System unusable | 4 hours |
| High | Major feature broken | 1 day |
| Medium | Feature degraded | 1 week |
| Low | Minor issue | Next release |

7.2 Defect Workflow

  1. Tester logs defect with reproduction steps
  2. Dev lead triages and assigns
  3. Developer fixes and unit tests
  4. Tester verifies fix
  5. Defect closed or reopened

8. Test Deliverables

| Deliverable | Audience | Frequency |
| --- | --- | --- |
| Daily Test Status | Dev team | Daily |
| Test Summary Report | Management | Per sprint |
| Defect Metrics | All stakeholders | Weekly |
| Release Test Report | Release team | Per release |

9. Roles and Responsibilities

| Role | Responsibilities |
| --- | --- |
| Test Lead | Strategy, planning, reporting |
| QA Engineer | Test design, execution, defects |
| Developer | Unit tests, test support |
| Product Owner | Acceptance criteria, UAT |

10. Schedule

| Phase | Start | End | Milestone |
| --- | --- | --- | --- |
| Test Planning | [Date] | [Date] | Strategy approved |
| Test Design | [Date] | [Date] | Test cases ready |
| Test Execution | [Date] | [Date] | All tests run |
| UAT | [Date] | [Date] | Sign-off |

11. Risks and Mitigations

| Risk | Probability | Impact | Mitigation |
| --- | --- | --- | --- |
| Environment unavailable | Medium | High | Backup environment ready |
| Resource shortage | Low | Medium | Cross-training |
| Requirement changes | High | Medium | Change control process |

Test Approach Patterns

Risk-Based Testing

Prioritize testing based on:

  • Business criticality: Revenue-impacting features first

  • Technical complexity: Complex components need more testing

  • Change frequency: Frequently changed areas need regression focus

  • Defect history: Bug-prone areas need more attention
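These four factors can be combined into a single score for ranking areas in a regression plan. A minimal sketch; the 1-5 factor scale, the weights, and the example areas are illustrative assumptions:

```python
def risk_score(criticality: int, complexity: int, churn: int,
               defect_history: int,
               weights: tuple = (0.4, 0.2, 0.2, 0.2)) -> float:
    """Weighted risk score; each factor rated 1-5, higher means test first."""
    factors = (criticality, complexity, churn, defect_history)
    return sum(w * f for w, f in zip(weights, factors))

# Rank candidate areas by descending score to order the regression effort.
areas = {"payments": (5, 4, 3, 4), "admin": (1, 2, 1, 1)}
ranked = sorted(areas, key=lambda a: risk_score(*areas[a]), reverse=True)
```

Weighting business criticality highest reflects the "revenue-impacting features first" rule above; teams would tune the weights to their own context.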

Shift-Left Testing

Move testing earlier in the SDLC:

  • Requirements testing (review for testability)

  • Design testing (architecture review, threat modeling)

  • Static analysis (before code review)

  • Unit testing (during development)

Continuous Testing

Integrate testing into CI/CD:

  • Pre-commit: Linting, unit tests

  • PR: Integration tests, code coverage

  • Merge: Full regression, security scan

  • Deploy: Smoke tests, synthetic monitoring
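The stage-to-suite mapping above can be made explicit so the pipeline selects tests automatically. A trivial sketch; the stage names follow this document, and the mapping is an illustration rather than a real CI schema:

```python
# Which test suites run at each pipeline stage (per the list above).
PIPELINE_TESTS = {
    "pre-commit": ["lint", "unit"],
    "pr":         ["integration", "coverage"],
    "merge":      ["regression", "security-scan"],
    "deploy":     ["smoke", "synthetic-monitoring"],
}

def suites_for(stage: str) -> list:
    """Look up the test suites to run at a given pipeline stage."""
    return PIPELINE_TESTS.get(stage, [])
```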

Quality Metrics

Coverage Metrics

| Metric | Target | Measurement |
| --- | --- | --- |
| Requirement coverage | 100% | Requirements traced to tests |
| Code coverage (line) | ≥80% | Coverage tool output |
| Code coverage (branch) | ≥70% | Coverage tool output |
| Risk coverage | 100% P1 risks | Risk-test mapping |

Execution Metrics

| Metric | Target | Measurement |
| --- | --- | --- |
| Test pass rate | ≥95% | Pass / Total |
| Defect detection rate | ≥90% | Pre-release / Total |
| Test automation rate | ≥70% | Automated / Total |
| Defect leakage | <5% | Production defects / Total |
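Each execution metric above is a simple ratio of raw counts. A minimal sketch of how they might be computed from a test-management export; the function names are illustrative:

```python
def pass_rate(passed: int, total: int) -> float:
    """Fraction of executed tests that passed (target >= 0.95)."""
    return passed / total if total else 0.0

def defect_leakage(prod_defects: int, total_defects: int) -> float:
    """Share of all defects that escaped to production (target < 0.05)."""
    return prod_defects / total_defects if total_defects else 0.0

def automation_rate(automated: int, total: int) -> float:
    """Fraction of test cases that are automated (target >= 0.70)."""
    return automated / total if total else 0.0
```

Guarding against a zero denominator keeps the metrics well-defined for empty runs (e.g. a brand-new suite).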

.NET Example: Test Strategy Configuration

```xml
<!-- Directory.Build.props - shared test configuration -->
<Project>
  <PropertyGroup>
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <CollectCoverage>true</CollectCoverage>
    <CoverletOutputFormat>cobertura</CoverletOutputFormat>
    <Threshold>80</Threshold>
  </PropertyGroup>
</Project>
```

Integration Points

Inputs from:

  • Requirements documents → Test scope

  • Architecture documents → Test levels

  • Risk assessments → Test prioritization

Outputs to:

  • test-pyramid-design skill → Pyramid ratios

  • test-case-design skill → Test techniques

  • CI/CD pipeline → Automation scope
