<skill_overview> Plan and structure manual testing efforts effectively for maximum coverage and efficiency
Planning the testing approach for new features
Deciding between API and browser testing
Organizing regression testing
Prioritizing test scenarios
</skill_overview>
<testing_pyramid> Understanding the balance of different testing levels
<level name="Manual Browser Tests (10%)"> <purpose>End-to-end user flows through the UI</purpose> <when_to_use>Critical user paths, happy paths, key workflows</when_to_use> <characteristics>Slow, brittle, but closest to user experience</characteristics> <examples>Login flow, checkout process, registration</examples> </level>
<level name="Manual API Tests (20%)"> <purpose>Test API endpoints directly with curl</purpose> <when_to_use>API contracts, error handling, integration points</when_to_use> <characteristics>Faster than browser tests, focused on data and responses</characteristics> <examples>CRUD operations, authentication, error responses</examples> </level>
<level name="Exploratory Testing (30%)"> <purpose>Explore and discover issues without detailed scripts</purpose> <when_to_use>New features, finding edge cases, understanding behavior</when_to_use> <characteristics>Flexible, creative, finds unexpected issues</characteristics> <examples>Trying unusual inputs, combining features in novel ways</examples> </level>
<level name="Checklists &amp; Test Cases (40%)"> <purpose>Structured, repeatable testing of documented scenarios</purpose> <when_to_use>Regression testing, feature coverage, documentation</when_to_use> <characteristics>Consistent, trackable, covers known scenarios</characteristics> <examples>Smoke tests, sanity tests, feature checklists</examples> </level> </testing_pyramid>
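If the pyramid percentages above are treated as a time budget, splitting a testing block across levels reduces to simple arithmetic. A minimal sketch (the function and level keys are illustrative, not part of this skill):

```python
# Split a total testing budget (in minutes) across the pyramid levels.
# Shares mirror the pyramid percentages above.
PYRAMID = {
    "checklists_and_test_cases": 0.40,
    "exploratory": 0.30,
    "manual_api": 0.20,
    "manual_browser": 0.10,
}

def allocate_time(total_minutes: int) -> dict[str, int]:
    """Return a per-level time budget, rounded to whole minutes."""
    return {level: round(total_minutes * share) for level, share in PYRAMID.items()}

# A four-hour testing block:
print(allocate_time(240))
```

For a 240-minute block this yields 96 minutes of checklist work, 72 of exploratory testing, 48 of API testing, and 24 of browser testing.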
<api_vs_browser_testing> When to test via API vs Browser
<decision_factors>
<factor name="Speed"> <description>API tests are significantly faster than browser tests</description> <preference>API for speed, Browser for UX validation</preference> </factor>
<factor name="Complexity"> <description>Browser tests can test complex UI interactions</description> <preference>Browser for multi-step flows, API for single operations</preference> </factor>
<factor name="Focus"> <description>API tests focus on data, Browser tests focus on user experience</description> <preference>API for data validation, Browser for UI behavior</preference> </factor>
<factor name="Stability"> <description>API tests are less brittle than browser tests</description> <preference>API for regression, Browser for critical user paths</preference> </factor>
<factor name="Coverage"> <description>Browser tests cover frontend + backend together</description> <preference>Browser for end-to-end flows, API for backend logic</preference> </factor>
</decision_factors>
<use_api_testing_for>
CRUD operations (Create, Read, Update, Delete)
Authentication and authorization
Error handling and status codes
Data validation and sanitization
Performance testing of endpoints
Testing integration with external services
Batch operations and bulk actions
Testing business logic directly
</use_api_testing_for>
<use_browser_testing_for>
User flows and workflows
UI interactions (clicks, forms, navigation)
Visual regression and layout issues
Responsive design and mobile testing
Frontend state management
User experience and usability
Complex interactions (drag and drop, modals)
Accessibility testing
</use_browser_testing_for> </api_vs_browser_testing>
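The API-vs-browser decision above can be sketched as a small triage helper. The keyword sets and function name are illustrative assumptions; a real project would extend them with its own terms:

```python
# Suggest a testing channel from a scenario's focus areas.
# Keyword sets are illustrative; extend them for your own project.
API_FOCUS = {"crud", "auth", "status codes", "validation", "business logic", "bulk"}
BROWSER_FOCUS = {"ui", "layout", "responsive", "usability", "drag and drop", "accessibility"}

def suggest_channel(focus_areas: set[str]) -> str:
    """Return 'api', 'browser', or 'both' for a scenario's focus areas."""
    api_hits = len(focus_areas & API_FOCUS)
    browser_hits = len(focus_areas & BROWSER_FOCUS)
    if api_hits > browser_hits:
        return "api"
    if browser_hits > api_hits:
        return "browser"
    return "both"

print(suggest_channel({"crud", "status codes"}))  # api
print(suggest_channel({"ui", "layout", "crud"}))  # browser
```

A tie (or no recognized focus) falls through to "both", which matches the guidance that browser tests cover frontend and backend together.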
<exploratory_testing> Unstructured testing based on curiosity and experience
<characteristics>
No detailed test plan or scripts
Tester designs tests on the fly
Focuses on learning and discovery
Finds edge cases and unexpected issues
</characteristics>
<when_to_use>
Early in development, when features are still changing
For new features, to understand behavior
When time is limited but broad coverage is needed
After structured testing finds no issues
For complex features that are hard to script
</when_to_use>
<exploratory_session_format>
Charter - Mission and focus of the session
Timebox - Duration of the session (typically 60-90 minutes)
Areas to explore - Specific modules or features to investigate
Notes - Observations, bugs, questions
Issues found - Bugs and defects discovered
Follow-up - Questions for developers or further testing needs
</exploratory_session_format>
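The session format above maps naturally onto a small record for tracking sessions. A sketch with illustrative field names (the class and defaults are not prescribed by this skill):

```python
from dataclasses import dataclass, field

@dataclass
class ExploratorySession:
    """One exploratory testing session, mirroring the format above."""
    charter: str                  # mission and focus of the session
    timebox_minutes: int = 90     # typically 60-90 minutes
    areas: list[str] = field(default_factory=list)         # areas to explore
    notes: list[str] = field(default_factory=list)         # observations, questions
    issues_found: list[str] = field(default_factory=list)  # bugs discovered
    follow_up: list[str] = field(default_factory=list)     # questions for developers

session = ExploratorySession(
    charter="Explore bulk import edge cases",
    areas=["CSV upload", "validation errors"],
)
session.issues_found.append("Import hangs on empty file")
print(len(session.issues_found))  # 1
```

Keeping the charter and timebox explicit makes it easy to stop on time and to hand the follow-up list to developers afterwards.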
<exploratory_techniques>
<technique name="Error Guessing"> <description>Based on experience, guess where errors might occur</description> <examples>Boundary values, unusual inputs, rapid operations</examples> </technique>
<technique name="Data Flow Testing"> <description>Follow data through the system</description> <examples>Check how data is stored, retrieved, transformed</examples> </technique>
<technique name="State Testing"> <description>Test different system states and transitions</description> <examples>Logged in vs logged out, empty vs full database</examples> </technique>
<technique name="Scenario Testing"> <description>Test realistic user scenarios</description> <examples>User cancels in middle of flow, uses back button</examples> </technique>
</exploratory_techniques> </exploratory_testing>
<smoke_testing> Quick validation that main functionality works
<purpose>Determine if the system is stable enough for detailed testing</purpose>
<characteristics>
Tests only critical paths
Very fast to execute
Tests happy paths only
No deep edge case testing
</characteristics>
<when_to_run>
After new deployment
Before starting full regression
When system had critical issues
As first test each day or session
Before code merge
</when_to_run>
<typical_smoke_test_scenarios>
Application loads and is accessible
User can login
Main navigation works
Core CRUD operations work
Database connection is stable
External services are reachable
</typical_smoke_test_scenarios>
<failure_criteria>
If smoke tests fail, do not proceed with detailed testing
Report critical issues immediately
Block deployment if smoke tests fail
</failure_criteria> </smoke_testing>
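The smoke-test rule above (run critical checks in order, stop everything at the first failure) can be sketched as follows; the check names are illustrative:

```python
# Run named smoke checks in order; stop at the first failure, since a
# failed smoke test blocks all further testing.
def run_smoke_tests(checks: dict[str, bool]) -> tuple[bool, list[str]]:
    """Return (passed, executed); execution stops at the first failing check."""
    executed = []
    for name, passed in checks.items():
        executed.append(name)
        if not passed:
            return False, executed  # do not proceed with detailed testing
    return True, executed

ok, ran = run_smoke_tests({
    "application loads": True,
    "user can login": True,
    "main navigation works": False,  # simulated failure
    "core CRUD works": True,         # never reached
})
print(ok, ran)
```

In practice each boolean would come from an actual check (an HTTP probe, a login attempt); the point here is the stop-on-first-failure control flow.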
<regression_testing> Comprehensive testing to ensure existing functionality still works
<purpose>Find regressions introduced by new changes</purpose>
<characteristics>
Covers all major functionality
Tests previously working features
Includes edge cases and negative scenarios
Time-consuming but comprehensive
</characteristics>
<when_to_run>
Before major release
After significant code changes
Periodically to catch subtle regressions
After refactoring
When fixing critical bugs
</when_to_run>
<regression_strategy>
<strategy name="Full Regression"> <description>Test all features and functionality</description> <use>Before major releases or significant changes</use> <cost>High time investment</cost> <coverage>Maximum coverage</coverage> </strategy>
<strategy name="Partial Regression"> <description>Test only affected features and dependencies</description> <use>After targeted changes or bug fixes</use> <cost>Lower time investment</cost> <coverage>Targeted coverage</coverage> </strategy>
<strategy name="Risk-Based Regression"> <description>Prioritize high-risk and frequently used features</description> <use>When time is limited</use> <cost>Optimized time investment</cost> <coverage>Focused on critical areas</coverage> </strategy>
</regression_strategy>
<selecting_regression_tests>
Core business logic
Frequently used features
Previously buggy areas
Complex functionality
Integration points between modules
High-risk areas (payments, authentication)
</selecting_regression_tests> </regression_testing>
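Risk-based selection of regression tests, as described above, amounts to scoring candidates and running the highest-scoring ones first. A sketch in which the weights and field names are illustrative assumptions:

```python
# Score regression test candidates by risk; higher scores run first.
# Weights are illustrative, not prescribed by this skill.
def risk_score(test: dict) -> int:
    score = 0
    if test.get("core_business_logic"):
        score += 5
    if test.get("frequently_used"):
        score += 3
    if test.get("previously_buggy"):
        score += 3
    if test.get("integration_point"):
        score += 2
    if test.get("high_risk_area"):  # payments, authentication
        score += 5
    return score

candidates = [
    {"name": "checkout payment", "core_business_logic": True, "high_risk_area": True},
    {"name": "profile avatar upload", "frequently_used": False},
    {"name": "login", "frequently_used": True, "high_risk_area": True},
]
ordered = sorted(candidates, key=risk_score, reverse=True)
print([t["name"] for t in ordered])
```

When time runs out, the tail of the ordered list is what gets skipped, which keeps the cut deliberate rather than accidental.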
<sanity_testing> Focused testing of recently changed or fixed functionality
<purpose>Verify that specific changes work as expected</purpose>
<characteristics>
Narrow scope, focused on changes
Targets bug fixes and new features
Faster than full regression
May include related areas affected by changes
</characteristics>
<when_to_run>
After bug fix
After feature implementation
Before merging to main branch
When testing specific module
</when_to_run>
<sanity_testing_process>
Identify what was changed (code, configuration, data)
Determine affected functionality
Create focused test scenarios for changed areas
Test both happy paths and edge cases
Verify related features are not broken
Document any issues found
</sanity_testing_process> </sanity_testing>
<priority_based_testing> Testing based on importance when time is limited <prioritization_criteria>
<criterion name="Business Impact"> <description>How much the business would be affected</description> <levels>Critical: Blocks business operations</levels> <levels>High: Major impact on users</levels> <levels>Medium: Moderate impact</levels> <levels>Low: Minor or rare impact</levels> </criterion>
<criterion name="Frequency of Use"> <description>How often the feature is used</description> <levels>Daily: Very frequent use</levels> <levels>Weekly: Regular use</levels> <levels>Monthly: Occasional use</levels> <levels>Rare: Infrequently used</levels> </criterion>
<criterion name="User Impact"> <description>Number of users affected</description> <levels>All users: Core feature for everyone</levels> <levels>Many users: Popular feature</levels> <levels>Some users: Niche feature</levels> <levels>Few users: Rarely used</levels> </criterion>
<criterion name="Risk Level"> <description>Risk of failure and consequences</description> <levels>Critical: Security, data loss, payments</levels> <levels>High: Major functionality broken</levels> <levels>Medium: Minor functionality broken</levels> <levels>Low: Minor issues or edge cases</levels> </criterion>
</prioritization_criteria>
<priority_matrix>
High business impact + High frequency = Critical priority
High business impact + Low frequency = High priority
Low business impact + High frequency = Medium priority
Low business impact + Low frequency = Low priority
</priority_matrix> </priority_based_testing>
<test_case_vs_checklist> Choosing between test cases and checklists
<test_cases> Detailed, scripted testing scenarios
<when_to_use>
For regression testing that needs to be repeatable
When documenting exact steps and expectations
For new testers or training
When testing compliance or regulated scenarios
For complex or critical functionality
</when_to_use>
<strengths>Detailed, reproducible, trackable</strengths>
<weaknesses>Time-consuming to create, may miss exploratory issues</weaknesses>
</test_cases>
<checklists> Lightweight lists of items to verify, without scripted steps
<when_to_use>
For smoke and sanity passes
When testers already know the product well
For fast-changing features where scripted steps go stale quickly
</when_to_use>
<strengths>Quick to create and maintain, flexible</strengths>
<weaknesses>Less reproducible, depends on tester experience</weaknesses>
</checklists> </test_case_vs_checklist>
<test_coverage> Assessing and managing test coverage <coverage_dimensions>
<dimension name="Feature Coverage"> <description>Percentage of features tested</description> <measurement>Count tested features / total features</measurement> <target>Core features: 100%, Secondary features: 80-90%</target> </dimension>
<dimension name="Scenario Coverage"> <description>Percentage of test scenarios covered</description> <measurement>Count executed scenarios / total documented scenarios</measurement> <target>Critical scenarios: 100%, Normal scenarios: 80-90%</target> </dimension>
<dimension name="Path Coverage"> <description>Percentage of code paths tested</description> <measurement>Count tested paths / total possible paths</measurement> <target>Happy paths: 100%, Edge cases: 70-80%</target> </dimension>
</coverage_dimensions>
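Each measurement row above reduces to the same ratio. A sketch with illustrative function names and the targets from the table used as example thresholds:

```python
# Coverage as a percentage, as in the measurement rows above.
def coverage_pct(tested: int, total: int) -> float:
    if total == 0:
        return 100.0  # nothing to test counts as fully covered
    return round(100 * tested / total, 1)

def meets_target(tested: int, total: int, target_pct: float) -> bool:
    return coverage_pct(tested, total) >= target_pct

print(coverage_pct(45, 50))         # 90.0
print(meets_target(45, 50, 80.0))   # True  (secondary-feature target)
print(meets_target(45, 50, 100.0))  # False (core-feature target)
```

Running this per module (or per user journey) gives the coverage views described in the strategies below without any extra machinery.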
<coverage_strategies>
<strategy name="Coverage by Module"> <description>Track coverage per module or feature</description> <benefit>Identify untested modules easily</benefit> </strategy>
<strategy name="Coverage by Risk"> <description>Prioritize high-risk areas for full coverage</description> <benefit>Focus on most critical functionality</benefit> </strategy>
<strategy name="Coverage by User Journey"> <description>Map coverage to user workflows</description> <benefit>Ensure key user paths are fully tested</benefit> </strategy>
</coverage_strategies> </test_coverage>
<time_management> Managing testing time effectively
<strategy name="Timeboxing"> <description>Allocate fixed time for testing activities</description> <examples>60 minutes for exploratory testing, 30 minutes for smoke tests</examples> <benefit>Prevents over-testing, ensures coverage of multiple areas</benefit> </strategy>
<strategy name="Session-Based Testing"> <description>Organize testing into focused sessions</description> <examples>Morning session: API testing, Afternoon session: UI testing</examples> <benefit>Structured approach, clear goals per session</benefit> </strategy>
<strategy name="MVP Testing"> <description>Test minimum viable set first</description> <examples>Test happy paths, then edge cases, then negative scenarios</examples> <benefit>Ensure core functionality works before detailed testing</benefit> </strategy>
<strategy name="Risk-Based Prioritization"> <description>Test high-risk items first</description> <examples>Critical bugs > Major bugs > Minor bugs</examples> <benefit>Most important issues found and addressed first</benefit> </strategy> </time_management>
<planning_a_testing_session> How to structure a testing session
<step number="1"> <name>Understand the Goal</name> <description>What needs to be tested and why</description> <examples>New feature, bug fix verification, regression testing</examples> </step>
<step number="2"> <name>Gather Information</name> <description>Requirements, design docs, previous bugs</description> <examples>Read tickets, check PR descriptions, review issues</examples> </step>
<step number="3"> <name>Choose Approach</name> <description>Decide on testing method</description> <examples>API vs Browser, Structured vs Exploratory</examples> </step>
<step number="4"> <name>Create or Select Artifacts</name> <description>Test cases, checklists, or exploratory charter</description> <examples>Use existing test cases, create new checklist, define charter</examples> </step>
<step number="5"> <name>Set Up Environment</name> <description>Prepare testing environment and data</description> <examples>Get test accounts, prepare test data, configure environment</examples> </step>
<step number="6"> <name>Execute Testing</name> <description>Run tests and document results</description> <examples>Follow test cases, work through checklist, explore</examples> </step>
<step number="7"> <name>Report Results</name> <description>Document findings and bugs</description> <examples>Create bug reports, update test case status, write summary</examples> </step> </planning_a_testing_session>