E2E Test Automation

Overview

Automate end-to-end testing for web applications by launching Chrome/Chromium browser, executing predefined test cases, capturing evidence (screenshots/videos), and generating detailed test reports. This skill integrates with Cursor's built-in browser capabilities and MCP browser servers to provide seamless automated testing.

Core Capabilities

  1. Test Case Management
  • Parse test case specifications from markdown files

  • Support structured test case format with operation steps, expected results, and common issues

  • Organize test cases by priority (P0, P1, P2) and category (functional, performance, usability)

  2. Automated Browser Testing
  • Launch Chrome/Chromium browser automatically using MCP browser servers

  • Execute test cases sequentially with clear logging

  • Support login/authentication flows

  • Handle dynamic content loading and async operations

  • Capture screenshots for each critical step

  3. Intelligent Validation
  • Verify expected outcomes against actual results

  • Detect common issues: timeouts, missing elements, incorrect content, broken links

  • Identify UX problems and potential bugs

  • Validate performance metrics against thresholds (e.g., page load time < 60s)

  4. Comprehensive Reporting
  • Generate structured test reports in markdown format

  • Include pass/fail status, execution time, and failure reasons

  • Attach screenshots for failed cases

  • Highlight bugs and UX issues with severity levels

  • Provide recommendations for fixes

Workflow

Step 1: Load Test Cases

Read test case specifications from the provided markdown file or reference. Test cases should follow this structure:

Test Case Title

  • 操作步骤 (Operation Steps)
    • Step 1
    • Step 2
  • 预期反馈 (Expected Results)
    • Expected outcome 1
    • Expected outcome 2
  • 常见问题 (Common Issues)
    • Known issue 1
    • Known issue 2

If no test cases are provided, use the default test suite from references/default_test_cases.md.
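A minimal parsing sketch for this step, assuming the bilingual section labels shown above (the real parser in scripts/execute_tests.py may differ):

```python
import re
from dataclasses import dataclass, field

@dataclass
class TestCase:
    title: str
    steps: list[str] = field(default_factory=list)
    expected: list[str] = field(default_factory=list)
    known_issues: list[str] = field(default_factory=list)

# Bilingual section labels from the structure above, mapped to fields.
SECTIONS = {"操作步骤": "steps", "预期反馈": "expected", "常见问题": "known_issues"}

def parse_test_cases(markdown: str) -> list[TestCase]:
    cases: list[TestCase] = []
    current, section = None, None
    for raw in markdown.splitlines():
        line = raw.strip().lstrip("•-* ").strip()
        if not line:
            continue
        numbered = re.match(r"^\d+\.\s+(.+)", raw.strip())
        if numbered:  # a numbered top-level item starts a new test case
            current, section = TestCase(title=numbered.group(1)), None
            cases.append(current)
        elif any(line.startswith(label) for label in SECTIONS):
            section = next(v for k, v in SECTIONS.items() if line.startswith(k))
        elif current and section:
            getattr(current, section).append(line)  # a step/result/issue bullet
    return cases
```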

Step 2: Prepare Test Environment

Check if browser MCP servers are available:

  • cursor-browser-extension

  • cursor-ide-browser

Launch the browser using the appropriate MCP server tools.

If the test requires authentication (a Playwright fallback sketch follows this list):

  • Navigate to login page

  • Input credentials (from test specification or user prompt)

  • Verify successful login
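If the MCP servers are unavailable, a minimal Playwright fallback for launching and logging in might look like this; every selector below is a placeholder that must come from the test specification or references/browser_selectors.md:

```python
from playwright.sync_api import sync_playwright

def prepare_environment(base_url: str, username: str = "", password: str = ""):
    """Launch Chromium and optionally perform the login flow."""
    pw = sync_playwright().start()
    browser = pw.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto(base_url, wait_until="networkidle")
    if username and password:
        page.fill("input[name='username']", username)   # placeholder selector
        page.fill("input[name='password']", password)   # placeholder selector
        page.click("button[type='submit']")             # placeholder selector
        # Verify login by waiting for a post-login element (placeholder).
        page.wait_for_selector("[data-testid='dashboard']", timeout=60_000)
    return pw, browser, page
```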

Step 3: Execute Test Cases

For each test case (a minimal execution-loop sketch follows these steps):

Navigate to target page

  • Log the navigation action

  • Wait for page load completion

Execute operation steps

  • Follow each step in the test case

  • Handle dynamic elements (wait for visibility)

  • Capture screenshot after critical operations

Validate expected results

  • Check each expected outcome

  • Measure performance metrics (if specified)

  • Record actual vs expected differences

Detect common issues

  • Check for known problems listed in test case

  • Identify UX problems (confusing UI, missing feedback, etc.)

  • Note any unexpected behaviors

Record results

  • Status: PASS / FAIL / WARNING

  • Execution time

  • Failure reason (if failed)

  • Screenshots (if failed or warning)

  • Bug severity: CRITICAL / HIGH / MEDIUM / LOW
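A minimal sketch of this loop, assuming the TestCase objects and page handle from the earlier sketches plus a hypothetical run_step helper that turns one natural-language step into browser actions:

```python
import time
from pathlib import Path

def execute_case(page, case, run_step, shots_dir=Path("screenshots")):
    """Run one test case and return a result record for the report."""
    shots_dir.mkdir(exist_ok=True)
    result = {"title": case.title, "status": "PASS",
              "failure_reason": None, "screenshot": None, "severity": None}
    start = time.monotonic()
    try:
        for step in case.steps:
            run_step(page, step)  # hypothetical step interpreter
        # Placeholder validation: a real implementation compares actual
        # page state against each outcome in case.expected here.
    except Exception as exc:
        shot = shots_dir / f"{case.title[:40].replace(' ', '_')}.png"
        page.screenshot(path=str(shot))  # evidence for the failed case
        result.update(status="FAIL", failure_reason=str(exc),
                      screenshot=str(shot))
    result["execution_time_s"] = round(time.monotonic() - start, 2)
    return result
```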

Step 4: Generate Test Report

Create a comprehensive test report with the following sections:

E2E Test Execution Report

Test Summary

  • Test URL: [URL]
  • Test Account: [Account]
  • Total Cases: [N]
  • Passed: [N]
  • Failed: [N]
  • Warnings: [N]
  • Execution Time: [Duration]
  • Test Date: [Timestamp]

Test Environment

  • Browser: Chrome/Chromium
  • Test Tool: Cursor MCP Browser
  • Test Executor: [Executor Name]

Detailed Results

✅ Passed Cases ([N])

[List of passed cases with brief description]

❌ Failed Cases ([N])

Case: [Test Case Title]

  • Status: FAIL
  • Execution Time: [Duration]
  • Failure Reason: [Detailed reason]
  • Expected: [What was expected]
  • Actual: [What actually happened]
  • Screenshot: [Path or embedded image]
  • Bug Severity: [CRITICAL/HIGH/MEDIUM/LOW]
  • Recommendation: [How to fix]

⚠️ Warning Cases ([N])

[Cases that passed but have UX issues or concerns]

Bugs and Issues Summary

Critical Issues ([N])

  1. [Issue description with case reference]

High Priority Issues ([N])

  1. [Issue description with case reference]

Medium Priority Issues ([N])

  1. [Issue description with case reference]

Low Priority Issues ([N])

  1. [Issue description with case reference]

UX Feedback

[General UX observations and improvement suggestions]

Recommendations

[Overall recommendations for improving quality]
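A minimal sketch of rendering the summary block from the result records collected in Step 3 (field names follow the execution sketch above, not necessarily scripts/report_generator.py):

```python
from datetime import datetime

def render_summary(results, test_url, account="n/a"):
    """Render the Test Summary section as markdown."""
    counts = {s: sum(r["status"] == s for r in results)
              for s in ("PASS", "FAIL", "WARNING")}
    total_time = sum(r.get("execution_time_s", 0) for r in results)
    return "\n".join([
        "## Test Summary",
        f"- Test URL: {test_url}",
        f"- Test Account: {account}",
        f"- Total Cases: {len(results)}",
        f"- Passed: {counts['PASS']}",
        f"- Failed: {counts['FAIL']}",
        f"- Warnings: {counts['WARNING']}",
        f"- Execution Time: {total_time:.1f}s",
        f"- Test Date: {datetime.now().isoformat(timespec='seconds')}",
    ])
```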

Step 5: Cleanup

  • Close browser instances

  • Save screenshots to appropriate directory

  • Archive test artifacts

Usage Examples

Example 1: Run all test cases from specification

User: "根据 @e2e-test.md 的测试用例,自动执行所有测试并生成报告"

Actions:

  • Read test cases from e2e-test.md

  • Launch browser and navigate to test URL

  • Login with provided credentials

  • Execute all 28 test cases sequentially

  • Generate comprehensive test report

Example 2: Run specific test categories

User: "只运行 Chat 相关的测试用例"

Actions:

  • Parse test cases and filter for Chat category

  • Execute filtered test cases

  • Generate focused report for Chat functionality

Example 3: Quick smoke test

User: "执行 P0 优先级的测试用例"

Actions:

  • Filter for P0 priority cases (from case-p0/p0.md)

  • Execute critical path tests

  • Generate quick validation report

Test Case Format

Test cases should be provided in markdown format following this structure:

测试网址 (Test URL): [URL]
测试账号密码 (Test account/password): [username] / [password]


  1. [Test Case Title]
  • 操作步骤 (Operation Steps)
    • [Step 1]
    • [Step 2]
  • 预期反馈 (Expected Results)
    • [Expected result 1]
    • [Expected result 2]
  • 常见问题 (Common Issues)
    • [Common issue 1]
    • [Common issue 2]
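For example, a hypothetical login case written in this format:

  1. Login with valid credentials
  • 操作步骤 (Operation Steps)
    • Navigate to the login page
    • Enter the test username and password, then submit
  • 预期反馈 (Expected Results)
    • The dashboard loads within 60s
    • The logged-in username is visible in the header
  • 常见问题 (Common Issues)
    • Login button stays disabled after browser autofill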

Performance Metrics

The skill automatically validates common performance requirements (a measurement sketch follows the list):

  • Page load time < 60s

  • Search/query response time < 60s

  • Export/download completion < 30s

  • No console errors during critical operations
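As one illustration, a Playwright sketch that measures the page-load budget and watches for console errors (not the skill's actual script):

```python
import time
from playwright.sync_api import sync_playwright

PAGE_LOAD_BUDGET_S = 60  # threshold from the list above

def check_page_load(url: str) -> float:
    """Measure wall-clock page-load time and enforce the budget."""
    with sync_playwright() as pw:
        browser = pw.chromium.launch(headless=True)
        page = browser.new_page()
        console_errors = []
        page.on("console",
                lambda msg: console_errors.append(msg.text)
                if msg.type == "error" else None)
        start = time.monotonic()
        page.goto(url, wait_until="load")
        elapsed = time.monotonic() - start
        browser.close()
    assert elapsed < PAGE_LOAD_BUDGET_S, f"page load took {elapsed:.1f}s"
    assert not console_errors, f"{len(console_errors)} console error(s) seen"
    return elapsed
```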

Error Handling

When test execution encounters errors (a classification sketch follows the list):

  • Timeout errors: Capture screenshot, log timeout duration, mark as FAIL

  • Element not found: Capture DOM snapshot, suggest possible selectors, mark as FAIL

  • Assertion failures: Log expected vs actual, capture screenshot, mark as FAIL

  • Unexpected errors: Full error stack, screenshot, mark as FAIL with CRITICAL severity
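A sketch of how these buckets might be classified with Playwright as the backend; the HIGH/MEDIUM severities are illustrative, since only CRITICAL for unexpected errors is fixed above:

```python
from playwright.sync_api import TimeoutError as PlaywrightTimeoutError

def classify_failure(page, exc, shot_path):
    """Map an exception to a FAIL record per the rules above."""
    page.screenshot(path=shot_path)  # evidence for every failure class
    record = {"status": "FAIL", "screenshot": shot_path}
    if isinstance(exc, PlaywrightTimeoutError):
        # Covers timeouts and elements that never appeared; keep a DOM
        # snapshot so alternative selectors can be suggested later.
        record["dom_snapshot"] = page.content()
        record.update(failure_reason=f"Timeout or missing element: {exc}",
                      severity="HIGH")       # illustrative severity
    elif isinstance(exc, AssertionError):
        record.update(failure_reason=f"Expected vs actual mismatch: {exc}",
                      severity="MEDIUM")     # illustrative severity
    else:
        record.update(failure_reason=f"Unexpected error: {exc!r}",
                      severity="CRITICAL")   # per the rule above
    return record
```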

Browser Compatibility

Primary browser: Chrome/Chromium (via Cursor MCP browser servers)

If MCP browser servers are not available, the skill can fall back to:

  • Use Playwright for browser automation

  • Use Selenium WebDriver

  • Guide user to install required dependencies

Resources

references/

  • default_test_cases.md: Template and default test cases

  • test_report_template.md: Standard report format

  • browser_selectors.md: Common CSS selectors for web elements

scripts/

  • execute_tests.py: Main test execution engine (Python)

  • browser_automation.py: Browser control utilities

  • report_generator.py: Test report generation

  • screenshot_capture.py: Screenshot and evidence capture

assets/

  • test_report_styles.css: Styling for HTML reports

  • icons/: Status icons (pass/fail/warning)

Integration with Cursor

This skill leverages Cursor's built-in capabilities:

  • MCP Browser Servers: Use cursor-browser-extension or cursor-ide-browser for browser automation

  • File System: Save test reports and screenshots to workspace

  • Terminal: Execute test scripts in background if needed

  • Markdown Preview: Display test reports directly in Cursor

Best Practices

  • Test Isolation: Each test case should be independent and not rely on previous test state

  • Explicit Waits: Use explicit waits for dynamic content; avoid hard-coded sleeps (see the sketch after this list)

  • Clear Assertions: Each validation should have a clear expected vs actual comparison

  • Evidence Collection: Always capture screenshots for failures

  • Descriptive Failures: Failure messages should help developers quickly identify root cause

  • Performance Monitoring: Track and report slow operations even if they pass

  • UX Observations: Note confusing UI/UX even in passing tests
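For instance, the Explicit Waits practice in Playwright terms (the selector is a placeholder):

```python
def click_export(page):
    # Discouraged: time.sleep(10) guesses at load time and slows the suite.
    # Preferred: wait until the element is actually visible, then act.
    page.wait_for_selector("#export-button", state="visible", timeout=30_000)
    page.click("#export-button")
```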

Limitations

  • Requires working internet connection for web application testing

  • MCP browser servers must be available or Playwright/Selenium installed

  • Cannot test mobile-specific behaviors (mobile browser simulation only)

  • Limited support for testing file uploads/downloads (workarounds available)

  • Cannot bypass CAPTCHA or advanced bot detection automatically

Troubleshooting

MCP browser not available: run pip install playwright && playwright install chromium

Element selectors not working: Check references/browser_selectors.md for recommended selector strategies

Tests timing out: Increase timeout thresholds in test case specifications or use more specific waits

Screenshots not capturing: Verify write permissions in workspace directory
