# Web Performance & UX Optimization

End-to-end workflow: measure → analyze → explain → plan → execute
## Preflight Checklist

Before starting, verify:

1. **Playwright MCP** - browser automation available
   - Test: `browser_navigate` to the target URL works
   - If unavailable: the user must configure the Playwright MCP server
2. **Lighthouse CLI** - performance measurement
   - Test: `lighthouse --version`
   - Install if missing: `npm install -g lighthouse`
3. **Target URL accessible**
   - Production/staging preferred for reliable metrics
   - Local dev works, but label results as "non-prod environment"
## Workflow

### 1. Measure

**Primary: Lighthouse CLI**

Run via `scripts/lighthouse-runner.sh`:

```bash
./lighthouse-runner.sh https://example.com                   # Mobile (default)
./lighthouse-runner.sh https://example.com --desktop         # Desktop
./lighthouse-runner.sh http://localhost:3000 --no-throttling # Local dev
```
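The runner script itself isn't reproduced here, so as a sketch of what it likely assembles, here is how those three modes can map onto real Lighthouse CLI flags. The `buildLighthouseCommand` helper is hypothetical; the flag names (`--preset`, `--throttling-method`, `--only-categories`) are genuine Lighthouse CLI options:

```javascript
// Build a Lighthouse CLI invocation matching the runner script's modes.
// The helper name and defaults are assumptions; the flags are real.
function buildLighthouseCommand(url, { desktop = false, noThrottling = false } = {}) {
  const args = [
    'lighthouse',
    url,
    '--output=json',
    '--output-path=./report.json',
    '--only-categories=performance',
    '--chrome-flags="--headless"',
  ];
  if (desktop) args.push('--preset=desktop');          // desktop screen/throttling preset
  if (noThrottling) args.push('--throttling-method=provided'); // use the connection as-is
  return args.join(' ');
}

console.log(buildLighthouseCommand('http://localhost:3000', { noThrottling: true }));
```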
**Secondary: Real-time Web Vitals via Playwright**

Inject `scripts/collect-vitals.js` for interaction-based metrics:

```js
// Navigate and inject
await page.evaluate(collectVitalsScript);

// Interact with page...

// Retrieve metrics
const vitals = await page.evaluate(() => window.GET_VITALS_SUMMARY());
```

Use Playwright collection when:

- Testing specific user flows (not just page load)
- Measuring INP on real interactions
- Debugging CLS sources during navigation
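`collect-vitals.js` isn't shown here, but a collector like it typically aggregates `PerformanceObserver` entries. A simplified sketch of that aggregation (note: real CLS uses session windowing and reports the worst window, whereas this version just sums shifts; `summarizeVitals` is a hypothetical name):

```javascript
// Aggregate PerformanceObserver-style entries into a vitals summary.
// Entry shapes follow the Layout Instability and Event Timing specs.
function summarizeVitals(layoutShifts, interactions) {
  // CLS (simplified): sum of layout-shift values not caused by recent user input
  const cls = layoutShifts
    .filter((e) => !e.hadRecentInput)
    .reduce((sum, e) => sum + e.value, 0);

  // INP (approximation): worst interaction duration observed
  const inp = interactions.length
    ? Math.max(...interactions.map((e) => e.duration))
    : 0;

  return { cls: Number(cls.toFixed(3)), inp };
}

const summary = summarizeVitals(
  [{ value: 0.05, hadRecentInput: false }, { value: 0.2, hadRecentInput: true }],
  [{ duration: 120 }, { duration: 240 }],
);
console.log(summary); // { cls: 0.05, inp: 240 }
```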
### 2. Analyze

Parse the Lighthouse JSON for:

- Performance score and individual metric values
- Failed audits with estimated savings
- Diagnostic details (render-blocking resources, unused JS, etc.)
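The parsing step can be sketched against the standard Lighthouse report (LHR) shape: `categories.performance.score` is 0–1, and `audits` is keyed by audit id with optional `details.overallSavingsMs`. The `extractFindings` helper and the sample report below are fabricated for illustration:

```javascript
// Pull headline data out of a Lighthouse JSON report (LHR).
// Field paths follow the standard LHR shape; the sample object is made up.
function extractFindings(lhr) {
  const score = Math.round(lhr.categories.performance.score * 100);
  const failedAudits = Object.entries(lhr.audits)
    .filter(([, a]) => a.score !== null && a.score < 0.9) // informative audits have null scores
    .map(([id, a]) => ({ id, savingsMs: a.details?.overallSavingsMs ?? 0 }))
    .sort((a, b) => b.savingsMs - a.savingsMs);           // biggest estimated savings first
  return { score, failedAudits };
}

const sample = {
  categories: { performance: { score: 0.72 } },
  audits: {
    'render-blocking-resources': { score: 0.4, details: { overallSavingsMs: 450 } },
    'unused-javascript': { score: 0.5, details: { overallSavingsMs: 900 } },
    'uses-http2': { score: 1, details: {} },
  },
};
console.log(extractFindings(sample).score); // 72
```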
Cross-reference with:

- CWV thresholds → see `references/core-web-vitals.md`
- Known issue patterns → see `references/common-issues.md`

Framework-specific analysis:

- Next.js apps → see `references/nextjs.md`
- React apps → see `references/react.md`
### 3. Explain (Generate Report)

Output a markdown report with this structure:

#### Performance & UX Report

- **Target:** [URL]
- **Environment:** [Production | Staging | Local Dev*]
- **Device:** [Mobile | Desktop]
- **Date:** [timestamp]

*Local dev results may not reflect production performance

#### Executive Summary

Top 3 issues with expected impact:

1. [Issue] → [Expected improvement]
2. [Issue] → [Expected improvement]
3. [Issue] → [Expected improvement]
#### Scorecard

| Metric | Value | Rating | Target |
|---|---|---|---|
| Performance | X/100 | 🟡 | ≥90 |
| LCP | Xs | 🔴 | ≤2.5s |
| CLS | X.XX | 🟢 | ≤0.1 |
| INP | Xms | 🟡 | ≤200ms |
| FCP | Xs | 🟢 | ≤1.8s |
| TTFB | Xms | 🟢 | ≤800ms |

Rating: 🟢 Good | 🟡 Needs Improvement | 🔴 Poor
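The rating column can be derived mechanically from the published Core Web Vitals thresholds. The good/poor boundaries below are the web.dev values; the `rate` helper itself is just a sketch:

```javascript
// Map a metric value to a scorecard rating using the CWV
// good / needs-improvement / poor boundaries published on web.dev.
const THRESHOLDS = {
  lcp: [2500, 4000], // ms
  cls: [0.1, 0.25],  // unitless
  inp: [200, 500],   // ms
  fcp: [1800, 3000], // ms
  ttfb: [800, 1800], // ms
};

function rate(metric, value) {
  const [good, poor] = THRESHOLDS[metric];
  if (value <= good) return '🟢';
  if (value <= poor) return '🟡';
  return '🔴';
}

console.log(rate('lcp', 3100)); // 🟡
console.log(rate('cls', 0.05)); // 🟢
console.log(rate('inp', 650));  // 🔴
```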
#### Findings

##### Finding 1: [Title]

**Evidence:**
- [Lighthouse audit / metric value / observation]

**Root Cause:**
- [Technical explanation of why this happens]

**Fix Proposal:**
- [Specific code/config change]
- [File(s) likely affected]

**Verification:**
- [How to confirm the fix worked]

##### Finding 2: ...
#### Quick Wins vs Strategic Fixes

**Quick Wins (< 1 hour, high confidence)**
- [Action item]
- [Action item]

**Strategic Fixes (requires planning)**
- [Action item with complexity note]
#### Re-test Plan

After implementing fixes:

1. Run Lighthouse again with the same configuration
2. Compare: [specific metrics to watch]
3. Expected improvements: [quantified targets]
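One way to make the comparison above concrete is to diff the before/after metric sets key by key, so negative deltas read as improvements for time-based metrics. The metric names and values below are fabricated samples:

```javascript
// Compare before/after metric sets and report per-metric deltas.
// (Sample metric names/values are illustrative, not real measurements.)
function compareRuns(before, after) {
  const deltas = {};
  for (const key of Object.keys(before)) {
    // round to 2 decimals to hide floating-point noise
    deltas[key] = Number((after[key] - before[key]).toFixed(2));
  }
  return deltas;
}

console.log(compareRuns(
  { lcpS: 3.4, cls: 0.15, inpMs: 320 },
  { lcpS: 2.3, cls: 0.08, inpMs: 180 },
));
// { lcpS: -1.1, cls: -0.07, inpMs: -140 }
```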
### 4. Plan (Create Tasks)

Convert findings to tasks using the Plan/Task tools.

Task structure:

- Subject: [Action verb] [specific target]
- Description:
  - Rationale: [why this matters, linked to finding]
  - Files/Areas: [likely locations to modify]
  - Acceptance Criteria: [measurable outcome]
  - Re-measure: [how to verify improvement]

Prioritization:

- Impact (CWV improvement potential)
- Confidence (certainty the fix will help)
- Category labels: `Hydration/SSR`, `Images`, `JS Bundle`, `Fonts`, `CSS`, `Caching`, `Third-Party`
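Impact × confidence can serve as a simple ranking score. The sketch below uses fabricated impact/confidence numbers for illustration; `prioritize` is a hypothetical helper, not part of the Plan/Task tools:

```javascript
// Rank candidate tasks by impact × confidence, highest first.
// Scores here are made-up examples, not measured values.
function prioritize(tasks) {
  return [...tasks].sort(
    (a, b) => b.impact * b.confidence - a.impact * a.confidence,
  );
}

const ordered = prioritize([
  { subject: '[Fonts] Preload critical font', impact: 1, confidence: 0.9 },
  { subject: '[Images] Add width/height to hero image', impact: 3, confidence: 0.9 },
  { subject: '[JS Bundle] Lazy load chart component', impact: 3, confidence: 0.6 },
]);
console.log(ordered.map((t) => t.subject)); // Images first, then JS Bundle, then Fonts
```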
Example tasks:

**High Priority:**

1. `[Images]` Add width/height to hero image
   - Impact: CLS 0.15 → <0.1
   - Files: `components/Hero.tsx`
   - Verify: CLS audit passes
2. `[JS Bundle]` Lazy load chart component
   - Impact: TBT -200ms, LCP -0.5s
   - Files: `pages/dashboard.tsx`
   - Verify: main bundle <200KB

**Medium Priority:**

3. `[Fonts]` Preload critical font
   - Impact: FCP -100ms
   - Files: `app/layout.tsx`
## Environment Notes

### Production/Staging (Recommended)

- Metrics reflect real CDN, caching, and compression behavior
- Use throttled Lighthouse (the default) for user-representative results

### Local Development

- Always label results as "Local Dev Environment"
- Use the `--no-throttling` flag
- Useful for: regression testing, debugging specific issues
- Not reliable for: absolute performance numbers, TTFB, caching
## When to Use Each Reference

| Situation | Reference File |
|---|---|
| Understanding metric thresholds | `core-web-vitals.md` |
| Identifying the root cause of an issue | `common-issues.md` |
| Next.js-specific optimizations | `nextjs.md` |
| React patterns (any framework) | `react.md` |
## Optional: Implementation Mode

By default, stop at the plan + task list. If the user requests implementation:

1. Confirm the scope of changes
2. For high-confidence fixes, provide specific code patches
3. Still require user approval before applying
4. Re-run measurement after the changes to verify