**Paths:** File paths (`shared/`, `references/`, `../ln-*`) are relative to the skills repo root. If not found at CWD, locate this SKILL.md's directory and go up one level for the repo root. If `shared/` is missing, fetch files via WebFetch from `https://raw.githubusercontent.com/levnikolaevich/claude-code-skills/master/skills/{path}`.
## Inputs

| Input | Required | Source | Description |
|---|---|---|---|
| `storyId` | Yes | args, git branch, kanban, user | Story to process. Resolution: Story Resolution Chain. Status filter: `To Review` |
# Test Researcher

**Type:** L3 Worker

Researches real-world problems and edge cases before test planning, so that tests cover actual user pain points rather than only the AC.
## Purpose & Scope

- Research common problems for the feature domain using Web Search, MCP Ref, and Context7.
- Analyze how competitors solve the same problem.
- Find customer complaints and pain points from forums, StackOverflow, and Reddit.
- Post structured findings as a Linear comment for later test-planning steps.
- No test creation or status changes.
## When to Use

This skill should be used when:

- A test-planning workflow is starting and feature-domain evidence is needed
- The Story has non-trivial functionality (external APIs, file formats, authentication)
- Edge cases beyond the AC need to be discovered

Skip research when:

- The Story is trivial (simple CRUD, no external dependencies)
- A research comment already exists on the Story
- The user explicitly requests to skip
## Workflow

### Phase 1: Discovery

1. **MANDATORY READ:** Load `shared/references/input_resolution_pattern.md`
2. Resolve `storyId`: run the Story Resolution Chain per the guide (status filter: `To Review`).
3. Auto-discover the Team ID from `docs/tasks/kanban_board.md`.
### Phase 2: Extract Feature Domain

- Fetch the Story from Linear.
- Parse the Story goal and AC to identify:
  - What technology/API/format is involved?
  - What is the user's goal? (e.g., "translate XLIFF files", "authenticate via OAuth")
- Extract keywords for research queries.
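The keyword-extraction step above can be sketched as follows. The tokenizer and stop-word list are illustrative assumptions, not part of the skill contract; in practice the agent extracts keywords by reading the Story text.

```python
import re

# Illustrative stop-word list; extend as needed
STOP_WORDS = {"a", "an", "the", "via", "to", "for", "and", "or", "of", "in"}

def extract_keywords(story_goal: str) -> list[str]:
    """Pull candidate research keywords from a Story goal (naive tokenizer)."""
    tokens = re.findall(r"[A-Za-z0-9_-]+", story_goal.lower())
    seen: set[str] = set()
    keywords: list[str] = []
    for tok in tokens:
        # Keep first occurrence, drop duplicates and stop words
        if tok in STOP_WORDS or tok in seen:
            continue
        seen.add(tok)
        keywords.append(tok)
    return keywords
```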
### Phase 3: Research Common Problems

Use the available tools to find real-world problems.

**Web Search:**

- "[feature] common problems"
- "[format] edge cases"
- "[API] gotchas"
- "[technology] known issues"

**MCP Ref:**

- `ref_search_documentation("[feature] error handling best practices")`
- `ref_search_documentation("[format] validation rules")`

**Context7:**

- Query relevant library docs for known issues
- Check API documentation for limitations
### Phase 4: Research Competitor Solutions

**Web Search:**

- "[competitor] [feature] how it works"
- "[feature] comparison"
- "[product type] best practices"

**Analysis:**

- How do market leaders handle this functionality?
- What UX patterns do they use?
- What error-handling approaches are common?
### Phase 5: Research Customer Complaints

**Web Search:**

- "[feature] complaints"
- "[product type] user problems"
- "[format] issues reddit"
- "[format] issues stackoverflow"

**Analysis:**

- What do users actually struggle with?
- What are the common frustrations?
- What gaps exist between user expectations and typical implementations?
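The query lists in Phases 3-5 can be generated mechanically by substituting the extracted keywords into templates. A minimal sketch, assuming a keyword dict keyed by placeholder name (the template set below mirrors the lists above; the function and key names are illustrative):

```python
# Templates mirror the Phase 3-5 query lists above
QUERY_TEMPLATES = {
    "common_problems": [
        "{feature} common problems",
        "{format} edge cases",
        "{api} gotchas",
    ],
    "competitors": [
        "{feature} comparison",
        "{product_type} best practices",
    ],
    "complaints": [
        "{feature} complaints",
        "{format} issues reddit",
        "{format} issues stackoverflow",
    ],
}

def build_queries(keywords: dict[str, str]) -> dict[str, list[str]]:
    """Fill each research phase's templates with the extracted domain keywords."""
    return {
        phase: [tpl.format(**keywords) for tpl in templates]
        for phase, templates in QUERY_TEMPLATES.items()
    }
```

Unused keys in the keyword dict are simply ignored by `str.format`, so one dict can feed all three phases.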
### Phase 6: Compile and Post Findings

Compile findings into categories:

- **Input validation issues** (malformed data, encoding, size limits)
- **Edge cases** (empty input, special characters, Unicode)
- **Error handling** (timeouts, rate limits, partial failures)
- **Security concerns** (injection, authentication bypass)
- **Competitor advantages** (features we should match or exceed)
- **Customer pain points** (problems users actually complain about)
Post a Linear comment on the Story with the research summary:

```markdown
## Test Research: {Feature}

### Sources Consulted

### Common Problems Found
- Problem 1: Description + test case suggestion
- Problem 2: Description + test case suggestion

### Competitor Analysis
- Competitor A: How they handle this + what we can learn
- Competitor B: Their approach + gaps we can exploit

### Customer Pain Points
- Complaint 1: What users struggle with + test to prevent
- Complaint 2: Common frustration + how to verify we solve it

### Recommended Test Coverage
- Test case for problem 1
- Test case for competitor parity
- Test case for customer pain point
```

This research informs both manual tests (ln-522) and automated tests (ln-523).
## Critical Rules

- **No test creation:** only research and documentation.
- **No status changes:** only a Linear comment.
- **Source attribution:** always include URLs for sources consulted.
- **Actionable findings:** each problem should suggest a test case.
- **Skip trivial Stories:** don't research "Add button to page".
## Runtime Summary Artifact

**MANDATORY READ:** Load `shared/references/test_planning_summary_contract.md` and `shared/references/test_planning_worker_runtime_contract.md`

Runtime profile:

- family: `test-planning-worker`
- worker: `ln-521`
- summary kind: `test-planning-worker`
- payload fields used by coordinators: `worker`, `status`, `warnings`, `research_comment_path`

Invocation rules:

- standalone: omit `runId` and `summaryArtifactPath`
- managed: pass both `runId` and the exact `summaryArtifactPath`
- always write the validated summary before the terminal outcome
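Given the runtime profile above, the worker's summary payload might look like the following sketch. The field values and helper name are illustrative; the authoritative schema is defined by `test_planning_summary_contract.md`.

```python
import json

def build_summary(status: str, warnings: list[str], research_comment_path: str) -> str:
    """Assemble an ln-521 summary payload (illustrative field set, not the contract)."""
    payload = {
        "kind": "test-planning-worker",
        "worker": "ln-521",
        "status": status,
        "warnings": warnings,
        "research_comment_path": research_comment_path,
    }
    return json.dumps(payload, indent=2)
```

In managed mode the serialized payload would be written to the exact `summaryArtifactPath` passed by the coordinator before the worker reports a terminal outcome.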
## Definition of Done

- Feature domain extracted from the Story (technology/API/format identified)
- Common problems researched (Web Search + MCP Ref + Context7)
- Competitor solutions analyzed (at least 1-2 competitors)
- Customer complaints found (forums, StackOverflow, Reddit)
- Findings compiled into categories
- Linear comment posted with the "## Test Research: {Feature}" header
- At least 3 recommended test cases suggested

**Output:** a Linear comment with research findings for ln-522 and ln-523 to use.
## Reference Files

- Research methodology: Web Search, MCP Ref, Context7 tools
- Comment format: structured markdown with sources
- Downstream consumers: `ln-522-manual-tester`, `ln-523-auto-test-planner`
- **MANDATORY READ:** Load `shared/references/research_tool_fallback.md`
**Version:** 1.0.0 | **Last Updated:** 2026-01-15