deep-research

[IMPORTANT] Use TaskCreate to break ALL work into small tasks BEFORE starting.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "deep-research" with this command: npx skills add duc01226/easyplatform/duc01226-easyplatform-deep-research


Prerequisites (MUST READ before executing):

  • .claude/skills/shared/web-research-protocol.md

External Memory: For complex or lengthy work (research, analysis, scan, review), write intermediate findings and final results to a report file in plans/reports/ — prevents context loss and serves as deliverable.

Evidence Gate (MANDATORY): every claim, finding, and recommendation requires file:line proof or traced evidence with a confidence percentage (>80% to act; <80% must verify first).

Quick Summary

Goal: Deep-dive into top sources, extract key findings, cross-validate claims, build structured evidence base.

Workflow:

  • Read source map — Load output from web-research step

  • Fetch top sources — WebFetch top 5-8 Tier 1-2 sources

  • Extract findings — Pull key facts, data points, quotes

  • Cross-validate — Compare findings across sources

  • Build evidence base — Structured findings with confidence scores

Key Rules:

  • Maximum 8 WebFetch calls per invocation

  • Every finding must cite specific source

  • Conflicting claims → present both, flag discrepancy

Be skeptical. Apply critical and sequential thinking. Every claim needs traced proof and a confidence percentage (act only above 80%).

Deep Research

Step 1: Load Source Map

Read the source map from .claude/tmp/_sources-{slug}.md (output of web-research step).

Prioritize sources for deep-dive:

  • Tier 1-2 sources first

  • High-relevance sources

  • Sources covering identified gaps
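The prioritization above can be sketched as a sort. This is a minimal illustration, not part of the skill: the field names `tier`, `relevance`, and `covers_gap` are assumptions about what the source map records.

```python
def prioritize(sources, limit=8):
    """Order sources: lower tier first, then higher relevance,
    then sources that cover identified gaps. Field names are assumed."""
    ranked = sorted(
        sources,
        key=lambda s: (s["tier"], -s["relevance"], not s["covers_gap"]),
    )
    return ranked[:limit]

sources = [
    {"url": "a", "tier": 3, "relevance": 0.9, "covers_gap": False},
    {"url": "b", "tier": 1, "relevance": 0.7, "covers_gap": True},
    {"url": "c", "tier": 1, "relevance": 0.9, "covers_gap": False},
]
ranked = prioritize(sources)
# Tier 1 sources come first; within a tier, higher relevance wins.
```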

Step 2: Fetch Top Sources

For each priority source (max 8):

  • Run WebFetch with the URL

  • Extract: key claims, data points, quotes, methodology

  • Note: publication date, author credentials, source type
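The fetch loop with its hard cap might look like the sketch below. `web_fetch` is a hypothetical stand-in for the WebFetch tool, and the metadata keys are assumptions:

```python
MAX_FETCHES = 8  # hard cap per invocation, per the key rules

def fetch_priority_sources(sources, web_fetch):
    """Fetch at most MAX_FETCHES sources and keep their metadata.
    `web_fetch` stands in for the WebFetch tool (assumption)."""
    fetched = []
    for src in sources[:MAX_FETCHES]:
        fetched.append({
            "url": src["url"],
            "content": web_fetch(src["url"]),
            "published": src.get("published"),
            "source_type": src.get("type"),
        })
    return fetched
```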

Step 3: Extract Findings

For each source, extract:

  • Key claims — factual statements with specific data

  • Data points — numbers, percentages, dates

  • Quotes — notable expert statements

  • Methodology — how data was gathered (for market reports)
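One way to keep extracted findings uniform is a small record type. This is a sketch, not a format the skill prescribes; every field name here is an assumption:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    """One extracted finding from a fetched source (illustrative only)."""
    claim: str                                  # factual statement
    data_points: list = field(default_factory=list)  # numbers, dates
    quotes: list = field(default_factory=list)       # expert statements
    methodology: str = ""                       # how data was gathered
    source_ids: list = field(default_factory=list)   # e.g. [1], [3]
```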

Step 4: Cross-Validate

Compare findings across sources:

  • Agreement — 2+ sources say the same thing → high confidence

  • Discrepancy — sources disagree → note both positions

  • Unique — only 1 source → mark as "single source, unverified"
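The three cross-validation rules above can be sketched as a verdict function. The `supports`/`disputes` stance labels are assumptions made for illustration:

```python
from collections import defaultdict

def cross_validate(observations):
    """observations: (claim, source_id, stance) tuples, where stance is
    'supports' or 'disputes'. Applies the three rules: agreement,
    discrepancy, single-source."""
    support, dispute = defaultdict(set), defaultdict(set)
    for claim, src, stance in observations:
        (support if stance == "supports" else dispute)[claim].add(src)
    verdicts = {}
    for claim in set(support) | set(dispute):
        if support[claim] and dispute[claim]:
            verdicts[claim] = "discrepancy: present both positions"
        elif len(support[claim]) >= 2:
            verdicts[claim] = "agreement: high confidence"
        else:
            verdicts[claim] = "single source, unverified"
    return verdicts
```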

Step 5: Build Evidence Base

Write to .claude/tmp/_evidence-{slug}.md:

Evidence Base: {Topic}

Date: {date}
Sources analyzed: {count}

Findings

Finding 1: {Title}

Confidence: {95%|80%|60%|<60%}
Sources: [1], [3]
Content: {finding with inline citations}
Cross-validation: {agreement/discrepancy notes}

Unresolved Discrepancies

  • {claim X from source A vs claim Y from source B}

Gaps Remaining

  • {what couldn't be verified}
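Rendering the template above could be sketched as below. The dict keys for each finding are assumptions; only the output layout follows the template:

```python
def render_evidence(topic, date, n_sources, findings):
    """Render the evidence-base template. Each finding is a dict with
    title, confidence, sources, content, cross_validation (assumed keys)."""
    lines = [f"Evidence Base: {topic}", "",
             f"Date: {date}", f"Sources analyzed: {n_sources}", "",
             "Findings", ""]
    for i, f in enumerate(findings, 1):
        lines += [f"Finding {i}: {f['title']}",
                  f"Confidence: {f['confidence']}",
                  f"Sources: {', '.join(f['sources'])}",
                  f"Content: {f['content']}",
                  f"Cross-validation: {f['cross_validation']}", ""]
    return "\n".join(lines)
```

The rendered string would then be written to `.claude/tmp/_evidence-{slug}.md`.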

IMPORTANT Task Planning Notes (MUST FOLLOW)

  • Always plan and break work into many small todo tasks using TaskCreate

  • Always add a final review todo task to verify work quality and identify fixes/enhancements

Workflow Recommendation

MANDATORY: if you are NOT already in a workflow, use AskUserQuestion to ask the user:

  • Activate research workflow (Recommended) — web-research → deep-research → synthesis → review

  • Execute /deep-research directly — run this skill standalone

Next Steps

MANDATORY: after completing this skill, use AskUserQuestion to recommend one of:

  • "/business-evaluation (Recommended)" — Evaluate business viability from research

  • "/knowledge-synthesis" — If synthesizing research report

  • "Skip, continue manually" — user decides

Closing Reminders

MANDATORY:

  • Break work into small todo tasks using TaskCreate BEFORE starting.

  • Validate decisions with the user via AskUserQuestion — never auto-decide.

  • Add a final review todo task to verify work quality.

