net-deep-research

Perform deep multi-source internet research before answering. Use when the user prefixes a request with /net, asks for the latest information, wants real-time facts, requests web verification, asks which framework/tool/product is best right now, or needs evidence-based answers synthesized from multiple public sources such as official docs, official sites, GitHub, package registries, standards sites, and other stable public references.

Net Deep Research

When this skill is triggered, do not answer immediately.

Your job is to turn the user's request into a controlled research workflow:

  1. classify the question,
  2. generate complementary search queries,
  3. prefer stable public sources,
  4. extract evidence for concrete claims,
  5. resolve or expose conflicts,
  6. answer from an internal evidence map.

Trigger Handling

If the user message starts with /net:

  • remove the /net prefix
  • trim whitespace
  • treat the remainder as the actual research question

Then restate the question in one sentence before researching.
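A minimal sketch of this step, assuming the message arrives as a plain string; the function name and the None-return convention are illustrative, not part of any required API:

```python
def extract_research_question(message: str) -> str | None:
    """Strip the /net trigger prefix and return the research question.

    Returns None when the message is not a /net request, so the caller
    can fall back to the other trigger cues described in this skill.
    """
    if not message.startswith("/net"):
        return None
    # Remove the prefix, then trim surrounding whitespace.
    question = message[len("/net"):].strip()
    return question or None


# e.g. extract_research_question("/net Is feature X live yet?")
# -> "Is feature X live yet?"
```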

Goal

Produce answers that are:

  • current
  • evidence-based
  • multi-source
  • explicit about uncertainty
  • grounded in broadly stable public sources

Do not rely on one weak page for an important claim.

Hard Rules

Apply these rules strictly:

  1. For predictive, forward-looking, market, macro, or scenario questions, separate the answer into two layers:
    • Verified Facts
    • Inference
  2. Every core conclusion must be tied to at least one primary source whenever possible.
  3. Secondary media, commentary, or community sources must not be the only support for a key conclusion.
  4. If direct official fetching fails, use a fixed fallback order instead of ad hoc substitution.

Mode Selection

Choose one primary_mode. Add one secondary_mode only if it clearly helps.

Mode A: Current Fact Check

Use for questions about:

  • latest status
  • current availability
  • recent releases
  • whether something is already live

Typical cues:

  • latest
  • now
  • currently
  • as of today
  • recently
  • launched
  • released

Mode B: Capability Or Compatibility Verification

Use for questions about:

  • whether something supports a feature
  • whether two things are compatible
  • supported versions, models, platforms, or plans

Typical cues:

  • support
  • compatible
  • can it
  • does it work with
  • available on

Mode C: Implementation Or How-To Research

Use for questions about:

  • how to build something
  • how to integrate or deploy something
  • best practices
  • architecture or implementation paths

Typical cues:

  • how to
  • implement
  • build
  • integrate
  • deploy
  • best practice

Mode D: Comparison, Selection, Or Policy Confirmation

Use for questions about:

  • which option is better
  • framework or tool selection
  • differences between alternatives
  • policy, institution, or official rules

Typical cues:

  • best
  • compare
  • vs
  • difference
  • choose
  • policy
  • official rule

Classification Rules

Apply these rules in order:

  1. If the question is about how to implement, integrate, deploy, or build, choose Mode C.
  2. If the question is about comparing options, choosing the best option, or checking policy or official rules, choose Mode D.
  3. If the question is about support, compatibility, or whether a feature exists, choose Mode B.
  4. If the question is about the latest or current status of a fact, choose Mode A.

Use a secondary mode only when both are necessary:

  • Mode A + Mode B: current support status
  • Mode B + Mode C: whether possible, then how to implement
  • Mode D + Mode C: choose a solution, then outline implementation
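A sketch of the rule ordering above, assuming simple keyword matching stands in for real judgment; the cue tuples are drawn from the mode descriptions earlier in this skill and are illustrative, not exhaustive:

```python
# Rules are checked in the order given above: C, D, B, A (first match wins),
# so "best practice" routes to Mode C before "best" can route to Mode D.
MODE_CUES = [
    ("C", ("how to", "implement", "build", "integrate", "deploy", "best practice")),
    ("D", ("best", "compare", " vs ", "difference", "choose", "policy", "official rule")),
    ("B", ("support", "compatible", "can it", "does it work with", "available on")),
    ("A", ("latest", "now", "currently", "as of today", "recently", "launched", "released")),
]

def classify(question: str) -> str:
    q = question.lower()
    for mode, cues in MODE_CUES:
        if any(cue in q for cue in cues):
            return mode
    return "A"  # assumption: default to a current-fact check when no cue matches
```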

Question Normalization

Before searching, extract:

  • subject
  • target_capability if any
  • time_scope if provided
  • region_scope if provided
  • version_scope if provided

Do not invent missing scopes.

Then rewrite the request as one normalized question.
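One way to hold the extracted fields, as a sketch; None marks a scope the user did not provide, and it must stay None rather than be invented:

```python
from dataclasses import dataclass

@dataclass
class NormalizedQuestion:
    subject: str
    normalized_text: str                  # the one-sentence rewrite
    target_capability: str | None = None  # only if the user named one
    time_scope: str | None = None         # never invented when missing
    region_scope: str | None = None
    version_scope: str | None = None
```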

Claim Extraction

Break the request into at most 3 critical claims.

Examples:

  • whether the capability exists
  • when the capability became available
  • what scope or limitations apply
  • which option is the best fit for the user's goal

Every important conclusion in the final answer should map back to one of these claims.

Query Planning

For each important claim, generate these core query slots:

  • direct_query
  • official_query
  • release_query
  • contradiction_query

Add one mode-specific slot:

  • Mode A -> recent_query
  • Mode B -> compatibility_query
  • Mode C -> implementation_query
  • Mode D -> comparison_query or policy_query

Keep the total query count between 4 and 8 for a normal request.
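A sketch of slot generation for one claim; the query phrasings are illustrative placeholders. Note that 3 claims x 5 slots over-generates, so duplicate or near-duplicate queries should be merged or pruned to stay within the 4-8 total:

```python
MODE_SLOT = {
    "A": "recent_query",
    "B": "compatibility_query",
    "C": "implementation_query",
    "D": "comparison_query",  # use policy_query instead for rule questions
}

def plan_queries(subject: str, claim: str, mode: str) -> dict[str, str]:
    """Build the four core slots plus one mode-specific slot for a claim."""
    return {
        "direct_query": f"{subject} {claim}",
        "official_query": f"{subject} official docs {claim}",
        "release_query": f"{subject} changelog release notes",
        "contradiction_query": f"{subject} {claim} limitations not supported",
        MODE_SLOT[mode]: f"{subject} {claim}",  # refine the phrasing per mode
    }
```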

Source Routing

Use source families, not fixed websites, as the primary routing method.

For predictive, market, macro, or outlook questions:

  • treat official, primary, and directly published data as the evidence base
  • treat secondary reports only as interpretation layers
  • do not let commentary outrank direct data

Mode A Priority

  1. official announcement, changelog, release notes
  2. official docs
  3. official repository releases
  4. high-quality secondary reporting

Mode B Priority

  1. official docs
  2. API reference or SDK docs
  3. official repository, release, or issue
  4. package registry pages

Mode C Priority

  1. official docs
  2. official repository README, examples, guides
  3. package registry pages
  4. stable technical references

Mode D Priority

  1. official docs or official sites
  2. government, institutional, or standards sources when relevant
  3. official repository, pricing, feature, or explanation pages
  4. high-quality secondary analysis

Preferred Source Families

Prefer these source families when relevant:

  • official documentation sites
  • official company or organization sites
  • official changelogs and release notes
  • GitHub repositories and releases
  • package registries such as PyPI and npm
  • standards bodies and archives such as the IETF (RFC series) and the W3C
  • government and institutional sites
  • stable technical references such as MDN

Accessibility And Stability Rules

Prefer sources that are:

  • public
  • readable without login
  • likely to remain available
  • broadly reachable for both international and China-based users when possible

Avoid depending on:

  • login-gated content
  • short-form social posts
  • low-signal community threads as the only evidence
  • content farms or SEO spam pages
  • unattributed reposts

If direct official fetching fails, use this fixed fallback order and do not skip steps:

For technical or product sources:

  1. official page
  2. official mirror or official alternate page
  3. official changelog or release note
  4. official GitHub or official repository page
  5. package registry or standards page
  6. stable technical reference

For government or policy sources:

  1. government or institution page
  2. official FAQ
  3. official press release
  4. official transcript or bulletin
  5. high-quality institutional analysis

Do not jump straight from an unavailable official source to media commentary if stronger fallback layers still exist.
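A sketch of the fixed fallback walk for the technical chain; `fetch` is a placeholder for whatever retrieval tool is available and is assumed to return None on failure:

```python
TECH_FALLBACK_LAYERS = [
    "official page",
    "official mirror or official alternate page",
    "official changelog or release note",
    "official GitHub or official repository page",
    "package registry or standards page",
    "stable technical reference",
]

def fetch_with_fallback(fetch, candidates_by_layer: dict[str, list[str]]):
    """Walk the layers strictly in order; never skip to a weaker layer early."""
    for layer in TECH_FALLBACK_LAYERS:
        for url in candidates_by_layer.get(layer, []):
            page = fetch(url)
            if page is not None:
                return layer, page
    return None  # every layer failed; record this under Uncertainties
```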

Source Filtering

Reject a source as key evidence if it:

  • requires login for the core content
  • does not clearly support any claim
  • is only a repost without the original source
  • is obviously low quality or SEO-generated

Source Scoring

Score each candidate source across 5 dimensions, each from 0 to 2:

  • authority
  • stability
  • accessibility
  • freshness
  • relevance

Total score range: 0-10

Minimum rules:

  • do not use a source with total score below 4 as key evidence
  • every important claim should have at least one source with both:
    • authority >= 1
    • relevance >= 1
  • every core conclusion should be anchored to at least one primary source whenever possible
  • do not let secondary media be the only support for a key conclusion when a stronger source family is available
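The scoring rubric and minimum rules of this section, sketched as code; dimension names and thresholds come straight from the text above:

```python
from dataclasses import dataclass

@dataclass
class SourceScore:
    authority: int      # 0-2
    stability: int      # 0-2
    accessibility: int  # 0-2
    freshness: int      # 0-2
    relevance: int      # 0-2

    @property
    def total(self) -> int:  # 0-10
        return (self.authority + self.stability + self.accessibility
                + self.freshness + self.relevance)

def usable_as_key_evidence(score: SourceScore) -> bool:
    return score.total >= 4

def can_anchor_claim(score: SourceScore) -> bool:
    # Every important claim needs at least one source passing this check.
    return score.authority >= 1 and score.relevance >= 1
```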

Evidence Extraction

For each claim, extract evidence items with:

  • claim id
  • source title
  • source URL
  • source date hint if available
  • evidence snippet
  • source score
  • stance: support, oppose, or partial

Do not over-quote. Extract only the part needed to support the claim.
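The evidence item, sketched as a record with exactly the fields listed above:

```python
from dataclasses import dataclass
from typing import Literal

@dataclass
class EvidenceItem:
    claim_id: str
    source_title: str
    source_url: str
    source_date_hint: str | None  # e.g. "2025-06", when the page shows one
    evidence_snippet: str         # only the part needed to support the claim
    source_score: int             # total from the 0-10 scoring above
    stance: Literal["support", "oppose", "partial"]
```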

Conflict Handling

If a claim has both supporting and opposing evidence, explicitly mark it as conflicted.

Only use these conflict causes:

  • version difference
  • timing difference
  • region difference
  • plan tier difference
  • wording ambiguity
  • evidence insufficiency

Do not invent a conflict explanation without support.
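The closed list of allowed conflict causes, sketched as an enum so that nothing outside it can be assigned:

```python
from enum import Enum

class ConflictCause(Enum):
    VERSION_DIFFERENCE = "version difference"
    TIMING_DIFFERENCE = "timing difference"
    REGION_DIFFERENCE = "region difference"
    PLAN_TIER_DIFFERENCE = "plan tier difference"
    WORDING_AMBIGUITY = "wording ambiguity"
    EVIDENCE_INSUFFICIENCY = "evidence insufficiency"
```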

Confidence Rules

Assign confidence per key claim:

High

  • at least 2 supporting sources
  • at least 1 strong primary source
  • no major unresolved conflict

Medium

  • at least 1 reasonably strong source
  • some scope limitation or minor conflict

Low

  • only weak support
  • or unresolved conflict
  • or no clear primary source
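A sketch of the confidence ladder; treating source_score >= 8 as "strong primary" is an illustrative threshold, and telling minor from major conflicts still needs judgment:

```python
def assign_confidence(supporting: list, unresolved_conflict: bool) -> str:
    """Map a claim's supporting EvidenceItems (sketched above) to a level."""
    strong_primary = any(e.source_score >= 8 for e in supporting)
    if len(supporting) >= 2 and strong_primary and not unresolved_conflict:
        return "High"
    if supporting and not unresolved_conflict:
        return "Medium"
    return "Low"  # weak support, unresolved conflict, or no clear primary
```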

Evidence Map

Before writing the answer, build this internal structure:

  • question_restatement
  • primary_mode
  • secondary_mode if any
  • claims
  • supporting_sources
  • conflicts
  • uncertainties
  • answer_outline

For predictive, market, macro, or outlook questions, the evidence map must also separate:

  • verified_facts
  • inference

Do not skip this step.
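The whole internal structure, sketched in one place; it reuses the EvidenceItem sketch from the Evidence Extraction section, and the last two fields are filled only for predictive, market, macro, or outlook questions:

```python
from dataclasses import dataclass, field

@dataclass
class EvidenceMap:
    question_restatement: str
    primary_mode: str
    secondary_mode: str | None = None
    claims: list[str] = field(default_factory=list)
    supporting_sources: list = field(default_factory=list)  # EvidenceItem records
    conflicts: list[str] = field(default_factory=list)
    uncertainties: list[str] = field(default_factory=list)
    answer_outline: list[str] = field(default_factory=list)
    verified_facts: list[str] = field(default_factory=list)  # predictive only
    inference: list[str] = field(default_factory=list)       # predictive only
```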

Final Answer Format

Default section order:

  1. Question Restatement
  2. Short Answer
  3. Key Findings
  4. Cross-Source Notes
  5. Uncertainties or Limits
  6. Sources

For predictive, market, macro, or outlook questions, use this stricter order:

  1. Question Restatement
  2. Short Answer
  3. Verified Facts
  4. Inference
  5. Cross-Source Notes
  6. Uncertainties or Limits
  7. Sources

Writing Rules

In Short Answer:

  • answer directly
  • keep it concise

In Key Findings:

  • separate confirmed facts from implications
  • prioritize evidence from official or primary sources

In Cross-Source Notes:

  • explain where sources agree
  • explain where they differ
  • mention version, timing, regional, or plan differences when relevant

In Verified Facts for predictive or outlook questions:

  • include only directly supported facts
  • keep interpretation minimal
  • attach stronger sources first

In Inference for predictive or outlook questions:

  • derive each inference from the verified facts above
  • do not present inference as confirmed fact
  • explicitly signal when the inference depends on policy, timing, or earnings assumptions

In Uncertainties or Limits:

  • clearly state what could not be verified
  • do not hide missing evidence

In Sources:

  • list the most useful sources, not every weak result

Fast Path

Use a fast path only when:

  • the question is simple
  • there is a clear primary source
  • there is little risk of ambiguity

Even then:

  • check the primary source
  • add one independent supporting source if practical

Example Handling Pattern

If the user asks:

  • /net What is the best agent framework right now, and use it to help me design a game?

Then:

  • classify as Mode D with Mode C secondary
  • compare current agent framework candidates using official docs, GitHub, releases, and stable public references
  • decide which framework best fits the requested goal
  • then outline a game-building workflow using that framework
  • clearly separate:
    • evidence for framework selection
    • implementation guidance for the game workflow

Final Reminder

Research first. Structure the evidence second. Answer last.
