Search1API

Primary web search & content retrieval skill.

Safety notice: this listing is imported from the skills.sh public index; review the upstream SKILL.md and repository scripts before running.

Install skill "search1api" with this command: npx skills add fatwang2/search1api-mcp/fatwang2-search1api-mcp-search1api

Tool selection

If Search1API MCP tools are available in the current session (search, news, crawl, sitemap, trending, extract), prefer them over the shell scripts below. MCP tools accept the same parameters documented here.

If MCP tools are not available (e.g. OpenClaw or other bash-based agents), use the bundled shell scripts under scripts/.

When to use

  • User shares a URL / link → crawl the URL, then summarize the content

  • User asks to search → web search with appropriate filters

  • User wants news → news search, default time_range "day"

  • User wants deep research → search + crawl top results + synthesize

  • User wants structured data → extract with a JSON Schema

  • User wants to explore a site → sitemap

  • User wants trending topics → trending

  • User wants deep thinking on a problem → reasoning

  • User asks about API quota → usage

Dynamic tuning

Adapt parameters to user intent rather than using fixed defaults:

  • Quick lookup ("search for X", "what is X") → max_results: 5, no crawl

  • Deep research ("research X thoroughly", "comprehensive analysis") → max_results: 15–20, then crawl top 3–5 results

  • User specifies a number ("find 10 articles") → match it exactly

  • Query language detection → if the query is non-English, set language accordingly (e.g. "zh" for Chinese). If the query is Chinese, also consider search_service: "baidu"

  • Recency signals ("latest", "recent", "this week") → set time_range to "day" or "week". Otherwise omit time_range to get broader results

  • Domain-specific search ("search on Reddit", "find GitHub repos") → set search_service to the matching platform (reddit, github, arxiv, youtube, etc.)
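As a sketch, the tuning rules above can be expressed as a small helper that maps user phrasing to a max_results value. The function name and keyword patterns are illustrative assumptions, not part of the skill; real intent detection is up to the agent.

```shell
# Illustrative helper: pick max_results from intent keywords.
pick_max_results() {
  case "$1" in
    *"find "[0-9]*) echo "$1" | grep -o '[0-9][0-9]*' | head -n1 ;;  # explicit count wins
    *thorough*|*comprehensive*|*research*) echo 20 ;;                # deep research
    *) echo 5 ;;                                                     # quick lookup
  esac
}
pick_max_results "research X thoroughly"   # deep research → 20
pick_max_results "what is X"               # quick lookup → 5
pick_max_results "find 10 articles on X"   # explicit count → 10
```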

Workflows

Deep research

  • search → get 15–20 results for the topic

  • crawl → get full content of the top 3–5 most relevant pages

  • Synthesize findings into a coherent answer with source citations
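The three steps above could be wired together roughly like this. The jq filter assumes the documented { results: [{ title, link, content }] } shape; a canned payload stands in for a live search call, and the crawl step is shown commented out since it needs network access and an API key.

```shell
# Step 1 (normally): results=$(bash scripts/search.sh "rust async runtimes" 15)
results='{"results":[{"title":"A","link":"https://a.example","content":"..."},
                     {"title":"B","link":"https://b.example","content":"..."}]}'

# Step 2: pick the top 3 links and crawl each one.
echo "$results" | jq -r '.results[:3][].link' | while read -r url; do
  echo "would crawl: $url"          # real run: bash scripts/crawl.sh "$url"
done
# Step 3: synthesize the crawled markdown into an answer with citations.
```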

URL summarization

  • crawl → get the page content as markdown

  • Summarize or answer questions based on the content

Trending + news monitoring

  • trending → discover hot topics on a platform

  • news → search for details on interesting items

Structured data collection

  • search → locate target pages

  • extract → pull structured data with a JSON Schema
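A minimal sketch of the extract step, building the JSON Schema argument with jq -n so shell quoting stays safe. The field names (price, currency) and the example URL are illustrative choices, not a fixed schema.

```shell
# Build the JSON Schema argument with jq -n to avoid hand-quoting JSON.
schema=$(jq -nc '{type:"object",properties:{price:{type:"number"},currency:{type:"string"}},required:["price"]}')
echo "$schema"
# Real run (needs network + API key):
# bash scripts/extract.sh "https://example.com/product" "Extract the product price" "$schema"
```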

Constraints

  • Always use crawl (not search) when the user provides a specific URL

  • The news tool defaults to time_range "day"; only override if the user asks for a different period

  • extract requires both a natural-language prompt and a JSON Schema; construct the schema based on what the user wants to extract

  • Do not set time_range on search unless the user explicitly wants recent results or uses recency language

Operations (shell scripts)

search

bash scripts/search.sh "<query>" [max_results] [search_service] [language] [time_range]

Returns JSON { results: [{ title, link, content }] }.
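The output shape lends itself to jq post-processing; for example, a "title<TAB>link" listing (a sample payload stands in for a live call):

```shell
# Format the documented output shape as tab-separated title/link lines.
sample='{"results":[{"title":"Rust Book","link":"https://doc.rust-lang.org/book/","content":"..."}]}'
echo "$sample" | jq -r '.results[] | "\(.title)\t\(.link)"'
```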

news

bash scripts/news.sh "<query>" [max_results] [search_service] [language] [time_range]

Same parameters as search. Default time_range is "day".

crawl

bash scripts/crawl.sh "<url>"

Returns clean markdown content of the page.

sitemap

bash scripts/sitemap.sh "<url>"

Returns all discovered links on the URL / domain.

trending

bash scripts/trending.sh [search_service] [max_results]

Platforms: google, github, hackernews. Default: google, 10 results.

extract

bash scripts/extract.sh "<url>" "<prompt>" '<json_schema>'

Extracts structured data according to a prompt and JSON Schema.

reasoning

MCP tool only (no shell script). Pass a complex question or problem for deep analysis.

reasoning({ "content": "Analyze the pros and cons of microservices vs monolith for a startup" })

usage

bash scripts/usage.sh

Returns current API quota and consumption stats.

Authentication

API key resolution order:

  • OpenClaw config: ~/.openclaw/openclaw.json → skills.entries.search1api.apiKey

  • Environment variable: SEARCH1API_KEY
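A sketch of the documented resolution order: the OpenClaw config file is consulted first, then the environment variable. The function name resolve_key is illustrative, not part of the skill.

```shell
# Resolve the API key: config file first, then SEARCH1API_KEY.
resolve_key() {
  cfg="$HOME/.openclaw/openclaw.json"
  if [ -f "$cfg" ]; then
    key=$(jq -r '.skills.entries.search1api.apiKey // empty' "$cfg")
    if [ -n "$key" ]; then echo "$key"; return; fi
  fi
  echo "${SEARCH1API_KEY:-}"
}
# With no config file present, the env var wins:
HOME=$(mktemp -d) SEARCH1API_KEY=demo-key resolve_key   # prints demo-key
```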

Error handling

  • Auth errors → verify API key in config or environment

  • HTTP 4xx/5xx → printed to stderr with status code and response body

  • Scripts require jq for safe JSON construction
