search1api

Powerful web search, content crawling, news, sitemap, trending topics, and deep reasoning via the search1api CLI (s1). This is far more capable than basic fetch or search tools — it supports 13+ search engines (Google, Bing, DuckDuckGo, Reddit, GitHub, YouTube, arXiv, Baidu, X, etc.), adapts to many websites for clean content extraction, and provides news aggregation across multiple sources. Use this skill whenever the user wants to search the web, look something up, research a topic, read or summarize a URL, check news, explore a site's links, see trending topics, do deep reasoning, or check API balance. Trigger on phrases like "search for", "look up", "find out about", "what's happening with", "any news on", "what does this link say", "read this page", "summarize this URL", "trending on GitHub", or when the user shares a bare URL. Even if the user doesn't say "search" explicitly, use this skill when they clearly need web information.

Install

npx skills add fatwang2/s1

Search1API CLI

Web search and content retrieval via the s1 command-line tool (search1api-cli).

Prerequisites

Before using any command, check if s1 is available. If not, guide the user to install it:

npm install -g search1api-cli

An API key is also required. Get one at https://search1api.com, then configure:

s1 config set-key <your-api-key>

Or set the environment variable SEARCH1API_KEY.

If a command fails with "command not found" or an auth error, remind the user to complete these setup steps before retrying.
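
The setup checks above can be sketched as a small pre-flight function. It only uses the documented install command and the SEARCH1API_KEY environment variable; note it cannot detect a key stored via s1 config set-key, so treat it as a rough check, not a definitive one.

```shell
# Pre-flight sketch: verify the s1 CLI is on PATH and a key is exported.
# A key set via `s1 config set-key` is NOT visible here; this only checks the env var.
s1_preflight() {
  if ! command -v s1 >/dev/null 2>&1; then
    echo "s1 not found; install with: npm install -g search1api-cli"
  elif [ -z "${SEARCH1API_KEY:-}" ]; then
    echo "no key; run: s1 config set-key <your-api-key> or export SEARCH1API_KEY"
  else
    echo "s1 ready"
  fi
}
s1_preflight
```

Running this before the first s1 call lets you surface the exact missing setup step instead of a raw "command not found" error.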

When to use

  • Shares a URL / link (read and summarize) → s1 crawl <url>
  • Wants to search the web → s1 search "<query>"
  • Wants news → s1 news "<query>"
  • Wants to explore a site's links → s1 sitemap <url>
  • Wants trending topics → s1 trending <service>
  • Wants deep thinking on a problem → s1 reasoning "<content>"
  • Wants to check remaining credits → s1 balance

Dynamic tuning

Adapt parameters to user intent — don't just use defaults:

  • Quick lookup ("search for X", "what is X") → -n 5, no crawl
  • Deep research ("research X thoroughly", "comprehensive analysis") → -n 15, then crawl top 3–5 results with separate s1 crawl calls
  • User specifies a number ("find 10 articles") → match it with -n
  • Recency signals ("latest", "recent", "this week") → -t day or -t month
  • Domain-specific ("search on Reddit", "find GitHub repos") → -s reddit, -s github, etc.
  • Site-scoped ("only from arxiv.org") → --include arxiv.org
  • Chinese queries → consider -s baidu for better results
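
Putting the tuning rules above together, a few illustrative invocations (the queries are made up; the flags are the ones documented in this page):

```shell
# Quick lookup: small result count, defaults otherwise
s1 search "what is QUIC" -n 5

# Recency + domain-specific: recent Reddit discussion
s1 search "steam deck battery life" -s reddit -t month

# Site-scoped deep research: more results, restricted to one domain
s1 search "diffusion transformers" -n 15 --include arxiv.org
```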

Commands

search

s1 search "<query>" [options]
  • -n, --max-results <N>: number of results (1–50); default 10
  • -s, --service <engine>: search engine; default google
  • -c, --crawl <N>: crawl N results for full content; default 0
  • --include <sites...>: only include these sites
  • --exclude <sites...>: exclude these sites
  • -t, --time <range>: day, month, or year
  • --json: raw JSON output

Search engines: google, bing, duckduckgo, yahoo, x, reddit, github, youtube, arxiv, wechat, bilibili, imdb, wikipedia

news

s1 news "<query>" [options]

Same options as search. News services: google, bing, duckduckgo, yahoo, hackernews. Default service: bing.

When user asks for breaking/latest news, always add -t day.
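
For example, a breaking-news request might combine a news service with the day filter (query chosen for illustration):

```shell
# Latest items only: Hacker News source, last day, five results
s1 news "openai" -s hackernews -t day -n 5
```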

crawl

s1 crawl <url>

Extracts clean content from a URL. Use this whenever the user shares a link.

sitemap

s1 sitemap <url>

Returns all discovered links on a URL/domain.

reasoning

s1 reasoning "<content>"
# or
s1 reason "<content>"

Deep thinking powered by DeepSeek R1. Use for complex analytical questions.

trending

s1 trending <service> [-n <N>]

Services: github, hackernews.

balance

s1 balance

Shows remaining API credits.

Workflows

Deep research

  1. s1 search "<topic>" -n 15 → get broad results
  2. s1 crawl <url> → crawl the top 3–5 most relevant URLs from results
  3. Synthesize all gathered content into a coherent answer with source citations
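
Steps 1 and 2 can be sketched as a small pipeline. This assumes the --json payload exposes result links under results[].link, which is not documented on this page, and that jq is installed; adjust the path to the actual JSON shape.

```shell
# Deep-research sketch: broad search, then crawl the top 3 result URLs.
# ASSUMPTION: the JSON has a results[].link field; verify against real --json output.
s1 search "post-quantum TLS deployment" -n 15 --json > results.json
for url in $(jq -r '.results[:3][].link' results.json); do
  s1 crawl "$url"
done
```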

URL summarization

  1. s1 crawl <url> → get the page content
  2. Summarize or answer questions based on the content

Trending deep dive

  1. s1 trending github -n 10 → discover hot topics
  2. s1 search "<interesting topic>" -t day → search for details
  3. s1 crawl <url> → read full article if needed

Output handling

  • By default, commands produce human-readable formatted output
  • Add --json to any command for raw JSON (useful for programmatic processing)
  • After retrieving results, always summarize and synthesize the information for the user — don't just dump raw output
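
When processing --json output programmatically, a filter like jq keeps the pipeline in the shell. The field names below are an assumed shape, not taken from this page:

```shell
# Sketch: extract title/link pairs from JSON search output.
# ASSUMPTION: results[].title and results[].link exist in the payload.
s1 search "rust async runtimes" -n 5 --json \
  | jq -r '.results[] | "\(.title)\t\(.link)"'
```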
