nanoserp

Free web search and page scraping via DuckDuckGo. Use when the agent needs to search the web for information, fetch and read webpage contents, or gather links from a page. Triggers on tasks requiring web search, web scraping, URL fetching, or internet research. No API keys or accounts needed.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn the skill:

Install skill "nanoserp" with this command: npx skills add fkodom/nanoserp/fkodom-nanoserp-nanoserp

nanoserp

A zero-config CLI and Python library for web search and page scraping via DuckDuckGo. No API keys required.

Setup

Install with pip (requires Python 3.11+):

pip install nanoserp

Or run directly without installing via uvx:

uvx nanoserp search "query"

CLI Usage

The CLI is invoked with nanoserp (or uvx nanoserp if not installed).

Search

nanoserp search "query"
nanoserp search "query" --date-filter w      # d=day, w=week, m=month, y=year
nanoserp search "query" --offset 10          # pagination

Output: numbered list of results with title, URL, date (if available), and snippet.

Scrape

nanoserp scrape "https://example.com"

Output: page content as markdown, followed by a list of extracted links.

Python Library Usage

from nanoserp import search, scrape, DateFilter

# Search
response = search("python web scraping")
for r in response.results:
    print(r.title, r.url, r.snippet)

# Search with date filter and pagination
page1 = search("query", date_filter=DateFilter.WEEK)
page2 = search("query", offset=len(page1.results), vqd=page1.vqd)

# Scrape
page = scrape("https://example.com")
print(page.markdown)
for link in page.links:
    print(link.text, link.url)

Function Signatures

search(query: str, *, offset: int = 0, date_filter: DateFilter | None = None,
       vqd: str | None = None, timeout: float = 10.0) -> SearchResponse

scrape(url: str, *, timeout: float = 10.0) -> ScrapeResponse

Models

  • SearchResponse: query, results: list[SearchResult], vqd: str | None
  • SearchResult: title, url, snippet, date: datetime | None
  • ScrapeResponse: url, markdown, links: list[ScrapeLink]
  • ScrapeLink: text, url
  • DateFilter: DAY, WEEK, MONTH, YEAR

Error Handling

All exceptions inherit from NanoserpError (with .message attribute). The most common is RateLimitError (HTTP 429) when DuckDuckGo throttles requests.

from nanoserp.exceptions import RateLimitError, NanoserpError

try:
    result = search("test")
except RateLimitError:
    pass  # back off and retry
except NanoserpError as e:
    print(e.message)
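The back-off-and-retry pattern can be sketched as a small generic helper. The callable and exception type are injected to keep the sketch self-contained; in practice you would pass `lambda: search("query")` and nanoserp's RateLimitError. The retry count and delay schedule are illustrative choices, not library features.

```python
import time


def with_backoff(call, *, retry_on, max_attempts=3, base_delay=2.0, sleep=time.sleep):
    """Invoke call(), retrying on retry_on exceptions with exponential backoff.

    Waits base_delay, then 2*base_delay, then 4*base_delay, ... between
    attempts, and re-raises once max_attempts is exhausted.
    """
    for attempt in range(max_attempts):
        try:
            return call()
        except retry_on:
            if attempt == max_attempts - 1:
                raise  # out of retries; surface the error to the caller
            sleep(base_delay * 2 ** attempt)
```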

When to Use

  • Web search: finding documentation, current information, or researching a topic
  • Page scraping: reading webpage content, extracting links, fetching reference material
  • Combining both: search to find relevant URLs, then scrape to read their content
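The search-then-scrape flow can be sketched as a small pipeline. The two functions are injected here so the sketch is self-contained; in practice they would be nanoserp's `search` and `scrape`, and `max_pages` is a hypothetical parameter for this sketch.

```python
def research(query, search_fn, scrape_fn, max_pages=3):
    """Search for a query, then scrape the top hits into markdown.

    Pass nanoserp's `search` and `scrape` as search_fn/scrape_fn; returns
    a list of (url, markdown) pairs for the first max_pages results.
    """
    response = search_fn(query)
    pages = []
    for result in response.results[:max_pages]:
        page = scrape_fn(result.url)
        pages.append((result.url, page.markdown))
    return pages
```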

Tips

  • Pass vqd from a previous SearchResponse when paginating to avoid redundant token requests.
  • DuckDuckGo may rate-limit heavy use. Catch RateLimitError and back off.
  • Scrape returns markdown, which is compact and easy to parse or summarize.
  • The --date-filter flag is useful for finding recent information.
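The vqd pagination tip can be turned into a helper that walks pages until results run out. `search_fn` stands in for nanoserp's `search` so the sketch stays self-contained, and `max_pages` is an illustrative safety limit.

```python
def iter_results(query, search_fn, max_pages=5):
    """Yield search results page by page, reusing vqd from the prior response."""
    offset, vqd = 0, None
    for _ in range(max_pages):
        response = search_fn(query, offset=offset, vqd=vqd)
        if not response.results:
            break  # no more pages
        yield from response.results
        offset += len(response.results)
        vqd = response.vqd  # reuse the token to skip a redundant request
```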

