browse-website

Turn any website into a filesystem. Crawls sites automatically and mounts pages as markdown files you can grep, diff, cat, and explore with standard Unix commands — over SSH or HTTP. Use this when you need to read documentation, research a website, track changes across crawls, or search any public webpage.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install the skill: npx skills add bigmind/browse-website

Browse Website

Use OpenObj to access any website's content. Pages are automatically crawled, converted to markdown, and mounted as a virtual filesystem you can explore with standard Unix commands.

Network Access

All commands in this skill require network access to openobj.com, so declare required_permissions: ["full_network"] for every SSH and curl invocation.

How to Use

Preferred: SSH exec mode (works in terminals with network access):

# required_permissions: ["full_network"]
ssh {domain}@openobj.com "{command}"

Fallback: HTTP API via curl (works in sandboxed environments):

# required_permissions: ["full_network"]
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "{domain}", "command": "{command}"}'

Important: If SSH fails due to sandbox restrictions, fall back to the curl HTTP API. Both methods return the same results.
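
If a single entry point is convenient, the two transports can be wrapped in one shell helper. This is a minimal sketch (the openobj_exec name is ours, not part of the skill); note that a non-zero SSH exit can also mean the remote command itself failed, so tune the fallback condition to your environment:

# required_permissions: ["full_network"]
openobj_exec() {
  local site="$1" command="$2"
  # Try SSH exec mode first.
  if ssh "${site}@openobj.com" "$command" 2>/dev/null; then
    return 0
  fi
  # Fall back to the HTTP API. Naive JSON quoting: this breaks if
  # $command contains double quotes; escape those cases before sending.
  curl -s -X POST https://openobj.com/exec \
    -H 'Content-Type: application/json' \
    -d "{\"site\": \"${site}\", \"command\": \"${command}\"}"
}

# Usage: openobj_exec docs.stripe.com "find /site -type f"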

Available Commands

Command                      Description
find /site -type f           List all indexed pages
cat {path}                   Read a page's markdown content
grep -rl '{term}' /site      Find pages containing a term
grep -r '{term}' /site       Search with matching lines
ls {path}                    List files in a directory
head -n 20 {path}            Read the first 20 lines of a file
wc -l {path}                 Count lines in a file
git log --oneline            View crawl history
git diff HEAD~1              See what changed in the last crawl
git show {hash}              View a specific crawl's changes
openobj rediscover           Force a fresh re-crawl

Examples

Via SSH

# required_permissions: ["full_network"]
ssh docs.stripe.com@openobj.com "find /site -type f"
ssh docs.stripe.com@openobj.com "grep -rl 'webhook' /site"
ssh docs.stripe.com@openobj.com "cat /site/docs/webhooks.md"

# Change tracking
ssh docs.stripe.com@openobj.com "cd /site && git log --oneline"
ssh docs.stripe.com@openobj.com "cd /site && git diff HEAD~1"

# Force re-crawl and see what changed
ssh docs.stripe.com@openobj.com "openobj rediscover && cd /site && git diff HEAD~1"

Via HTTP API (curl)

# required_permissions: ["full_network"]
# List all pages
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "find /site -type f"}'

# Search for a term
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "grep -rl webhook /site"}'

# Read a page
curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "cat /site/docs/webhooks.md"}'

Workflow

  1. Discover — Run find /site -type f to see all available pages
  2. Search — Use grep -rl '{keyword}' /site to find relevant pages
  3. Read — Use cat {path} to read the full content of a page
  4. Refine — Use grep -r '{term}' {path} to search within specific files
  5. Track changes — Use git log and git diff to see what changed across crawls
  6. Re-crawl — Use openobj rediscover to force a fresh crawl and update pages
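
Strung together over SSH, the full loop looks like this (a sketch; the domain, search terms, and page path are placeholders drawn from the examples above):

# required_permissions: ["full_network"]
# 1. Discover all indexed pages
ssh docs.stripe.com@openobj.com "find /site -type f"
# 2. Search for pages that mention the topic
ssh docs.stripe.com@openobj.com "grep -rl 'webhook' /site"
# 3. Read one match in full
ssh docs.stripe.com@openobj.com "cat /site/docs/webhooks.md"
# 4. Refine: search within that one file
ssh docs.stripe.com@openobj.com "grep -r 'signature' /site/docs/webhooks.md"
# 5. Track changes across crawls
ssh docs.stripe.com@openobj.com "cd /site && git log --oneline && git diff HEAD~1"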

Behavior

  • First access to a domain triggers an automatic crawl (may take 10-30 seconds)
  • Subsequent accesses use the cached version (refreshed every 24 hours)
  • Use openobj rediscover to force a fresh crawl before the 24-hour window elapses
  • Pages are converted from HTML to markdown automatically
  • Up to 200 pages per site are indexed
  • The virtual filesystem mirrors the site's URL structure
  • Each crawl is tracked as a git commit for change diffing
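
Because each crawl is a git commit, a diff can be limited to a single page. A sketch (path-limited git diff is standard git; whether the mounted virtual filesystem supports it is an assumption here):

# required_permissions: ["full_network"]
# Diff one page across the last two crawls
# (assumes the mounted repo accepts path-limited diffs)
ssh docs.stripe.com@openobj.com "cd /site && git diff HEAD~1 -- docs/webhooks.md"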

Credits

  • Crawling a new site or running openobj rediscover costs 1 credit per page
  • Reading cached content (cat, grep, find, ls, git) is always free
  • Free accounts get 100 one-time credits
  • If you get a credit limit error, do not retry — inform the user:
    • To check credits: ssh {any-domain}@openobj.com "openobj credits"
    • To upgrade: tell the user to run ssh auth@openobj.com in their terminal
  • Prefer reading cached sites over re-crawling to conserve credits

Response Format

The HTTP API returns JSON:

{
  "stdout": "...",
  "stderr": "...",
  "exitCode": 0
}

Use the stdout field for the command output. A non-zero exitCode indicates an error.
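
When scripting against the HTTP API, extract stdout and check exitCode before trusting the output. A minimal sketch, assuming jq is available locally (jq is not required by the API itself):

# required_permissions: ["full_network"]
response=$(curl -s -X POST https://openobj.com/exec \
  -H 'Content-Type: application/json' \
  -d '{"site": "docs.stripe.com", "command": "find /site -type f"}')

# Surface the remote error and stop on a non-zero exit code.
if [ "$(echo "$response" | jq -r '.exitCode')" -ne 0 ]; then
  echo "$response" | jq -r '.stderr' >&2
  exit 1
fi
echo "$response" | jq -r '.stdout'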
