xcrawl-crawl

Use this skill for XCrawl crawl tasks, including bulk site crawling, crawler rule design, async status polling, and delivery of crawl output for downstream scrape and search workflows.

Install the skill with: npx skills add wykings/xcrawl-crawl

Overview

This skill orchestrates full-site or scoped crawling with XCrawl Crawl APIs. Default behavior is raw passthrough: return upstream API response bodies as-is.

Required Local Config

Before using this skill, the user must create a local config file and write XCRAWL_API_KEY into it.

Path: ~/.xcrawl/config.json

{
  "XCRAWL_API_KEY": "<your_api_key>"
}
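
For example, the file can be created from a shell (a convenience sketch; substitute your real key for the placeholder):

mkdir -p ~/.xcrawl
cat > ~/.xcrawl/config.json <<'EOF'
{
  "XCRAWL_API_KEY": "<your_api_key>"
}
EOF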

Read the API key from this local config file only; do not require global environment variables.

Credits and Account Setup

Using XCrawl APIs consumes credits. If the user does not have an account or available credits, guide them to register at https://dash.xcrawl.com/. After registration, they can activate the free 1000-credit plan before running requests.

Tool Permission Policy

Request runtime permissions for curl and node only. Do not request Python, shell helper scripts, or other runtime permissions.

API Surface

  • Start crawl: POST /v1/crawl
  • Read result: GET /v1/crawl/{crawl_id}
  • Base URL: https://run.xcrawl.com
  • Required header: Authorization: Bearer <XCRAWL_API_KEY>

Usage Examples

cURL (create + result)

# Read the API key from the local config file (no environment variables required)
API_KEY="$(node -e "const fs=require('fs');const p=process.env.HOME+'/.xcrawl/config.json';const k=JSON.parse(fs.readFileSync(p,'utf8')).XCRAWL_API_KEY||'';process.stdout.write(k)")"

# Start a bounded crawl (100 pages max, depth 2)
CREATE_RESP="$(curl -sS -X POST "https://run.xcrawl.com/v1/crawl" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_KEY}" \
  -d '{"url":"https://example.com","crawler":{"limit":100,"max_depth":2},"output":{"formats":["markdown","links"]}}')"

echo "$CREATE_RESP"

# Extract crawl_id from the create response
CRAWL_ID="$(node -e 'const s=process.argv[1];const j=JSON.parse(s);process.stdout.write(j.crawl_id||"")' "$CREATE_RESP")"

# Read the crawl result (repeat until status is completed or failed)
curl -sS -X GET "https://run.xcrawl.com/v1/crawl/${CRAWL_ID}" \
  -H "Authorization: Bearer ${API_KEY}"

Node

node -e '
// Node 18+ (global fetch). Reads the API key from the local config file.
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
// Bounded crawl: 300 pages, depth 3, docs paths only, Japanese locale
const body={url:"https://example.com",crawler:{limit:300,max_depth:3,include:["/docs/.*"],exclude:["/blog/.*"]},request:{locale:"ja-JP"},output:{formats:["markdown","links","json"]}};
fetch("https://run.xcrawl.com/v1/crawl",{
  method:"POST",
  headers:{"Content-Type":"application/json",Authorization:`Bearer ${apiKey}`},
  body:JSON.stringify(body)
}).then(async r=>{console.log(await r.text());}); // print the raw create response

Request Parameters

Request endpoint and headers

  • Endpoint: POST https://run.xcrawl.com/v1/crawl
  • Headers:
    • Content-Type: application/json
    • Authorization: Bearer <api_key>

Request body: top-level fields

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | Yes | - | Site entry URL |
| crawler | object | No | - | Crawler config |
| proxy | object | No | - | Proxy config |
| request | object | No | - | Request config |
| js_render | object | No | - | JS rendering config |
| output | object | No | - | Output config |
| webhook | object | No | - | Async callback config |

crawler

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| limit | integer | No | 100 | Max pages |
| include | string[] | No | - | Include only matching URLs (regex supported) |
| exclude | string[] | No | - | Exclude matching URLs (regex supported) |
| max_depth | integer | No | 3 | Max depth from entry URL |
| include_entire_domain | boolean | No | false | Crawl the full site instead of only subpaths |
| include_subdomains | boolean | No | false | Include subdomains |
| include_external_links | boolean | No | false | Include external links |
| sitemaps | boolean | No | true | Use the site sitemap |
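
For instance, a bounded crawl of a docs subtree might combine these fields (the values are illustrative, not defaults):

{
  "url": "https://example.com/docs",
  "crawler": {
    "limit": 200,
    "max_depth": 2,
    "include": ["/docs/.*"],
    "exclude": ["/docs/archive/.*"]
  }
}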

proxy

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| location | string | No | US | ISO 3166-1 alpha-2 country code, e.g. US / JP / SG |
| sticky_session | string | No | Auto-generated | Sticky session ID; requests with the same ID attempt to reuse the same proxy exit |

request

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| locale | string | No | en-US,en;q=0.9 | Affects Accept-Language |
| device | string | No | desktop | desktop / mobile; affects UA and viewport |
| cookies | object map | No | - | Cookie key/value pairs |
| headers | object map | No | - | Header key/value pairs |
| only_main_content | boolean | No | true | Return main content only |
| block_ads | boolean | No | true | Attempt to block ad resources |
| skip_tls_verification | boolean | No | true | Skip TLS verification |

js_render

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| enabled | boolean | No | true | Enable browser rendering |
| wait_until | string | No | load | load / domcontentloaded / networkidle |
| viewport.width | integer | No | - | Viewport width (desktop 1920, mobile 402) |
| viewport.height | integer | No | - | Viewport height (desktop 1080, mobile 874) |
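
As an illustration, a mobile crawl that waits for network idle could set the following (the values are examples; omitted viewport fields fall back to the device defaults above):

{
  "request": { "device": "mobile", "locale": "ja-JP" },
  "js_render": { "enabled": true, "wait_until": "networkidle" }
}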

output

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| formats | string[] | No | ["markdown"] | Output formats |
| screenshot | string | No | viewport | full_page / viewport (applies only when formats includes screenshot) |
| json.prompt | string | No | - | Extraction prompt |
| json.json_schema | object | No | - | JSON Schema for extraction |

output.formats enum:

  • html
  • raw_html
  • markdown
  • links
  • summary
  • screenshot
  • json

webhook

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| url | string | No | - | Callback URL |
| headers | object map | No | - | Custom callback headers |
| events | string[] | No | ["started","completed","failed"] | Events: started / completed / failed |
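
A minimal webhook config might look like this (the callback URL and header name are placeholders, not values the API prescribes):

{
  "webhook": {
    "url": "https://example.com/hooks/xcrawl",
    "headers": { "X-Callback-Token": "<your_shared_secret>" },
    "events": ["completed", "failed"]
  }
}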

Response Parameters

Create response (POST /v1/crawl)

| Field | Type | Description |
|---|---|---|
| crawl_id | string | Task ID |
| endpoint | string | Always "crawl" |
| version | string | Version |
| status | string | Always "pending" |

Result response (GET /v1/crawl/{crawl_id})

| Field | Type | Description |
|---|---|---|
| crawl_id | string | Task ID |
| endpoint | string | Always "crawl" |
| version | string | Version |
| status | string | pending / crawling / completed / failed |
| url | string | Entry URL |
| data | object[] | Per-page result array |
| started_at | string | Start time (ISO 8601) |
| ended_at | string | End time (ISO 8601) |
| total_credits_used | integer | Total credits used |

data[] fields follow output.formats:

  • html, raw_html, markdown, links, summary, screenshot, json
  • metadata (page metadata)
  • traffic_bytes
  • credits_used
  • credits_detail

Workflow

  1. Confirm the business objective and crawl boundary.
     • What content is required, what content must be excluded, and what the completion signal is.
  2. Draft a bounded crawl request.
     • Prefer explicit limits and path constraints.
  3. Start the crawl and capture task metadata.
     • Record crawl_id, initial status, and the request payload.
  4. Poll GET /v1/crawl/{crawl_id} until a terminal state (see the polling sketch after this list).
     • Track pending, crawling, completed, or failed.
  5. Return the raw create/result responses.
     • Do not synthesize derived summaries unless explicitly requested.
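
A minimal polling sketch in Node for step 4, assuming the local config file above; the crawl_id argument and the 5-second interval are illustrative choices, not API requirements:

node -e '
const fs=require("fs");
const apiKey=JSON.parse(fs.readFileSync(process.env.HOME+"/.xcrawl/config.json","utf8")).XCRAWL_API_KEY;
const crawlId=process.argv[1]; // crawl_id passed as the first argument
const poll=async()=>{
  // Read current task state; keep the body raw per the output contract
  const res=await fetch("https://run.xcrawl.com/v1/crawl/"+crawlId,{headers:{Authorization:`Bearer ${apiKey}`}});
  const text=await res.text();
  const status=JSON.parse(text).status;
  if(status==="completed"||status==="failed"){console.log(text);return;} // terminal state
  setTimeout(poll,5000); // pending or crawling: poll again in 5 seconds
};
poll();
' "<crawl_id>"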

Output Contract

Return:

  • Endpoint flow (POST /v1/crawl + GET /v1/crawl/{crawl_id})
  • request_payload used for the create request
  • Raw response body from create call
  • Raw response body from result call
  • Error details when request fails

Do not generate summaries unless the user explicitly requests a summary.

Guardrails

  • Never run an unbounded crawl without explicit constraints.
  • Do not present speculative page counts as final coverage.
  • Do not hardcode provider-specific tool schemas in core logic.
  • Highlight policy, legal, or website-usage risks when relevant.
