playwright-scraper


Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "playwright-scraper" with this command: npx skills add alphaonedev/openclaw-graph/alphaonedev-openclaw-graph-playwright-scraper


Purpose

This skill enables web scraping using Playwright, a Node.js library for browser automation. It focuses on handling dynamic content, authentication flows, pagination, data extraction, and screenshots to reliably scrape modern websites.

When to Use

Use this skill for scraping sites with JavaScript-rendered content (e.g., React or Angular apps), sites requiring login (e.g., dashboards), multi-page results (e.g., search listings), or capturing visual evidence (e.g., screenshots for verification). Avoid it for static HTML sites, where a plain HTTP client (e.g., fetch or curl) suffices.

Key Capabilities

  • Dynamically load and interact with content using Playwright's browser control.

  • Manage authentication flows, such as logging in via forms or API tokens.

  • Handle pagination by navigating pages, clicking "next" buttons, or parsing URLs.

  • Extract data using selectors, with options for JSON output or file saves.

  • Capture screenshots or full-page PDFs (PDF export is Chromium-only) for debugging or reporting.

  • Run in headless or headed (visible) browser modes for flexibility.

Usage Patterns

Always launch a browser (and, for isolated sessions, a browser context) first, then create pages for navigation. Use async/await throughout and await every Playwright call. For authenticated scraping, store cookies or session state per context. Structure scripts to loop through pages for pagination and wrap flaky steps in try-catch. Pass configuration via JSON files or environment variables for reusability.
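The script shape described above can be sketched as follows. The launch factory is injected so the same structure runs against playwright.chromium.launch() or a stub during testing; scrape and its arguments are hypothetical names for illustration, not Playwright API.

```javascript
// Sketch of the recommended script shape: browser first, then pages,
// with cleanup guaranteed in `finally`. The `launch` factory is injected
// so the same structure works with playwright.chromium.launch() or a stub.
async function scrape(launch, url) {
  const browser = await launch();           // e.g. () => chromium.launch({ headless: true })
  try {
    const page = await browser.newPage();   // one page per navigation task
    await page.goto(url);
    return await page.title();              // placeholder for real extraction
  } finally {
    await browser.close();                  // always release the browser
  }
}
```

With Playwright installed, this would be invoked as scrape(() => require('playwright').chromium.launch({ headless: true }), 'https://example.com').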

Common Commands/API

Use Playwright's Node.js API. Install it with npm install playwright and import it with const playwright = require('playwright');. Key methods include:

  • Launch browser: const browser = await playwright.chromium.launch({ headless: true });

  • Navigate page: const page = await browser.newPage(); await page.goto('https://example.com');

  • Handle auth: await page.fill('#username', process.env.PLAYWRIGHT_USERNAME); await page.fill('#password', process.env.PLAYWRIGHT_PASSWORD); await page.click('#login');

  • Extract data: const data = await page.evaluate(() => document.querySelector('#target').innerText); console.log(data);

  • Pagination: while (await page.$('#next-button')) { await page.click('#next-button'); await page.waitForSelector('.item'); }

  • Take screenshot: await page.screenshot({ path: 'screenshot.png' });

Run standalone scripts with node script.js. The npx playwright test command and its flags, such as --headed for a visible browser or --timeout 30000 for extended waits, apply to Playwright's test runner, not standalone scripts.

Integration Notes

Integrate by importing Playwright in Node.js projects. For auth, use environment variables such as $PLAYWRIGHT_USERNAME and $PLAYWRIGHT_PASSWORD to avoid hardcoding credentials. Configuration format: use a JSON file for settings, e.g., { "url": "https://target.com", "selector": "#data-element" }, and pass it via script args: node scraper.js --config config.json. For larger systems, export the results of page.evaluate to files or databases for downstream processing; migrating from Puppeteer is straightforward since the APIs are similar. Recent Playwright releases require Node.js 18 or newer, and proxies can be configured with browser.launch({ proxy: { server: 'http://myproxy.com:8080' } }).

Error Handling

Anticipate common errors like timeouts on dynamic loads or selector failures. Use page.waitForSelector with timeouts: await page.waitForSelector('#element', { timeout: 10000 }).catch(err => console.error('Element not found:', err));. For network issues, wrap page.goto in try-catch: try { await page.goto(url, { waitUntil: 'networkidle' }); } catch (e) { console.error('Navigation failed:', e.message); await browser.close(); }. Handle authentication failures by checking for error elements: if (await page.$('#error-message')) { throw new Error('Login failed'); }. Log errors with details and retry up to 3 times using a loop.
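The retry-up-to-3-times pattern can be sketched as a small wrapper. withRetry is a hypothetical helper; the attempt count and delay are illustrative defaults:

```javascript
// Hypothetical retry wrapper for flaky steps such as page.goto or
// page.waitForSelector. Retries `fn` up to `attempts` times, logging
// each failure and pausing `delayMs` between tries.
async function withRetry(fn, attempts = 3, delayMs = 1000) {
  let lastErr;
  for (let i = 1; i <= attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      console.error(`Attempt ${i}/${attempts} failed:`, err.message);
      if (i < attempts) await new Promise(r => setTimeout(r, delayMs));
    }
  }
  throw lastErr;       // all attempts exhausted
}
```

A flaky navigation then becomes await withRetry(() => page.goto(url, { waitUntil: 'networkidle' })).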

Concrete Usage Examples

  • Scraping a logged-in dashboard. First, set env vars: export PLAYWRIGHT_USERNAME='user@example.com' and export PLAYWRIGHT_PASSWORD='securepass'. Then run:

        const browser = await playwright.chromium.launch();
        const page = await browser.newPage();
        await page.goto('https://dashboard.com/login');
        await page.fill('#username', process.env.PLAYWRIGHT_USERNAME);
        await page.fill('#password', process.env.PLAYWRIGHT_PASSWORD);
        await page.click('#submit');
        const data = await page.evaluate(() => document.querySelector('#dashboard-data').innerText);
        console.log(data);
        await browser.close();

    This extracts data from a protected page.

  • Handling pagination on a search site:

        const browser = await playwright.chromium.launch();
        const page = await browser.newPage();
        await page.goto('https://search.com?q=query');
        let items = [];
        while (true) {
          items.push(...await page.$$eval('.result-item', elements => elements.map(el => el.innerText)));
          const nextButton = await page.$('#next-page');
          if (!nextButton) break;
          await nextButton.click();
          await page.waitForTimeout(2000);
        }
        console.log(items);
        await browser.close();

    This collects results across multiple pages.

Graph Relationships

  • Related to: "selenium-automation" (alternative browser automation tool)

  • Depends on: "node-runtime" (for Playwright execution)

  • Complements: "data-extraction" (for post-processing scraped data)

  • In cluster: "community" (shared with other open-source tools)

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
