firecrawl-cost-tuning

Firecrawl Cost Tuning

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "firecrawl-cost-tuning" with this command: npx skills add jeremylongshore/claude-code-plugins-plus-skills/jeremylongshore-claude-code-plugins-plus-skills-firecrawl-cost-tuning

Firecrawl Cost Tuning

Overview

Reduce Firecrawl web scraping costs by limiting crawl depth, caching scraped content, and choosing the right scrape mode. Firecrawl charges credits per page: a single-page scrape costs 1 credit, while a crawl job costs a variable amount because every discovered page is billed.

Prerequisites

  • Firecrawl account with credit balance visibility

  • Understanding of your scraping patterns (single page vs crawl)

  • Cache infrastructure for storing scraped content

Instructions

Step 1: Always Set Crawl Limits

```bash
set -euo pipefail

# BAD: Unbounded crawl (could consume thousands of credits)
curl -X POST https://api.firecrawl.dev/v1/crawl \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -d '{"url": "https://docs.example.com"}'

# GOOD: Bounded crawl with depth and page limits
curl -X POST https://api.firecrawl.dev/v1/crawl \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -d '{
    "url": "https://docs.example.com",
    "maxDepth": 2,
    "limit": 50,
    "scrapeOptions": {"formats": ["markdown"]}
  }'
```

Saves: potentially hundreds or thousands of credits per crawl
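The same guardrail can be enforced in code: a small helper that always stamps a `maxDepth` and `limit` onto the crawl request body, so an unbounded crawl can never be sent by accident. This is a sketch; `boundedCrawlBody` and its default caps are illustrative, not part of the Firecrawl SDK.

```typescript
// Sketch: build a crawl request body that always carries maxDepth and limit.
// The helper name and default values are illustrative, not any official API.
type CrawlOptions = { maxDepth?: number; limit?: number };

function boundedCrawlBody(url: string, opts: CrawlOptions = {}): string {
  const maxDepth = opts.maxDepth ?? 2; // default depth cap
  const limit = opts.limit ?? 50;      // default page cap
  if (maxDepth < 1 || limit < 1) {
    throw new Error('maxDepth and limit must be at least 1');
  }
  return JSON.stringify({
    url,
    maxDepth,
    limit,
    scrapeOptions: { formats: ['markdown'] },
  });
}
```

Because each crawled page costs 1 credit, a bounded crawl has a known worst-case cost of `limit` credits before the job even starts.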

Step 2: Use Scrape for Known URLs Instead of Crawl

```typescript
// If you know the specific pages you need, scrape individually instead of crawling
const targetUrls = [
  'https://docs.example.com/api/authentication',
  'https://docs.example.com/api/endpoints',
  'https://docs.example.com/api/errors',
];

// Scrape 3 specific pages: 3 credits
for (const url of targetUrls) {
  await firecrawl.scrapeUrl(url, { formats: ['markdown'] });
}
// vs. crawling the entire docs site: potentially 500+ credits
```
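When building a target-URL list like the one above, it is easy to pay twice for the same page under slightly different URLs. A small normalization pass avoids that; this is a hedged sketch (the `dedupeTargets` helper is illustrative, not a Firecrawl feature), assuming trailing slashes and fragments do not change the page content.

```typescript
// Sketch: normalize and dedupe target URLs before scraping, so the same
// page (with/without a trailing slash or a #fragment) is not paid for twice.
function dedupeTargets(urls: string[]): string[] {
  const seen = new Set<string>();
  const out: string[] = [];
  for (const raw of urls) {
    const u = new URL(raw);
    u.hash = ''; // fragments never reach the server
    const key = u.origin + u.pathname.replace(/\/+$/, '') + u.search;
    if (!seen.has(key)) {
      seen.add(key);
      out.push(raw); // keep the first occurrence verbatim
    }
  }
  return out;
}
```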

Step 3: Cache Scraped Content

```typescript
import { createHash } from 'crypto';

const scrapeCache = new Map<string, { content: string; timestamp: number }>();
const CACHE_TTL = 24 * 3600 * 1000; // 24 hours in milliseconds

async function cachedScrape(url: string): Promise<string> {
  const key = createHash('md5').update(url).digest('hex');
  const cached = scrapeCache.get(key);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) return cached.content;

  const result = await firecrawl.scrapeUrl(url, { formats: ['markdown'] });
  scrapeCache.set(key, { content: result.markdown, timestamp: Date.now() });
  return result.markdown;
}
// Typical savings: 50-80% credit reduction for recurring scrape patterns
```
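The 50–80% savings figure is easy to verify rather than assume: track cache hits and misses, since every hit is one scrape credit not spent. A minimal sketch (the counter names are illustrative additions, not part of the cache above):

```typescript
// Sketch: hit/miss counters so cache savings can be measured, not guessed.
const stats = { hits: 0, misses: 0 };

function recordLookup(hit: boolean): void {
  if (hit) {
    stats.hits++;
  } else {
    stats.misses++;
  }
}

// Fraction of lookups that avoided a paid scrape (0 when nothing recorded).
function creditSavingsRate(): number {
  const total = stats.hits + stats.misses;
  return total === 0 ? 0 : stats.hits / total;
}
```

Calling `recordLookup` at the cache-hit and cache-miss branches of a cached scrape gives a live savings rate for your actual traffic.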

Step 4: Choose Minimal Scrape Options

```bash
set -euo pipefail

# Only request what you need -- each format option has cost implications.
# Minimal (cheapest): markdown only
curl -X POST https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" \
  -d '{"url": "https://example.com", "formats": ["markdown"]}'

# Avoid requesting screenshots, PDFs, or rawHtml unless actually needed.
# Each additional format increases processing time and may affect credit usage.
```
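To keep requests on the cheapest path by default, expensive formats can be stripped unless explicitly opted into. A sketch, assuming the format names mentioned above (screenshot, rawHtml, pdf); the helper itself is illustrative, not a Firecrawl API:

```typescript
// Sketch: drop expensive formats unless the caller opts in, and always
// fall back to markdown so the request stays valid. Illustrative helper.
const EXPENSIVE_FORMATS = new Set(['screenshot', 'rawHtml', 'pdf']);

function minimalFormats(requested: string[], allowExpensive = false): string[] {
  const kept = allowExpensive
    ? requested
    : requested.filter((f) => !EXPENSIVE_FORMATS.has(f));
  return kept.length > 0 ? kept : ['markdown'];
}
```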

Step 5: Monitor Credit Efficiency

```bash
set -euo pipefail

# Find which crawl jobs consumed the most credits
curl -s https://api.firecrawl.dev/v1/usage \
  -H "Authorization: Bearer $FIRECRAWL_API_KEY" |
  jq '{
    credits_remaining: .credits_remaining,
    credits_used_today: .credits_used_today,
    avg_credits_per_job: (.credits_used_month / (.jobs_count + 0.01)),
    projected_monthly: (.credits_used_today * 30)
  }'
```
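The same arithmetic as the jq filter can live in application code, operating on a parsed usage response. The field names below follow the jq example; treat them as assumptions about the API payload rather than a documented schema.

```typescript
// Sketch: usage summary mirroring the jq filter above. Field names are
// taken from the jq example and assumed, not verified against the API.
type Usage = {
  credits_remaining: number;
  credits_used_today: number;
  credits_used_month: number;
  jobs_count: number;
};

function usageSummary(u: Usage) {
  return {
    credits_remaining: u.credits_remaining,
    credits_used_today: u.credits_used_today,
    // +0.01 avoids division by zero when no jobs have run yet
    avg_credits_per_job: u.credits_used_month / (u.jobs_count + 0.01),
    // naive linear projection: today's burn rate times 30 days
    projected_monthly: u.credits_used_today * 30,
  };
}
```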

Error Handling

| Issue | Cause | Solution |
| --- | --- | --- |
| Credits drained by one crawl | No `limit` or `maxDepth` set | Always set both on crawl jobs |
| Duplicate scraping costs | Same URLs scraped daily | Implement URL-based caching |
| High credits per page | Requesting all formats | Request only the markdown format |
| Budget overrun | Automated crawls without caps | Set per-job credit limits and daily caps |
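The "daily caps" remedy for budget overruns can be sketched as a small guard that jobs must clear before running. This is illustrative only: Firecrawl itself does not provide this class, and the worst-case cost passed in would typically be the crawl `limit` (1 credit per page).

```typescript
// Sketch of an application-side daily credit cap. Not a Firecrawl feature;
// the class and its accounting are illustrative.
class DailyCreditCap {
  private spentToday = 0;
  constructor(private readonly cap: number) {}

  // Call before each job with its worst-case cost (e.g. the crawl limit).
  // Returns false when running the job could exceed the daily cap.
  authorize(worstCaseCredits: number): boolean {
    if (this.spentToday + worstCaseCredits > this.cap) return false;
    this.spentToday += worstCaseCredits;
    return true;
  }

  // Invoke from a daily scheduler (e.g. at midnight) to start a new budget.
  resetDay(): void {
    this.spentToday = 0;
  }
}
```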

Examples

Basic usage: set `maxDepth` and `limit` on every crawl job and request only the markdown format, which caps the worst-case cost of each job before it runs.

Advanced scenario: for production pipelines with recurring scrapes, combine URL-based caching with usage monitoring and per-job plus daily credit caps, tuned to team budgets.

Output

  • Configuration files or code changes applied to the project

  • Validation report confirming correct implementation

  • Summary of changes made and their rationale

Resources

  • Official monitoring documentation

  • Community best practices and patterns

  • Related skills in this plugin pack

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
