
Instrument with Logfire


Install skill "instrumentation" with this command: npx skills add pydantic/skills/pydantic-skills-instrumentation


When to Use This Skill

Invoke this skill when:

  • User asks to "add logfire", "add observability", "add tracing", or "add monitoring"

  • User wants to instrument an app with structured logging or tracing (Python, JS/TS, or Rust)

  • User mentions Logfire in any context

  • User asks to "add logging" or "see what my app is doing"

  • User wants to monitor AI/LLM calls (PydanticAI, OpenAI, Anthropic)

  • User asks to add observability to an AI agent or LLM pipeline

How Logfire Works

Logfire is an observability platform built on OpenTelemetry. It captures traces, logs, and metrics from applications. Logfire has native SDKs for Python, JavaScript/TypeScript, and Rust, plus support for any language via OpenTelemetry.

This skill exists because Claude tends to get a few things subtly wrong with Logfire - especially the ordering of configure() vs instrument_*() calls, the structured logging syntax, and which extras to install. These details matter because a misconfigured setup silently drops traces.

Step 1: Detect Language and Frameworks

Identify the project language and instrumentable libraries:

  • Python: Read pyproject.toml or requirements.txt. Common instrumentable libraries: FastAPI, httpx, asyncpg, SQLAlchemy, psycopg, Redis, Celery, Django, Flask, requests, PydanticAI.

  • JavaScript/TypeScript: Read package.json. Common frameworks: Express, Next.js, Fastify. Also check for Cloudflare Workers or Deno.

  • Rust: Read Cargo.toml.

Then follow the language-specific steps below.
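For Python projects, the detection step can be sketched with the stdlib tomllib parser. The helper name and the instrumentable-library subset below are illustrative, not part of the Logfire API:

import tomllib

# Illustrative subset of libraries Logfire can instrument
INSTRUMENTABLE = {"fastapi", "httpx", "asyncpg", "sqlalchemy", "redis", "celery"}

def detect_instrumentable(pyproject_text: str) -> list[str]:
    """Return project dependencies that have a Logfire instrumentor."""
    data = tomllib.loads(pyproject_text)
    deps = data.get("project", {}).get("dependencies", [])
    # Normalize specifiers like "fastapi>=0.110" or "uvicorn[standard]" to bare names
    names = {
        d.split("[")[0].split(">")[0].split("<")[0].split("=")[0].strip().lower()
        for d in deps
    }
    return sorted(names & INSTRUMENTABLE)

For example, a pyproject.toml declaring fastapi, httpx, and numpy yields ["fastapi", "httpx"].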

Python

Install with Extras

Install logfire with extras matching the detected frameworks. Each instrumented library needs its corresponding extra - without it, the instrument_*() call will fail at runtime with a missing dependency error.

uv add 'logfire[fastapi,httpx,asyncpg]'

The full list of available extras: fastapi, starlette, django, flask, httpx, requests, asyncpg, psycopg, psycopg2, sqlalchemy, redis, pymongo, mysql, sqlite3, celery, aiohttp, aws-lambda, system-metrics, litellm, dspy, google-genai.
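Mapping detected libraries onto that extras list can be sketched as follows; the helper name is hypothetical and only a subset of extras is shown:

# Subset of the extras names listed above
LOGFIRE_EXTRAS = {"fastapi", "starlette", "django", "flask", "httpx", "requests",
                  "asyncpg", "psycopg", "sqlalchemy", "redis", "celery"}

def install_command(detected_libraries: list[str]) -> str:
    """Build the uv command installing logfire with extras for detected libraries."""
    extras = sorted(lib.lower() for lib in detected_libraries
                    if lib.lower() in LOGFIRE_EXTRAS)
    if not extras:
        return "uv add logfire"
    return f"uv add 'logfire[{','.join(extras)}]'"

Libraries without a matching extra (e.g. numpy) are simply skipped, and a bare install is emitted when nothing matches.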

Configure and Instrument

This is where ordering matters. logfire.configure() initializes the SDK and must come before everything else. The instrument_*() calls register hooks into each library. If you call instrument_*() before configure(), the hooks register but traces go nowhere.

import logfire

# 1. Configure first - always
logfire.configure()

# 2. Instrument libraries - after configure, before app starts
logfire.instrument_fastapi(app)
logfire.instrument_httpx()
logfire.instrument_asyncpg()

Placement rules:

  • logfire.configure() goes in the application entry point (main.py, or the module that creates the app)

  • Call it once per process - not inside request handlers, not in library code

  • instrument_*() calls go right after configure()

  • Web framework instrumentors (instrument_fastapi, instrument_flask, instrument_django) need the app instance as an argument. HTTP client and database instrumentors (instrument_httpx, instrument_asyncpg) are global and take no arguments.

  • In Gunicorn deployments, call logfire.configure() inside the post_fork hook, not at module level - each worker is a separate process
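For the Gunicorn case, a minimal gunicorn.conf.py sketch - the instrumented libraries here are examples; adapt them to what the app actually uses:

# gunicorn.conf.py
import logfire

def post_fork(server, worker):
    # Each Gunicorn worker is a separate forked process, so configure the
    # SDK here rather than at module import time in the master process.
    logfire.configure()
    logfire.instrument_httpx()

Gunicorn calls post_fork once in each newly forked worker, so every worker gets its own correctly initialized exporter.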

Structured Logging

Replace print() and logging.*() calls with Logfire's structured logging. The key pattern: use {key} placeholders with keyword arguments, never f-strings.

# Correct - each {key} becomes a searchable attribute in the Logfire UI
logfire.info("Created user {user_id}", user_id=uid)
logfire.error("Payment failed {amount} {currency}", amount=100, currency="USD")

# Wrong - creates a flat string, nothing is searchable
logfire.info(f"Created user {uid}")

For grouping related operations and measuring duration, use spans:

with logfire.span("Processing order {order_id}", order_id=order_id):
    items = await fetch_items(order_id)
    total = calculate_total(items)
    logfire.info("Calculated total {total}", total=total)

For exceptions, use logfire.exception() which automatically captures the traceback:

try:
    await process_order(order_id)
except Exception:
    logfire.exception("Failed to process order {order_id}", order_id=order_id)
    raise

AI/LLM Instrumentation (Python)

Logfire auto-instruments AI libraries to capture LLM calls, token usage, tool invocations, and agent runs.

uv add 'logfire[pydantic-ai]'

or: uv add 'logfire[openai]' / uv add 'logfire[anthropic]'

Available AI extras: pydantic-ai, openai, anthropic, litellm, dspy, google-genai.

logfire.configure()
logfire.instrument_pydantic_ai()  # captures agent runs, tool calls, LLM request/response

or:

logfire.instrument_openai()     # captures chat completions, embeddings, token counts
logfire.instrument_anthropic()  # captures messages, token usage

For PydanticAI, each agent run becomes a parent span containing child spans for every tool call and LLM request.

JavaScript / TypeScript

Install

Node.js

npm install @pydantic/logfire-node

Cloudflare Workers

npm install @pydantic/logfire-cf-workers logfire

Next.js / generic

npm install logfire

Configure

Node.js (Express, Fastify, etc.) - create an instrumentation.ts loaded before your app:

import * as logfire from '@pydantic/logfire-node'

logfire.configure()

Launch with: node --require ./instrumentation.js app.js

The SDK auto-instruments common libraries when loaded before the app. Set LOGFIRE_TOKEN in your environment or pass token to configure().

Cloudflare Workers - wrap your handler with instrument():

import { instrument } from '@pydantic/logfire-cf-workers'

export default instrument(handler, { service: { name: 'my-worker', version: '1.0.0' } })

Next.js - set environment variables for OpenTelemetry export:

OTEL_EXPORTER_OTLP_TRACES_ENDPOINT=https://logfire-api.pydantic.dev/v1/traces
OTEL_EXPORTER_OTLP_HEADERS=Authorization=<your-write-token>

Structured Logging (JS/TS)

// Structured attributes as second argument
logfire.info('Created user', { user_id: uid })
logfire.error('Payment failed', { amount: 100, currency: 'USD' })

// Spans
logfire.span('Processing order', { order_id }, {}, async () => {
  logfire.info('Processing step completed')
})

// Error reporting
logfire.reportError('order processing', error)

Log levels: trace, debug, info, notice, warn, error, fatal.

Rust

Install

[dependencies]
logfire = "0.6"

Configure

let shutdown_handler = logfire::configure()
    .install_panic_handler()
    .finish()?;

Set LOGFIRE_TOKEN in your environment or use the Logfire CLI to select a project.

Structured Logging (Rust)

The Rust SDK is built on the tracing and opentelemetry crates - existing tracing macros work automatically.

// Spans
logfire::span!("processing order", order_id = order_id).in_scope(|| {
    // traced code
});

// Events
logfire::info!("Created user {user_id}", user_id = uid);

Always call shutdown_handler.shutdown() before program exit to flush data.

Verify

After instrumentation, verify the setup works:

  • Run logfire auth to check authentication (or set LOGFIRE_TOKEN)

  • Start the app and trigger a request

  • Check https://logfire.pydantic.dev/ for traces

If traces aren't appearing: check that configure() is called before instrument_*() (Python), check that LOGFIRE_TOKEN is set, and check that the correct packages/extras are installed.

References

Detailed patterns and integration tables, organized by language:

  • Python: ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/python/logging-patterns.md (log levels, spans, stdlib integration, metrics, capfire testing) and ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/python/integrations.md (full instrumentor table with extras)

  • JavaScript/TypeScript: ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/javascript/patterns.md (log levels, spans, error handling, config) and ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/javascript/frameworks.md (Node.js, Cloudflare Workers, Next.js, Deno setup)

  • Rust: ${CLAUDE_PLUGIN_ROOT}/skills/instrumentation/references/rust/patterns.md (macros, spans, tracing/log crate integration, async, shutdown)

