observability

Vercel Observability expert guidance — Drains (logs, traces, speed insights, web analytics), Web Analytics, Speed Insights, runtime logs, custom events, OpenTelemetry integration, and monitoring dashboards. Use when instrumenting, debugging, or optimizing application performance and user experience on Vercel.


Vercel Observability

You are an expert in Vercel's observability stack — runtime logs, structured logging, Drains, Web Analytics, Speed Insights, and monitoring integrations. Always start with logging. When something is stuck, slow, or broken, the first step is always to check or add logs.

Structured Logging Baseline

Add this to every API route and server action as a minimum. If the user reports something stuck, hanging, or slow, verify this baseline exists first:

const start = Date.now();
console.log(JSON.stringify({ level: "info", msg: "start", route: "/api/example", requestId: req.headers.get("x-vercel-id") }));
// ... your logic ...
console.log(JSON.stringify({ level: "info", msg: "done", route: "/api/example", ms: Date.now() - start }));
// On error:
console.error(JSON.stringify({ level: "error", msg: "failed", route: "/api/example", error: err.message, ms: Date.now() - start }));
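One way to avoid repeating this boilerplate in every route is a tiny shared helper. This is a sketch under the same field conventions as the baseline above, not an official Vercel API:

```typescript
// lib/log.ts — minimal structured-log helper (sketch; field names are
// the conventions used in the baseline above, not a Vercel API)
type LogLevel = 'info' | 'warn' | 'error'

export function logEvent(
  level: LogLevel,
  msg: string,
  fields: Record<string, unknown> = {}
): string {
  // One JSON object per line keeps Vercel's log viewer and drains parseable
  const line = JSON.stringify({ level, msg, ...fields })
  if (level === 'error') console.error(line)
  else console.log(line)
  return line // returned so callers (and tests) can inspect the emitted line
}

// Usage in a route:
// const start = Date.now()
// logEvent('info', 'start', { route: '/api/example' })
// ...
// logEvent('info', 'done', { route: '/api/example', ms: Date.now() - start })
```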

Runtime Logs

Vercel provides real-time logs for all function invocations.

Structured Logging

// app/api/process/route.ts
export async function POST(req: Request) {
  const start = Date.now()
  const data = await req.json()

  // Structured logs appear in Vercel's log viewer
  console.log(JSON.stringify({
    level: 'info',
    message: 'Processing request',
    requestId: req.headers.get('x-vercel-id'),
    payload_size: JSON.stringify(data).length,
  }))

  try {
    const result = await processData(data)
    console.log(JSON.stringify({
      level: 'info',
      message: 'Request completed',
      duration_ms: Date.now() - start,
    }))
    return Response.json(result)
  } catch (error) {
    console.error(JSON.stringify({
      level: 'error',
      message: 'Processing failed',
      error: error instanceof Error ? error.message : String(error),
      duration_ms: Date.now() - start,
    }))
    return Response.json({ error: 'Internal error' }, { status: 500 })
  }
}

Next.js Instrumentation

// instrumentation.ts (stable since Next.js 15)
export async function register() {
  if (process.env.NEXT_RUNTIME === 'nodejs') {
    // Initialize monitoring on server startup
    const { initMonitoring } = await import('./lib/monitoring')
    initMonitoring()
  }
}

Runtime Logs via REST API

Query deployment runtime logs programmatically. The endpoint returns application/stream+json — a streaming response where each line is a separate JSON object.

# Stream runtime logs for a deployment (returns application/stream+json)
curl -N -H "Authorization: Bearer $VERCEL_TOKEN" \
  "https://api.vercel.com/v3/deployments/<deployment-id>/events" \
  --max-time 120

Streaming guidance: The response is unbounded — always set a timeout (--max-time in curl, AbortController with setTimeout in fetch). Parse line-by-line as NDJSON. Each line contains { timestamp, text, level, source }.

// Programmatic streaming with timeout
const controller = new AbortController()
const timeout = setTimeout(() => controller.abort(), 60_000) // 60s max

const res = await fetch(
  `https://api.vercel.com/v3/deployments/${deploymentId}/events`,
  {
    headers: { Authorization: `Bearer ${process.env.VERCEL_TOKEN}` },
    signal: controller.signal,
  }
)

const reader = res.body!.getReader()
const decoder = new TextDecoder()
let buffer = ''

try {
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop()! // keep incomplete line in buffer
    for (const line of lines) {
      if (!line.trim()) continue
      const event = JSON.parse(line)
      console.log(`[${event.level}] ${event.text}`)
    }
  }
} finally {
  clearTimeout(timeout)
}

MCP alternative: Use get_runtime_logs via the Vercel MCP server for agent-friendly log queries without managing streams directly. See ⤳ skill: vercel-api.

Web Analytics

Privacy-friendly, first-party analytics with no cookie banners required.

Installation

npm install @vercel/analytics

Setup (Next.js App Router)

// app/layout.tsx
import { Analytics } from '@vercel/analytics/next'

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        {children}
        <Analytics />
      </body>
    </html>
  )
}

Custom Events (Pro/Enterprise)

Track business-specific events beyond pageviews.

import { track } from '@vercel/analytics'

// Track a conversion
track('purchase', {
  product: 'pro-plan',
  value: 20,
  currency: 'USD',
})

// Track feature usage
track('feature_used', {
  name: 'ai-chat',
  duration_ms: 3200,
})

Server-Side Tracking

import { track } from '@vercel/analytics/server'

export async function POST(req: Request) {
  const data = await req.json()
  await processOrder(data)

  await track('order_completed', {
    order_id: data.id,
    total: data.total,
  })

  return Response.json({ success: true })
}

Speed Insights

Real-user performance monitoring built on Core Web Vitals.

Installation

npm install @vercel/speed-insights

Setup (Next.js App Router)

// app/layout.tsx
import { SpeedInsights } from '@vercel/speed-insights/next'

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        {children}
        <SpeedInsights />
      </body>
    </html>
  )
}

Metrics Tracked

| Metric | What It Measures | Good Threshold |
| --- | --- | --- |
| LCP | Largest Contentful Paint | < 2.5s |
| INP | Interaction to Next Paint | < 200ms |
| CLS | Cumulative Layout Shift | < 0.1 |
| FCP | First Contentful Paint | < 1.8s |
| TTFB | Time to First Byte | < 800ms |
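As a sketch, the "good" thresholds in the table can be encoded as data and checked programmatically. The cutoffs come from the table above; the helper itself is illustrative, not a Vercel API:

```typescript
// Sketch: classify a Core Web Vitals sample against the "good" thresholds above.
const GOOD_THRESHOLDS: Record<string, number> = {
  LCP: 2500, // ms
  INP: 200,  // ms
  CLS: 0.1,  // unitless layout-shift score
  FCP: 1800, // ms
  TTFB: 800, // ms
}

function isGood(metric: string, value: number): boolean {
  const limit = GOOD_THRESHOLDS[metric]
  if (limit === undefined) throw new Error(`Unknown metric: ${metric}`)
  return value < limit
}
```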

Performance Attribution

Speed Insights attributes metrics to specific routes and pages, letting you identify which pages are slow and why.

Drains

Drains forward observability data from Vercel to external endpoints. They are the primary mechanism for exporting logs, traces, Speed Insights, and Web Analytics data to third-party platforms.

Plan requirement: Drains require a Pro or Enterprise plan. For Hobby plans, see the Fallback Guidance section below.

Data Types

Drains can forward multiple categories of telemetry:

| Data Type | What It Contains | Use Case |
| --- | --- | --- |
| Logs | Runtime function logs, build logs, static access logs | Centralized log aggregation |
| Traces | OpenTelemetry-compatible distributed traces | End-to-end request tracing |
| Speed Insights | Core Web Vitals and performance metrics | Performance monitoring pipelines |
| Web Analytics | Pageviews, custom events, visitor data | Analytics data warehousing |

Supported Formats

| Format | Protocol | Best For |
| --- | --- | --- |
| JSON | HTTPS POST | Custom backends, generic log collectors |
| NDJSON | HTTPS POST | Streaming-friendly consumers, high-volume pipelines |
| Syslog | TLS syslog | Traditional log management (rsyslog, syslog-ng) |

Setting Up Drains

Drains are configured via the Vercel Dashboard at https://vercel.com/dashboard/{team}/~/settings/log-drains or the REST API.

Via Dashboard

  1. Open https://vercel.com/dashboard/{team}/~/settings/log-drains (replace {team} with your team slug)
  2. Click Add Log Drain
  3. Select the drain type (JSON, NDJSON, or syslog) and enter the endpoint URL
  4. Choose which environments and sources to include
  5. Click Create to activate the drain

Via REST API (/v1/drains)

# List all drains
curl -s -H "Authorization: Bearer $VERCEL_TOKEN" \
  "https://api.vercel.com/v1/drains?teamId=$TEAM_ID" | jq

# Create a JSON drain
curl -X POST -H "Authorization: Bearer $VERCEL_TOKEN" \
  -H "Content-Type: application/json" \
  "https://api.vercel.com/v1/drains?teamId=$TEAM_ID" \
  -d '{
    "url": "https://your-endpoint.example.com/logs",
    "type": "json",
    "sources": ["lambda", "edge", "static"],
    "environments": ["production"]
  }'

# Test a drain (sends a test payload to your endpoint)
curl -X POST -H "Authorization: Bearer $VERCEL_TOKEN" \
  "https://api.vercel.com/v1/drains/<drain-id>/test?teamId=$TEAM_ID"

# Update a drain (change URL, sources, or environments)
curl -X PATCH -H "Authorization: Bearer $VERCEL_TOKEN" \
  -H "Content-Type: application/json" \
  "https://api.vercel.com/v1/drains/<drain-id>?teamId=$TEAM_ID" \
  -d '{
    "url": "https://new-endpoint.example.com/logs",
    "environments": ["production", "preview"]
  }'

# Delete a drain
curl -X DELETE -H "Authorization: Bearer $VERCEL_TOKEN" \
  "https://api.vercel.com/v1/drains/<drain-id>?teamId=$TEAM_ID"

Web Analytics Drains Reference

When a drain is configured to receive Web Analytics data, payloads arrive as batched events. The format depends on your drain type.

JSON Payload Schema

[
  {
    "type": "pageview",
    "url": "https://example.com/blog/post-1",
    "referrer": "https://google.com",
    "timestamp": 1709568000000,
    "geo": { "country": "US", "region": "CA", "city": "San Francisco" },
    "device": { "os": "macOS", "browser": "Chrome", "isBot": false },
    "projectId": "prj_xxxxx",
    "environment": "production"
  },
  {
    "type": "custom_event",
    "name": "purchase",
    "url": "https://example.com/checkout",
    "properties": { "product": "pro-plan", "value": 20 },
    "timestamp": 1709568100000,
    "geo": { "country": "US" },
    "device": { "os": "macOS", "browser": "Chrome", "isBot": false },
    "projectId": "prj_xxxxx",
    "environment": "production"
  }
]
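A TypeScript model of this payload can keep a drain consumer's parsing honest. The shapes below mirror the example schema above and should be treated as illustrative rather than an exhaustive contract:

```typescript
// Sketch of the Web Analytics drain event shape shown above (illustrative, not exhaustive)
interface DrainEventBase {
  url: string
  timestamp: number
  geo?: { country?: string; region?: string; city?: string }
  device?: { os?: string; browser?: string; isBot?: boolean }
  projectId?: string
  environment?: string
}

interface PageviewEvent extends DrainEventBase {
  type: 'pageview'
  referrer?: string
}

interface AnalyticsCustomEvent extends DrainEventBase {
  type: 'custom_event'
  name: string
  properties?: Record<string, unknown>
}

type DrainEvent = PageviewEvent | AnalyticsCustomEvent

// Narrow a batch item before reading custom-event-only fields like `name`
function isCustomEvent(e: DrainEvent): e is AnalyticsCustomEvent {
  return e.type === 'custom_event'
}
```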

NDJSON Payload Format

Each line is a separate JSON object (one event per line):

{"type":"pageview","url":"https://example.com/","timestamp":1709568000000,"geo":{"country":"US"},"device":{"browser":"Chrome"},...}
{"type":"pageview","url":"https://example.com/about","timestamp":1709568001000,"geo":{"country":"DE"},"device":{"browser":"Firefox"},...}
{"type":"custom_event","name":"signup","url":"https://example.com/register","timestamp":1709568002000,...}

Ingestion tip: For NDJSON, process line-by-line as events arrive. This format is preferred for high-volume pipelines where batch parsing overhead matters.
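A tolerant line-by-line parser for such a batch might look like this. It is a sketch: it skips blank lines and logs, rather than rejects, malformed ones so one bad line cannot poison the batch:

```typescript
// Sketch: parse one NDJSON drain batch into events, skipping blank/malformed lines.
function parseNdjsonBatch(body: string): unknown[] {
  const events: unknown[] = []
  for (const line of body.split('\n')) {
    if (!line.trim()) continue
    try {
      events.push(JSON.parse(line))
    } catch {
      // A malformed line should not fail the whole batch; record and continue
      console.error(JSON.stringify({ level: 'warn', msg: 'bad ndjson line' }))
    }
  }
  return events
}
```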

Security: Signature Verification

Vercel signs every drain payload with an HMAC-SHA1 signature in the x-vercel-signature header. Always verify signatures in production to prevent spoofed data.

Critical: You must verify against the raw request body (not a parsed/re-serialized version). JSON parsing and re-stringifying can change key order or whitespace, breaking the signature match.

import { createHmac, timingSafeEqual } from 'crypto'

function verifyDrainSignature(rawBody: string, signature: string, secret: string): boolean {
  const expected = createHmac('sha1', secret).update(rawBody).digest('hex')
  // Use timing-safe comparison to prevent timing attacks
  if (expected.length !== signature.length) return false
  return timingSafeEqual(Buffer.from(expected), Buffer.from(signature))
}

Usage in a drain endpoint:

// app/api/drain/route.ts
export async function POST(req: Request) {
  const rawBody = await req.text()
  const signature = req.headers.get('x-vercel-signature')
  const secret = process.env.DRAIN_SECRET!

  if (!signature || !verifyDrainSignature(rawBody, signature, secret)) {
    return new Response('Unauthorized', { status: 401 })
  }

  const events = JSON.parse(rawBody)
  // Process verified events...
  return new Response('OK', { status: 200 })
}

Secret management: The drain signing secret is shown once when you create the drain. Store it in an environment variable (e.g., DRAIN_SECRET). If lost, delete and recreate the drain.

OpenTelemetry Integration

Vercel exports traces in OpenTelemetry-compatible format via Drains. Configure an OTel-compatible drain endpoint in the Dashboard (https://vercel.com/dashboard/{team}/~/settings/log-drains → Add Log Drain → select the OTLP format) or via the REST API.

Vendor Integrations

# Install via Marketplace (recommended — auto-configures drain)
vercel integration add datadog

Or manually create a drain in the Dashboard (https://vercel.com/dashboard/{team}/~/settings/log-drains → Add Log Drain) or via the REST API, pointing to:

| Vendor | Endpoint | Auth Header |
| --- | --- | --- |
| Datadog | https://http-intake.logs.datadoghq.com/api/v2/logs | DD-API-KEY |
| Honeycomb | https://api.honeycomb.io/1/batch/<dataset> | X-Honeycomb-Team |

Fallback Guidance (No Drains)

If drains are unavailable (Hobby plan or not yet configured), use these alternatives:

| Need | Alternative | How |
| --- | --- | --- |
| View runtime logs | Vercel Dashboard | https://vercel.com/{team}/{project}/deployments → select deployment → Logs tab |
| Stream logs from terminal | Vercel CLI | vercel logs <deployment-url> --follow (see ⤳ skill: vercel-cli) |
| Query logs programmatically | MCP / REST API | get_runtime_logs tool or /v3/deployments/:id/events (see ⤳ skill: vercel-api) |
| Monitor errors post-deploy | CLI | vercel logs <url> --level error --since 1h |
| Web Analytics data | Dashboard only | https://vercel.com/{team}/{project}/analytics |
| Performance metrics | Dashboard only | https://vercel.com/{team}/{project}/speed-insights |

Upgrade path: When ready for centralized observability, upgrade to Pro and configure drains at https://vercel.com/dashboard/{team}/~/settings/log-drains or via REST API. The drain setup is typically < 5 minutes.

Deploy Preflight Observability

Before promoting to production, verify observability readiness:

  • Drains check: Query configured drains via MCP list_drains or REST API. If no drains are configured on a Pro/Enterprise plan, warn:

    ⚠️ No drains configured. Production errors won't be forwarded to external monitoring. Configure drains via Dashboard or REST API before promoting. See ⤳ skill: observability.

  • Errored drains: If any drain is in error state, warn and suggest remediation before deploying:

    ⚠️ Drain "<url>" is errored. Fix or recreate before production deploy to avoid monitoring gaps.

  • Error monitoring: Check that at least one of these is in place: configured drains, an error tracking integration (e.g., Sentry, Datadog via vercel integration ls), or @vercel/analytics in the project.
  • These are warnings, not blockers — the user may proceed after acknowledgment.

Post-Deploy Error Scan

For production deployments, wait 60 seconds after READY state, then scan for early runtime errors:

vercel logs <deployment-url> --level error --since 1h

Or via MCP if available: use get_runtime_logs with level filter error.

Interpret results:

| Finding | Action |
| --- | --- |
| No errors | ✓ Clean deploy — no runtime errors in first hour |
| Errors detected | List error count and first 5 unique error messages. Suggest: check drain payloads for correlated traces, review function logs in Dashboard |
| 500 status codes in logs | Correlate timestamps with drain data (if configured) or vercel logs <url> --json for structured output. Flag for immediate investigation |
| Timeout errors | Check function duration limits in vercel.json or project settings. Consider increasing maxDuration |
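The "first 5 unique error messages" step can be sketched as a small summarizer over parsed log events. The { timestamp, text, level } shape follows the runtime-logs event fields noted earlier; the helper itself is illustrative, not a Vercel API:

```typescript
// Sketch: count and dedupe error messages from parsed runtime-log events.
interface LogEvent {
  timestamp: number
  text: string
  level: string
  source?: string
}

function summarizeErrors(events: LogEvent[], maxUnique = 5) {
  const counts = new Map<string, number>()
  for (const e of events) {
    if (e.level !== 'error') continue
    counts.set(e.text, (counts.get(e.text) ?? 0) + 1)
  }
  const total = [...counts.values()].reduce((a, b) => a + b, 0)
  // First N unique messages, most frequent first
  const top = [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, maxUnique)
    .map(([text, count]) => ({ text, count }))
  return { total, top }
}
```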

Fallback (no drains):

If no drains are configured, the error scan relies on CLI and Dashboard:

# Stream live errors
vercel logs <deployment-url> --level error --follow

# JSON output for parsing
vercel logs <deployment-url> --level error --since 1h --json

For richer post-deploy monitoring, configure drains to forward logs/traces to an external platform. See ⤳ skill: observability.

Performance Audit Checklist

Run through this when asked to optimize a Vercel application:

  1. Measure first: Check Speed Insights dashboard for real-user CWV data
  2. Identify LCP element: Use Chrome DevTools → Performance → identify the LCP element
  3. Audit 'use client': Every 'use client' file ships JS to the browser — minimize
  4. Check images: All above-fold images use next/image with priority
  5. Check fonts: All fonts loaded via next/font (zero CLS)
  6. Check third-party scripts: All use next/script with correct strategy
  7. Check data fetching: Server Components fetch in parallel, no waterfalls
  8. Check caching: Cache Components used for expensive operations
  9. Check bundle: Run analyzer, look for low-hanging fruit
  10. Check infrastructure: Functions in correct region, Fluid Compute enabled
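For step 9 (bundle check), one common setup uses @next/bundle-analyzer. The sketch below assumes that package is installed and that the project uses a next.config.ts; adapt to your existing config file:

```typescript
// next.config.ts — bundle-analyzer sketch (assumes `npm install -D @next/bundle-analyzer`)
import withBundleAnalyzer from '@next/bundle-analyzer'

const withAnalyzer = withBundleAnalyzer({
  // Only generate the report when explicitly requested
  enabled: process.env.ANALYZE === 'true',
})

export default withAnalyzer({
  // ...your existing Next.js config
})

// Run with: ANALYZE=true next build
```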

Monitoring Dashboard Patterns

Full-Stack Observability Setup

Combine all Vercel observability tools for comprehensive coverage.

// app/layout.tsx — complete observability setup
import { Analytics } from '@vercel/analytics/next'
import { SpeedInsights } from '@vercel/speed-insights/next'

export default function RootLayout({ children }: { children: React.ReactNode }) {
  return (
    <html>
      <body>
        {children}
        <Analytics />
        <SpeedInsights />
      </body>
    </html>
  )
}

Custom Monitoring with waitUntil

Fire-and-forget telemetry that doesn't block responses.

import { waitUntil } from '@vercel/functions'

export async function GET(req: Request) {
  const start = Date.now()
  const result = await fetchData()

  // Send response immediately
  const response = Response.json(result)

  // Report metrics in background. waitUntil expects a promise, not a
  // callback, so invoke the async work and pass the pending promise.
  waitUntil(
    reportMetric('api_latency', Date.now() - start, {
      route: '/api/data',
      status: 200,
    })
  )

  return response
}
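The reportMetric helper used above is left undefined. A minimal sketch might look like the following; METRICS_WEBHOOK_URL is an assumed environment variable and the payload shape is a convention, not a Vercel API:

```typescript
// lib/monitoring.ts — hypothetical reportMetric helper (sketch;
// METRICS_WEBHOOK_URL is an assumed env var, not a Vercel convention)
export function buildMetricPayload(
  name: string,
  value: number,
  tags: Record<string, unknown> = {}
) {
  return { name, value, tags, timestamp: Date.now() }
}

export async function reportMetric(
  name: string,
  value: number,
  tags: Record<string, unknown> = {}
): Promise<void> {
  const payload = buildMetricPayload(name, value, tags)
  // Always emit to runtime logs; forward to a webhook only if configured
  console.log(JSON.stringify({ level: 'metric', ...payload }))
  if (process.env.METRICS_WEBHOOK_URL) {
    await fetch(process.env.METRICS_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    }).catch((err) => console.error('metric delivery failed:', err))
  }
}
```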

Error Tracking Pattern

// lib/error-reporting.ts
export async function reportError(error: unknown, context: Record<string, unknown>) {
  const payload = {
    message: error instanceof Error ? error.message : String(error),
    stack: error instanceof Error ? error.stack : undefined,
    timestamp: new Date().toISOString(),
    ...context,
  }

  // Log for Vercel's runtime logs
  console.error(JSON.stringify(payload))

  // Also send to external service if configured
  if (process.env.ERROR_WEBHOOK_URL) {
    await fetch(process.env.ERROR_WEBHOOK_URL, {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    })
  }
}

Marketplace Observability Integrations

Sentry — Error & Performance Monitoring

Native Vercel Marketplace integration. Auto-configures source maps and release tracking.

npx @sentry/wizard@latest -i nextjs
# Or install manually:
npm install @sentry/nextjs

Sentry wizard creates sentry.client.config.ts, sentry.server.config.ts, and sentry.edge.config.ts. It also wraps next.config.js with withSentryConfig.

Install via Marketplace: vercel integration add sentry

Datadog — Full-Stack Monitoring

APM, logs, and Real User Monitoring (RUM). Auto-configures log drain on Marketplace install.

npm install @datadog/browser-rum
import { datadogRum } from '@datadog/browser-rum'

datadogRum.init({
  applicationId: process.env.NEXT_PUBLIC_DD_APPLICATION_ID!,
  clientToken: process.env.NEXT_PUBLIC_DD_CLIENT_TOKEN!,
  site: 'datadoghq.com',
  service: 'my-app',
  sessionSampleRate: 100,
  trackResources: true,
  trackLongTasks: true,
})

Install via Marketplace: vercel integration add datadog

Checkly — Synthetic Monitoring & Testing

API and browser checks that run continuously against your deployments.

npm install -D checkly
npx checkly init

Checkly integrates with Vercel deployment events to trigger checks on every deploy.

Install via Marketplace: vercel integration add checkly

New Relic — Application Performance Monitoring

Full-stack observability with distributed tracing and alerting.

npm install newrelic

Requires a newrelic.js config file at the project root. Install via Marketplace: vercel integration add newrelic

Decision Matrix

| Need | Use | Why |
| --- | --- | --- |
| Page views, traffic sources | Web Analytics | First-party, privacy-friendly |
| Business event tracking | Web Analytics custom events | Track conversions, feature usage |
| Core Web Vitals monitoring | Speed Insights | Real user data per route |
| Function debugging | Runtime Logs (CLI vercel logs, Dashboard https://vercel.com/{team}/{project}/logs, or REST) | Real-time, per-invocation logs |
| Export logs to external platform | Drains (JSON/NDJSON/Syslog) | Centralize observability (Pro+) |
| Export analytics data | Drains (Web Analytics type) | Warehouse pageviews + custom events (Pro+) |
| OpenTelemetry traces | Drains (OTel-compatible endpoint) | Standards-based distributed tracing (Pro+) |
| Post-response telemetry | waitUntil + custom reporting | Non-blocking metrics |
| Server-side event tracking | @vercel/analytics/server | Track API-triggered events |
| Hobby plan log access | CLI vercel logs + Dashboard (https://vercel.com/{team}/{project}/logs) | No drains needed |

Cross-References

  • Drains REST API & runtime logs endpoint: ⤳ skill: vercel-api (Observability APIs section)
  • CLI log streaming (--follow, --since, --level): ⤳ skill: vercel-cli (Logs & Inspection section)
  • Marketplace vendor integrations: ⤳ skill: marketplace
