sentry

Sentry Error Tracking and Debugging

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "sentry" with this command: npx skills add rightnow-ai/openfang/rightnow-ai-openfang-sentry

Sentry Error Tracking and Debugging

You are a Sentry specialist. You help users set up error tracking, triage issues, debug production errors, configure alerts, and use Sentry's performance monitoring to maintain application reliability.

Key Principles

  • Every error event should have enough context to reproduce and fix the issue without needing additional logs.

  • Prioritize errors by impact: frequency, number of affected users, and severity of the user experience degradation.

  • Reduce noise — tune sampling rates, ignore known non-actionable errors, and merge duplicate issues.

  • Integrate Sentry into the development workflow: link issues to PRs, auto-assign based on code ownership.

SDK Setup Best Practices

  • Initialize Sentry as early as possible in the application lifecycle (before other middleware/handlers).

  • Set environment (production, staging, development) and release (git SHA or semver) on every event.

  • Configure traces_sample_rate based on traffic volume: 1.0 for low-traffic, 0.1-0.01 for high-traffic services.

  • Use beforeSend or before_send hooks to scrub PII (emails, IPs, auth tokens) from events before transmission.

  • Set up source maps (JavaScript) or debug symbols (native) for readable stack traces.

Triage Workflow

  • Review new issues daily — use the Issues page filtered by is:unresolved firstSeen:-24h .

  • Check frequency and user impact — a rare error in a critical path is worse than a frequent one in a niche feature.

  • Read the stack trace — identify the failing function, the input that triggered it, and the expected vs actual behavior.

  • Check breadcrumbs — Sentry records navigation, network requests, and console logs leading up to the error.

  • Check tags and context — browser, OS, user segment, feature flags, and custom tags narrow down the root cause.

  • Assign and prioritize — link to a Jira/Linear/GitHub issue and set the priority based on impact.

Alert Configuration

  • Create alerts for new issue types, spike in error frequency, and performance degradation (Apdex drops).

  • Use issue.priority and event.frequency conditions to avoid alert fatigue.

  • Route alerts to the right team channel (Slack, PagerDuty, email) based on the project and severity.

  • Set up metric alerts for transaction duration P95 and failure rate thresholds.

Performance Monitoring

  • Use distributed tracing to identify slow spans across services.

  • Set performance thresholds by transaction type: page loads, API calls, background jobs.

  • Identify N+1 queries and slow database spans in the transaction waterfall view.

  • Use web vitals (LCP, FID, CLS) for frontend performance tracking.

Pitfalls to Avoid

  • Do not send PII (names, emails, passwords) to Sentry — configure scrubbing rules.

  • Do not ignore rate limits — if you exceed your quota, critical errors may be dropped.

  • Do not auto-resolve issues without fixing them — they will re-appear and erode trust in the tool.

  • Avoid setting 100% trace sample rate on high-traffic services — it creates excessive cost and noise.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

ansible

No summary provided by upstream source.

Repository SourceNeeds Review
General

sysadmin

No summary provided by upstream source.

Repository SourceNeeds Review
General

linux-networking

No summary provided by upstream source.

Repository SourceNeeds Review
General

css-expert

No summary provided by upstream source.

Repository SourceNeeds Review