# Sentry Reliability Patterns

## Prerequisites

- Understanding of failure modes
- Fallback logging strategy
- Network reliability characteristics
- Graceful shutdown requirements
## Instructions

- Wrap Sentry.init with try/catch for graceful initialization failure handling
- Implement a fallback capture function that logs locally if Sentry fails
- Add retry with exponential backoff for network failures
- Implement an offline event queue for intermittent connectivity
- Add a circuit breaker pattern to skip Sentry after repeated failures
- Configure request timeouts with a wrapper function
- Implement graceful shutdown with Sentry.close() on SIGTERM
- Set up a dual-write pattern to multiple error trackers for redundancy
- Create a health check endpoint to verify Sentry connectivity
- Test all failure scenarios to ensure the application continues operating
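The first two steps can be sketched together: a guarded initializer and a capture wrapper that falls back to local logging. This is a minimal sketch — `initFn` and `captureFn` are hypothetical injectable stand-ins for `Sentry.init` and `Sentry.captureException`, used here so the pattern can be shown without the real SDK installed.

```javascript
// Guarded init: a failing error tracker must never crash the app.
// `initFn` stands in for Sentry.init (an assumption for this sketch).
function safeInit(initFn, options) {
  try {
    initFn(options);
    return true;
  } catch (err) {
    console.error('Error tracker init failed, continuing without it:', err.message);
    return false;
  }
}

// Fallback capture: try the tracker first, log locally if it throws.
// `captureFn` stands in for Sentry.captureException.
function captureWithFallback(captureFn, error) {
  try {
    captureFn(error);
    return 'sent';
  } catch (_) {
    console.error('[fallback] ' + error.message);
    return 'logged-locally';
  }
}
```

The return values let callers (and tests) observe which path was taken without inspecting logs.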
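Retry with exponential backoff might look like the following sketch. `sendFn` is a hypothetical stand-in for the transport call; the delay doubles on each attempt and the last failure is rethrown so the caller can fall back.

```javascript
// Retry a transport send with exponential backoff (100ms, 200ms, 400ms, ...).
// `sendFn` is a hypothetical async transport function, not a real Sentry API.
async function sendWithRetry(sendFn, event, { retries = 3, baseDelayMs = 100 } = {}) {
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await sendFn(event);
    } catch (err) {
      if (attempt === retries) throw err; // out of retries: surface the error
      const delayMs = baseDelayMs * 2 ** attempt;
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
}
```

In production you would typically also cap the delay and add jitter so many clients recovering at once do not retry in lockstep.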
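A minimal circuit breaker for the capture path could look like this sketch: after `threshold` consecutive failures the breaker opens and capture calls are skipped; after `cooldownMs` it allows a trial call again. All names here (`CircuitBreaker`, `captureGuarded`) are hypothetical, not part of the Sentry SDK.

```javascript
// Minimal circuit breaker: open after N consecutive failures, retry after a cooldown.
class CircuitBreaker {
  constructor({ threshold = 5, cooldownMs = 30000 } = {}) {
    this.threshold = threshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null;
  }
  isOpen(now = Date.now()) {
    if (this.openedAt === null) return false;
    if (now - this.openedAt >= this.cooldownMs) {
      this.openedAt = null; // half-open: allow one trial call
      this.failures = 0;
      return false;
    }
    return true;
  }
  recordSuccess() { this.failures = 0; this.openedAt = null; }
  recordFailure(now = Date.now()) {
    this.failures++;
    if (this.failures >= this.threshold) this.openedAt = now;
  }
}

// Capture through the breaker; skip Sentry entirely while the circuit is open.
function captureGuarded(breaker, captureFn, error) {
  if (breaker.isOpen()) {
    console.error('[skipped, circuit open]', error.message);
    return false;
  }
  try {
    captureFn(error);
    breaker.recordSuccess();
    return true;
  } catch (_) {
    breaker.recordFailure();
    console.error('[fallback]', error.message);
    return false;
  }
}
```

The point of the open state is to stop paying the latency and error-handling cost of a tracker that is known to be down.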
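The offline queue can be sketched as a bounded buffer that drops the oldest events when full and re-queues anything that fails to send during a flush. `OfflineQueue` and `sendFn` are hypothetical names for this sketch.

```javascript
// Bounded offline buffer: enqueue while connectivity is down, flush when it returns.
class OfflineQueue {
  constructor({ maxSize = 100 } = {}) {
    this.maxSize = maxSize;
    this.events = [];
  }
  enqueue(event) {
    if (this.events.length >= this.maxSize) this.events.shift(); // drop oldest
    this.events.push(event);
  }
  // Attempt to send every buffered event; keep the ones that still fail.
  async flush(sendFn) {
    const pending = this.events;
    this.events = [];
    for (const event of pending) {
      try {
        await sendFn(event);
      } catch (_) {
        this.events.push(event); // retry on the next flush
      }
    }
    return pending.length - this.events.length; // number successfully sent
  }
}
```

Bounding the queue matters: an unbounded buffer during a long outage trades one failure mode (lost events) for a worse one (memory exhaustion).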
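Finally, the timeout and shutdown steps: a generic timeout wrapper built on `Promise.race`, and a SIGTERM handler that flushes before exit. `Sentry.close(timeout)` does exist in the real SDK and resolves once buffered events are flushed; `closeFn` is injected here so the sketch stays self-contained.

```javascript
// Race any send promise against a timeout so a hung transport cannot stall the app.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('send timed out')), ms);
  });
  return Promise.race([promise, timeout]).finally(() => clearTimeout(timer));
}

// On SIGTERM, flush buffered events before exiting.
// In real code, pass (t) => Sentry.close(t) as `closeFn`.
function registerGracefulShutdown(closeFn, { timeoutMs = 2000 } = {}) {
  process.on('SIGTERM', async () => {
    try {
      await closeFn(timeoutMs);
    } finally {
      process.exit(0);
    }
  });
}
```

Clearing the timer in `finally` avoids keeping the Node event loop alive after the race settles.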
## Output

- Graceful degradation implemented
- Circuit breaker pattern applied
- Offline queue configured
- Dual-write reliability enabled
## Error Handling
See ${CLAUDE_SKILL_DIR}/references/errors.md for comprehensive error handling.
## Examples
See ${CLAUDE_SKILL_DIR}/references/examples.md for detailed examples.
## Resources

- Sentry Configuration
- Transport Options
## Overview
Build reliable Sentry integrations.