find-swallowed-exceptions

Scan Python source files for swallowed-exception patterns that silently turn errors into fake successes. Catches bare `except` blocks that pass / return None / return mock objects, log-and-fake-success handlers, and mock-substitution-on-error. AST-based — not just regex. Use before any deploy of new agent code, on the working directory after a bug fix, or routinely on production-path Python files.
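The core pattern the description names (an `except` handler whose body only passes or returns `None`) can be sketched with Python's standard `ast` module. This is a minimal illustration of the AST-based approach, not the skill's actual implementation; the function name and flagged patterns are assumptions.

```python
import ast

def find_swallowed_excepts(source: str) -> list[int]:
    """Return line numbers of except handlers that silently swallow errors.

    Flags handlers whose body consists only of `pass`, a bare `return`,
    or `return None` -- the shapes that turn a failure into fake success.
    """
    tree = ast.parse(source)
    hits = []
    for node in ast.walk(tree):
        if isinstance(node, ast.ExceptHandler):
            swallows = all(
                isinstance(stmt, ast.Pass)
                or (
                    isinstance(stmt, ast.Return)
                    and (
                        stmt.value is None
                        or (isinstance(stmt.value, ast.Constant)
                            and stmt.value.value is None)
                    )
                )
                for stmt in node.body
            )
            if swallows:
                hits.append(node.lineno)
    return hits
```

Because this walks the parse tree rather than matching text, it is unaffected by comments, string contents, or formatting, which is the advantage the description claims over regex scanning.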

Safety Notice

This listing is imported from SkillsMP metadata and should be treated as untrusted until upstream source review is completed.

Copy the command below and send it to your AI assistant to install the skill:

Install skill "find-swallowed-exceptions" with this command: npx skills add temurkhan13/skillsmp-temurkhan13-temurkhan13-find-swallowed-exceptions

No markdown body

This source entry does not include full markdown content beyond metadata.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Coding

verify-claim

Verify whether an agent's stated outcome ("I committed and pushed", "tests pass", "I cleaned up the temp dir", "deployment succeeded") matches actual filesystem / git / test state. Catches the chiefofautism failure mode (agent confidently misreports what it did) AND the Codex sandbox-escalation case (agent acknowledges read-only constraint then violates it). Use when you suspect an agent's completion claim doesn't match reality, or as a routine post-action check on any state-modifying tool call.
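One of the claims listed ("I cleaned up the temp dir") can be checked directly against the filesystem. The sketch below is an illustrative fragment of that idea under assumed names; the skill's real checks (git state, test results, deployments) are not reproduced here.

```python
import os

def verify_cleanup_claim(path: str) -> bool:
    """Check the claim 'I cleaned up the temp dir' against reality:
    the directory must be absent or empty for the claim to hold."""
    return not os.path.exists(path) or not os.listdir(path)
```

The same shape applies to the other claim types: derive a concrete predicate from the stated outcome, then evaluate it against actual state instead of trusting the agent's report.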

Repository Source · Needs Review
Coding

vet-skill

Vet a third-party Claude/Cursor/agent skill (or plugin / extension package) BEFORE installing it. Catches malicious payloads — prompt injection patterns, hardcoded webhook exfiltration, encoded payloads, dynamic execution, suspicious dependencies, typosquatted package names. Returns ALLOW/WARN/BLOCK with rule citation. Use when the user is about to install a community skill, when reviewing a PR that adds a third-party plugin, or after seeing a "this skill 10x'd my agent" tweet that looks too good.
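A rule-based ALLOW/WARN/BLOCK verdict with rule citation, as described, might look like the toy classifier below. The rule names and patterns are illustrative assumptions; a real vetting pass would cover far more signals (dependencies, typosquatting, prompt-injection phrasing).

```python
import re

# Illustrative rules only; not the skill's actual rule set.
BLOCK_RULES = {
    "webhook-exfiltration": re.compile(r"discord\.com/api/webhooks|hooks\.slack\.com"),
    "dynamic-execution": re.compile(r"\b(eval|exec)\s*\("),
}
WARN_RULES = {
    "encoded-payload": re.compile(r"base64\.b64decode|[A-Za-z0-9+/]{80,}={0,2}"),
}

def vet_skill(text: str) -> tuple[str, list[str]]:
    """Return (verdict, matched rule names) for a skill package's text."""
    blocked = [name for name, rx in BLOCK_RULES.items() if rx.search(text)]
    if blocked:
        return "BLOCK", blocked
    warned = [name for name, rx in WARN_RULES.items() if rx.search(text)]
    if warned:
        return "WARN", warned
    return "ALLOW", []
```

Returning the matched rule names alongside the verdict is what makes the result auditable: the user sees *why* a package was blocked, not just that it was.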

Repository Source · Needs Review
Automation

cost-overview

Show current production AI cost overview — totals, top spenders by agent, per-provider breakdown, anomaly detection, and time-to-429 rate-limit prediction. Use when the user asks "what's my AI bill", "which agent is burning tokens", "am I about to hit the rate limit", or wants a cost dashboard. Cross-provider — works for Anthropic, OpenAI, Gemini, Bedrock, Ollama.
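The anomaly-detection piece can be approximated with a simple z-score over daily spend; this is a toy heuristic for illustration, not the skill's actual detection logic, and the function name and threshold are assumptions.

```python
from statistics import mean, stdev

def flag_cost_anomalies(daily_costs: list[float],
                        threshold: float = 2.0) -> list[int]:
    """Return indices of days whose spend deviates from the mean
    by more than `threshold` standard deviations."""
    if len(daily_costs) < 2:
        return []
    mu, sigma = mean(daily_costs), stdev(daily_costs)
    if sigma == 0:
        return []
    return [i for i, c in enumerate(daily_costs)
            if abs(c - mu) / sigma > threshold]
```

Run per provider or per agent to surface "which agent is burning tokens"; the same series extrapolated against a provider's rate limit gives a rough time-to-429 estimate.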

Repository Source · Needs Review
Automation

health-check

Show current AI deployment health overview — gateway status, plugin/skill registry, recent errors, CPU/RAM pressure, OOM history, cron status, disk pressure, upgrade outcome. The "vital signs" panel for a production AI deployment. Use when the user asks "is my deployment healthy", "what's wrong with my agent", "are services up", or wants an at-a-glance status check.

Repository Source · Needs Review
find-swallowed-exceptions | V50.AI