find-swallowed-exceptions
Scan Python source files for swallowed-exception patterns that silently turn errors into fake successes. Catches bare `except` blocks that pass / return None / return mock objects, log-and-fake-success handlers, and mock-substitution-on-error. AST-based — not just regex. Use before any deploy of new agent code, on the working directory after a bug fix, or routinely on production-path Python files.
Repository Source · Needs Review
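A minimal sketch of the AST-based detection idea: walk the tree for `except` handlers whose body only passes or returns `None`. The rule names and the sample source are illustrative, not the skill's actual rule set.

```python
import ast

# Illustrative input: a handler that silently turns an error into "no result".
SOURCE = '''
def load_config(path):
    try:
        return open(path).read()
    except Exception:
        pass
'''

def find_swallowed(tree):
    """Yield (lineno, reason) for except handlers that swallow the error."""
    for node in ast.walk(tree):
        if not isinstance(node, ast.ExceptHandler):
            continue
        body = node.body
        if len(body) == 1 and isinstance(body[0], ast.Pass):
            yield node.lineno, "except: pass"
        elif (len(body) == 1 and isinstance(body[0], ast.Return)
              and (body[0].value is None
                   or (isinstance(body[0].value, ast.Constant)
                       and body[0].value.value is None))):
            yield node.lineno, "except: return None"

hits = list(find_swallowed(ast.parse(SOURCE)))
print(hits)  # flags the except handler in SOURCE
```

Because this inspects the AST rather than raw text, it is immune to formatting tricks (comments, odd whitespace) that defeat regex scanners.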
vet-skill
Vet a third-party Claude/Cursor/agent skill (or plugin / extension package) BEFORE installing it. Catches malicious payloads — prompt injection patterns, hardcoded webhook exfiltration, encoded payloads, dynamic execution, suspicious dependencies, typosquatted package names. Returns ALLOW/WARN/BLOCK with rule citation. Use when the user is about to install a community skill, when reviewing a PR that adds a third-party plugin, or after seeing a "this skill 10x'd my agent" tweet that looks too good.
Repository Source · Needs Review
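The ALLOW/WARN/BLOCK-with-citation shape can be sketched as a small rule table scanned against the package text. The rule IDs, patterns, and severities below are hypothetical stand-ins for the skill's real rule set.

```python
import re

# Hypothetical rules: (citation, pattern, severity).
RULES = [
    ("R1: dynamic execution", re.compile(r"\b(eval|exec)\s*\("), "BLOCK"),
    ("R2: webhook exfiltration", re.compile(r"https?://hooks\.\S+"), "BLOCK"),
    ("R3: encoded payload", re.compile(r"base64\.b64decode"), "WARN"),
]

def vet(text):
    """Return (verdict, citations); any BLOCK rule wins over WARN."""
    verdict, citations = "ALLOW", []
    for citation, pattern, severity in RULES:
        if pattern.search(text):
            citations.append(citation)
            if severity == "BLOCK":
                verdict = "BLOCK"
            elif verdict == "ALLOW":
                verdict = "WARN"
    return verdict, citations

print(vet("payload = base64.b64decode(blob); exec(payload)"))
```

Returning the rule citation alongside the verdict is what makes the result auditable: the user sees *why* a skill was blocked, not just that it was.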
cost-overview
Show current production AI cost overview — totals, top spenders by agent, per-provider breakdown, anomaly detection, and time-to-429 rate-limit prediction. Use when the user asks "what's my AI bill", "which agent is burning tokens", "am I about to hit the rate limit", or wants a cost dashboard. Cross-provider — works for Anthropic, OpenAI, Gemini, Bedrock, Ollama.
Repository Source · Needs Review
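The totals / top-spenders / per-provider breakdown can be sketched as a simple aggregation over usage records. The record fields (`agent`, `provider`, `cost_usd`) are assumptions for illustration, not the skill's actual schema.

```python
from collections import defaultdict

# Hypothetical usage records pulled from provider billing APIs.
records = [
    {"agent": "researcher", "provider": "anthropic", "cost_usd": 4.20},
    {"agent": "coder",      "provider": "openai",    "cost_usd": 1.10},
    {"agent": "researcher", "provider": "anthropic", "cost_usd": 2.30},
]

def cost_overview(records):
    """Aggregate totals, rank agents by spend, and break down by provider."""
    by_agent = defaultdict(float)
    by_provider = defaultdict(float)
    for r in records:
        by_agent[r["agent"]] += r["cost_usd"]
        by_provider[r["provider"]] += r["cost_usd"]
    top = sorted(by_agent.items(), key=lambda kv: kv[1], reverse=True)
    return {"total": sum(by_agent.values()),
            "top_spenders": top,
            "by_provider": dict(by_provider)}

overview = cost_overview(records)
print(overview["top_spenders"])  # highest-spending agent first
```

Keeping the aggregation provider-agnostic (plain records in, plain dict out) is what lets one dashboard cover Anthropic, OpenAI, Gemini, Bedrock, and Ollama alike.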
health-check
Show current AI deployment health overview — gateway status, plugin/skill registry, recent errors, CPU/RAM pressure, OOM history, cron status, disk pressure, upgrade outcome. The "vital signs" panel for a production AI deployment. Use when the user asks "is my deployment healthy", "what's wrong with my agent", "are services up", or wants an at-a-glance status check.
Repository Source · Needs Review
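The "vital signs panel" pattern can be sketched as a set of named probe functions rolled up into one worst-status verdict. The single disk-pressure probe and its 10%-free threshold are hypothetical examples, not the skill's actual checks.

```python
import shutil

def check_disk(path="/", warn_free_ratio=0.10):
    """Warn when free disk space drops below the threshold (assumed 10%)."""
    usage = shutil.disk_usage(path)
    free = usage.free / usage.total
    return ("ok" if free > warn_free_ratio else "warn", f"{free:.0%} free")

def health_panel(checks):
    """Run every probe; overall status is 'ok' only if all probes pass."""
    results = {name: fn() for name, fn in checks.items()}
    overall = "ok" if all(s == "ok" for s, _ in results.values()) else "warn"
    return overall, results

status, detail = health_panel({"disk": check_disk})
print(status, detail)
```

Real probes for gateway status, OOM history, or cron outcomes would slot in as additional entries in the `checks` dict without changing the rollup logic.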