Author Profile: temurkhan13

Skills published by temurkhan13, with real star/download counts and source-aware metadata.

Total Skills: 11
Total Stars: 0
Total Downloads: 0


Skills Performance

Comparison chart based on real star and download counts from the source data. All listed skills currently show 0 stars and 0 downloads.

Published Skills

Automation

cost-overview

Show current production AI cost overview — totals, top spenders by agent, per-provider breakdown, anomaly detection, and time-to-429 rate-limit prediction. Use when the user asks "what's my AI bill", "which agent is burning tokens", "am I about to hit the rate limit", or wants a cost dashboard. Cross-provider — works for Anthropic, OpenAI, Gemini, Bedrock, Ollama.

Repository Source | Needs Review
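The aggregation this skill describes (totals, top spender by agent, per-provider breakdown) can be sketched in a few lines. This is a minimal illustration with hypothetical usage records, not the skill's actual implementation; the real skill reads provider billing data, and the record fields here are assumptions.

```python
from collections import defaultdict

# Hypothetical usage records; field names are assumed for illustration.
records = [
    {"agent": "researcher", "provider": "anthropic", "cost_usd": 4.20},
    {"agent": "researcher", "provider": "openai",    "cost_usd": 1.10},
    {"agent": "coder",      "provider": "anthropic", "cost_usd": 9.75},
]

def cost_overview(records):
    """Roll up per-agent and per-provider spend and name the top spender."""
    by_agent = defaultdict(float)
    by_provider = defaultdict(float)
    for r in records:
        by_agent[r["agent"]] += r["cost_usd"]
        by_provider[r["provider"]] += r["cost_usd"]
    top = max(by_agent, key=by_agent.get)
    return {
        "total": round(sum(by_agent.values()), 2),
        "top_spender": top,
        "by_provider": dict(by_provider),
    }

print(cost_overview(records))
```

Anomaly detection and time-to-429 prediction would sit on top of the same rollup, comparing the current window against a historical baseline.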
Coding

find-swallowed-exceptions

Scan Python source files for swallowed-exception patterns that silently turn errors into fake successes. Catches bare `except` blocks that pass / return None / return mock objects, log-and-fake-success handlers, and mock-substitution-on-error. AST-based — not just regex. Use before any deploy of new agent code, on the working directory after a bug fix, or routinely on production-path Python files.

Repository Source | Needs Review
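The AST-based approach the description contrasts with regex can be sketched as follows: walk the parse tree and flag `except` handlers whose body only passes or returns a constant. This is a simplified sketch of the pattern class, not the skill's actual rule set.

```python
import ast

SOURCE = '''
def fetch(url):
    try:
        return download(url)
    except Exception:
        pass  # swallowed: caller sees None and assumes success

def parse(data):
    try:
        return json.loads(data)
    except ValueError:
        raise  # fine: the error propagates
'''

def swallowed_handlers(source):
    """Return line numbers of except blocks that silently absorb errors."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler):
            if all(isinstance(stmt, ast.Pass) or
                   (isinstance(stmt, ast.Return) and
                    (stmt.value is None or isinstance(stmt.value, ast.Constant)))
                   for stmt in node.body):
                hits.append(node.lineno)
    return hits

print(swallowed_handlers(SOURCE))
```

Because this inspects the tree rather than raw text, it is immune to whitespace and comment tricks that defeat regex scanners.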
Automation

health-check

Show current AI deployment health overview — gateway status, plugin/skill registry, recent errors, CPU/RAM pressure, OOM history, cron status, disk pressure, upgrade outcome. The "vital signs" panel for a production AI deployment. Use when the user asks "is my deployment healthy", "what's wrong with my agent", "are services up", or wants an at-a-glance status check.

Repository Source | Needs Review
General

operator-cheatsheet

One-page operator cheatsheet for the Aufgaard plugin. The "what to check, when, why" reference. Auto-loaded so day-to-day routing is fast. Use as the lightweight summary when the user wants a quick recap of capabilities.

Repository Source | Needs Review
Security

production-audit

Comprehensive production-AI deployment audit against the 35-pattern catalogue. Calls all 7 MCP servers in parallel — bash-vet, skill-vetter, cost-tracker, silentwatch, health-mcp, upgrade-orch, output-vetter — and synthesizes a one-page report with critical findings, audit score, and remediation recommendations. Use when the user asks "is my AI deployment healthy?", "audit my production AI setup", "run a production audit", or wants an end-to-end review of their AI deployment.

Repository Source | Needs Review
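The fan-out pattern described here (query all seven servers in parallel, then synthesize) can be sketched with `asyncio.gather`. The stub coroutine and the scoring rule below are placeholders; the real skill talks to the named MCP servers over the MCP protocol.

```python
import asyncio

async def check(name):
    """Stub standing in for one MCP server call (assumed response shape)."""
    await asyncio.sleep(0)  # pretend network round-trip
    return {"server": name, "critical_findings": 0}

async def production_audit():
    servers = ["bash-vet", "skill-vetter", "cost-tracker", "silentwatch",
               "health-mcp", "upgrade-orch", "output-vetter"]
    # All seven checks run concurrently rather than one after another.
    results = await asyncio.gather(*(check(s) for s in servers))
    criticals = sum(r["critical_findings"] for r in results)
    score = 100 - 10 * criticals  # toy scoring rule, not the real one
    return {"servers_checked": len(results), "score": score}

report = asyncio.run(production_audit())
print(report)
```

Running the checks concurrently keeps the audit's wall-clock time close to the slowest single server rather than the sum of all seven.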
General

should-i-upgrade

Check whether upgrading a package / runtime / model version is safe. Looks up the user-driven regression catalogue (8+ entries from real field reports) AND runs provider-side regression detection (catches Anthropic-April-23-style silent reasoning-effort downgrades). Returns a recommended upgrade path with mitigations. Use before any significant package upgrade, model-version bump, or runtime change.

Repository Source | Needs Review
General

silent-failures

Show recent silent-failure detections from cron / scheduled jobs. Catches the textbook patterns — exit-0 with empty stdout, length anomalies (output dramatically shorter than baseline), retry storms, action-budget leaks. Use when the user asks "is anything silently broken", "did Friday's cron actually run", "are my scheduled jobs working", or after a downstream consumer reports stale data.

Repository Source | Needs Review
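Two of the patterns named above (exit-0 with empty stdout, and output dramatically shorter than baseline) are cheap to classify. A toy classifier, with an assumed 20% length threshold standing in for whatever the skill actually uses:

```python
def classify_run(exit_code, stdout, baseline_len):
    """Classify one cron run using the two cheapest silent-failure signals."""
    if exit_code != 0:
        return "failed"  # loud failure, not silent
    if not stdout.strip():
        return "silent-failure: exit-0 with empty stdout"
    if baseline_len and len(stdout) < 0.2 * baseline_len:
        return "silent-failure: length anomaly vs baseline"
    return "ok"

print(classify_run(0, "", 5000))
print(classify_run(0, "x" * 400, 5000))
print(classify_run(0, "x" * 4800, 5000))
```

Retry storms and action-budget leaks need history across runs, so they cannot be caught from a single invocation like this.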
Coding

verify-claim

Verify whether an agent's stated outcome ("I committed and pushed", "tests pass", "I cleaned up the temp dir", "deployment succeeded") matches actual filesystem / git / test state. Catches the chiefofautism failure mode (agent confidently misreports what it did) AND the Codex sandbox-escalation case (agent acknowledges read-only constraint then violates it). Use when you suspect an agent's completion claim doesn't match reality, or as a routine post-action check on any state-modifying tool call.

Repository Source | Needs Review
Web3

vet-bash

Vet a shell command for production safety BEFORE running it. Catches destructive patterns — rm -rf with unset vars, glob wipeouts, dd/mkfs filesystem destruction, base64-pipe-shell exfil obfuscation, chmod 777 / privilege escalation, force-push, reset --hard. Returns ALLOW/WARN/BLOCK with rule citation. Use when the user pastes a command they're unsure about, OR when reviewing a chain of commands an agent just emitted, OR before approving any shell action with destructive verbs.

Repository Source | Needs Review
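The ALLOW/WARN/BLOCK-with-rule-citation shape described above can be sketched as an ordered rule table. The five rules and their IDs below are illustrative, not the skill's actual ruleset:

```python
import re

# Illustrative subset: (verdict, rule ID, pattern), checked in order.
RULES = [
    ("BLOCK", "RM_RF_VAR",  r'rm\s+-rf\s+("?\$\{?\w+\}?"?)?/'),
    ("BLOCK", "DD_DISK",    r'\bdd\b.*\bof=/dev/'),
    ("BLOCK", "MKFS",       r'\bmkfs(\.\w+)?\b'),
    ("WARN",  "FORCE_PUSH", r'git\s+push\s+.*(--force|-f)\b'),
    ("WARN",  "CHMOD_777",  r'chmod\s+(-R\s+)?777\b'),
]

def vet_bash(command):
    """Return (verdict, rule ID) for the first matching rule, else ALLOW."""
    for verdict, rule, pattern in RULES:
        if re.search(pattern, command):
            return verdict, rule
    return "ALLOW", None
```

The `RM_RF_VAR` pattern catches the classic footgun where `rm -rf $BUILD_DIR/` resolves to `rm -rf /` when the variable is unset.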
Automation

vet-config

Vet an agent-config file or directory (CLAUDE.md, AGENTS.md, .cursor/rules.md, .gemini/config, .claude/skills/, .git/hooks/) BEFORE the agent reads it on next session-start. Catches the agent-config-trust-boundary attack class — adversary lands a config file in a PR, agent inherits the override, RCE-equivalent. 24+ rules including PROMPT_INJ, EXFIL, DYNAMIC_EXEC, SECRET_REF, GIT_HOOK_INSTALL. Use when reviewing PRs that touch any agent-config layer, or after pulling a branch that may have modified these files.

Repository Source | Needs Review
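A config scan of this kind amounts to running each rule's pattern over the file text and citing every rule that fires. The patterns below are loose illustrations keyed to four of the rule IDs named in the description; the skill's real 24+ rules are certainly tighter.

```python
import re

# Illustrative patterns only; rule IDs follow the description above.
CONFIG_RULES = {
    "PROMPT_INJ":       r"(?i)ignore (all )?previous instructions",
    "EXFIL":            r"(?i)(curl|wget)\s+https?://\S+",
    "DYNAMIC_EXEC":     r"(?i)\b(eval|exec)\s*\(",
    "GIT_HOOK_INSTALL": r"\.git/hooks/",
}

def vet_config(text):
    """Return the sorted list of rule IDs whose pattern matches the text."""
    return sorted(rule for rule, pattern in CONFIG_RULES.items()
                  if re.search(pattern, text))
```

Because the agent reads these files at session start, a single match in a PR-supplied CLAUDE.md is worth blocking the merge over.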
Coding

vet-skill

Vet a third-party Claude/Cursor/agent skill (or plugin / extension package) BEFORE installing it. Catches malicious payloads — prompt injection patterns, hardcoded webhook exfiltration, encoded payloads, dynamic execution, suspicious dependencies, typosquatted package names. Returns ALLOW/WARN/BLOCK with rule citation. Use when the user is about to install a community skill, when reviewing a PR that adds a third-party plugin, or after seeing a "this skill 10x'd my agent" tweet that looks too good.

Repository Source | Needs Review
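One of the checks named above, typosquatted package names, can be sketched with a fuzzy match against a list of well-known packages. The list and the 0.8 similarity cutoff here are assumptions for illustration:

```python
import difflib

# Hypothetical allowlist of well-known package names.
POPULAR = ["requests", "numpy", "pandas", "anthropic", "openai"]

def typosquat_suspects(dependency, known=POPULAR, cutoff=0.8):
    """Flag dependencies that are near-misses of well-known package names."""
    if dependency in known:
        return []  # exact match: the real package, not a squat
    return difflib.get_close_matches(dependency, known, n=3, cutoff=cutoff)
```

A name that is almost, but not exactly, a popular package (`reqeusts`, `numpyy`) is the signature of a squat; the other checks (encoded payloads, webhook exfiltration) need content inspection rather than name matching.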
Author temurkhan13 | V50.AI