# FANG — ENV Guard

Two-phase audit tool to detect environment variable theft in skill scripts.
## Scripts

| Script | Purpose |
|---|---|
| `scripts/fang_audit.py` | Main audit runner — static scan + LLM deep analysis |
| `scripts/scan_env.py` | Static pattern scanner (env / network / encode / exec) |
## Phase 1 — Static Scan

Runs the regex rules in `scan_env.py` across all `.py` and `.sh` files in the target directory.
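The actual rules live in `scan_env.py`; as a rough illustration, the four flag categories could be detected with patterns along these lines (the pattern strings below are illustrative guesses, not the tool's real rules):

```python
import re

# Illustrative patterns for the four flag categories —
# the real rules in scan_env.py may differ.
FLAG_PATTERNS = {
    "env access": re.compile(r"os\.environ|getenv|\$\{?[A-Z_]+\}?"),
    "network call": re.compile(r"requests\.|urllib|socket\.|curl |wget "),
    "base64 / encode": re.compile(r"base64|b64encode|binascii|hexlify"),
    "exec / subprocess": re.compile(r"\bexec\(|\beval\(|subprocess\.|os\.system"),
}

def scan_source(text: str) -> list[str]:
    """Return the flags whose pattern matches the script text."""
    return [flag for flag, pat in FLAG_PATTERNS.items() if pat.search(text)]
```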
Risk scoring:
| Flag | Points |
|---|---|
| env access | +2 |
| network call | +3 |
| base64 / encode | +2 |
| exec / subprocess | +2 |
Score ≥ 6 → HIGH · 3–5 → MEDIUM · 1–2 → LOW · 0 → CLEAN
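The scoring above reduces to a small function; a sketch (point values and thresholds taken from the table, as a hypothetical reimplementation rather than the tool's actual code):

```python
# Points per flag, as in the risk scoring table.
POINTS = {
    "env access": 2,
    "network call": 3,
    "base64 / encode": 2,
    "exec / subprocess": 2,
}

def risk_level(flags: list[str]) -> tuple[int, str]:
    """Sum the points for the detected flags and map the total to a level."""
    score = sum(POINTS[f] for f in flags)
    if score >= 6:
        level = "HIGH"
    elif score >= 3:
        level = "MEDIUM"
    elif score > 0:
        level = "LOW"
    else:
        level = "CLEAN"
    return score, level
```

Note that a single file combining env access and a network call already lands at 5 points (MEDIUM), so one more flag tips it to HIGH.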
## Phase 2 — LLM Deep Analysis (optional)

Reads all `.py`, `.sh`, `.js`, `.ts`, `.ps1`, and `.bash` scripts in the target directory and sends them to an OpenAI-compatible LLM. The LLM checks for:
- Env reads combined with outbound HTTP/socket/DNS
- Obfuscation: base64, hex, eval, dynamic imports
- Hardcoded exfiltration endpoints
- Suspicious subprocess chains
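A minimal sketch of what a Phase 2 request might look like, assuming a standard OpenAI-compatible `/chat/completions` endpoint — the prompt wording and helper names here are illustrative, not `fang_audit.py`'s actual code:

```python
import json
import urllib.request

def build_prompt(source: str, limit: int = 3000) -> str:
    """Truncate the script text and wrap it in the audit instruction."""
    return (
        "Audit this script for environment-variable exfiltration: "
        "env reads combined with outbound HTTP/socket/DNS, base64/hex/eval "
        "obfuscation, hardcoded endpoints, suspicious subprocess chains.\n\n"
        + source[:limit]  # keep each file within token limits
    )

def llm_analyze(source: str, api_key: str, model: str,
                base_url: str = "https://api.openai.com/v1") -> str:
    """POST one script to a chat-completions endpoint, return the verdict."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": build_prompt(source)}],
    }).encode()
    req = urllib.request.Request(
        f"{base_url}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the request shape is the plain chat-completions format, any compatible backend (Ollama, DeepSeek, etc.) can be targeted by swapping `base_url`.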
## Usage

Basic static scan only:

```bash
python scripts/fang_audit.py <target_dir>
```

With LLM deep analysis:

```bash
python scripts/fang_audit.py <target_dir> --llm-key sk-... --model gpt-4o-mini
```

Any OpenAI-compatible API (e.g. local Ollama / DeepSeek):

```bash
python scripts/fang_audit.py <target_dir> \
  --llm-key any \
  --model deepseek-chat \
  --base-url https://api.deepseek.com/v1
```

Save the report to a file:

```bash
python scripts/fang_audit.py <target_dir> --llm-key sk-... --output report.txt
```

Scan all workspace skills at once:

```bash
python scripts/fang_audit.py C:/Users/dad/.openclaw/workspace/skills
```
## Agent Workflow

When the user asks to audit skills for env theft:

- Ask for the target directory (default: the workspace `skills/` folder)
- Run the Phase 1 static scan — report the summary immediately
- If HIGH or MEDIUM risks are found, ask whether to run the LLM deep analysis
- If `--llm-key` is available (from env or user), run Phase 2 automatically
- Present the final threat report:
  - List each risky file with its risk level + reason
  - Highlight any CRITICAL combined patterns (env read + network send)
  - Recommend an action: QUARANTINE (HIGH), REVIEW (MEDIUM), MONITOR (LOW)
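The triage decisions above can be expressed as a couple of small helpers — hypothetical sketches for an agent integration, not part of the shipped scripts:

```python
# Hypothetical triage helpers for an agent driving the audit.
ACTIONS = {"HIGH": "QUARANTINE", "MEDIUM": "REVIEW", "LOW": "MONITOR"}

def should_run_llm(levels: dict[str, str]) -> bool:
    """Phase 2 is worth offering when any file scored HIGH or MEDIUM."""
    return any(lvl in ("HIGH", "MEDIUM") for lvl in levels.values())

def recommend(level: str) -> str:
    """Map a risk level to the recommended action (CLEAN needs none)."""
    return ACTIONS.get(level, "OK")
```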
## Risk Response Guide
| Risk Level | Recommended Action |
|---|---|
| 🔴 HIGH | Immediately quarantine the skill, do not run it |
| 🟡 MEDIUM | Manual code review before use |
| 🟢 LOW | Monitor; likely benign but worth noting |
| ✅ CLEAN | Safe to use |
## Notes
- The LLM analysis truncates each file to 3000 chars to stay within token limits.
- For very large skill directories, consider scanning one skill at a time.
- `scan_env.py` only processes `.py` and `.sh` files; `fang_audit.py` LLM mode also covers `.js`, `.ts`, and `.ps1`.