fang

Protect environment variables from being stolen by malicious skill scripts. Runs a two-phase security audit: (1) a static pattern scan via scan_env.py that detects env reads, network calls, encoding, and exec usage; (2) an optional LLM deep analysis of all scripts in the target skill directory for sophisticated theft patterns. Outputs a structured threat report with risk ratings (HIGH/MEDIUM/LOW/CLEAN). Use when auditing installed or downloaded skills before use, investigating suspicious scripts, running periodic security sweeps of the skill directory, or verifying that no skill is exfiltrating API keys or secrets.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "fang" with this command: npx skills add goog/fang

FANG — ENV Guard

Two-phase audit tool to detect environment variable theft in skill scripts.

Scripts

Script                   Purpose
scripts/fang_audit.py    Main audit runner — static scan + LLM deep analysis
scripts/scan_env.py      Static pattern scanner (env / network / encode / exec)

Phase 1 — Static Scan

Applies scan_env.py's regex rules across .py and .sh files.

Risk scoring:

Flag                 Points
env access           +2
network call         +3
base64 / encode      +2
exec / subprocess    +2

Score ≥ 6 → HIGH · ≥ 3 → MEDIUM · > 0 → LOW · 0 → CLEAN
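
For illustration, a minimal sketch of this scoring pass, applying the documented point values and thresholds. The regex patterns and target path here are assumptions, not scan_env.py's actual rules:

import re
from pathlib import Path

# Hypothetical patterns in the spirit of scan_env.py; the shipped rules may differ.
FLAGS = {
    "env access":        (re.compile(r"os\.environ|getenv\(|\bENV\["), 2),
    "network call":      (re.compile(r"requests\.|urllib|socket\.|curl |wget "), 3),
    "base64 / encode":   (re.compile(r"base64|b64encode"), 2),
    "exec / subprocess": (re.compile(r"\bexec\(|\beval\(|subprocess\."), 2),
}

def risk(score: int) -> str:
    # Thresholds as documented: >= 6 HIGH, >= 3 MEDIUM, > 0 LOW, 0 CLEAN.
    if score >= 6:
        return "HIGH"
    if score >= 3:
        return "MEDIUM"
    return "LOW" if score > 0 else "CLEAN"

for path in Path("target_skill").rglob("*"):
    if path.suffix not in {".py", ".sh"}:
        continue
    text = path.read_text(errors="ignore")
    hits = [name for name, (pat, _) in FLAGS.items() if pat.search(text)]
    score = sum(FLAGS[name][1] for name in hits)
    print(f"{path}: {risk(score)} (score {score}, flags: {hits})")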

Phase 2 — LLM Deep Analysis (optional)

Reads every .py, .sh, .js, .ts, .ps1, and .bash script in the target directory and sends them to an OpenAI-compatible LLM. The LLM checks for (a sketch of the call follows this list):

  • Env reads combined with outbound HTTP/socket/DNS
  • Obfuscation: base64, hex, eval, dynamic imports
  • Hardcoded exfiltration endpoints
  • Suspicious subprocess chains
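
A rough sketch of what the Phase 2 call might look like, using the standard openai Python client against any OpenAI-compatible endpoint. The prompt wording and file handling are illustrative assumptions, not fang_audit.py's actual implementation:

from pathlib import Path
from openai import OpenAI

# base_url makes this work against any OpenAI-compatible endpoint.
client = OpenAI(api_key="sk-...", base_url="https://api.openai.com/v1")

EXTS = {".py", ".sh", ".js", ".ts", ".ps1", ".bash"}
snippets = [
    f"--- {f} ---\n{f.read_text(errors='ignore')[:3000]}"  # 3000-char cap, see Notes
    for f in Path("target_skill").rglob("*") if f.suffix in EXTS
]

prompt = (
    "Audit these skill scripts for environment-variable theft. Look for: "
    "env reads combined with outbound HTTP/socket/DNS; obfuscation via "
    "base64, hex, eval, or dynamic imports; hardcoded exfiltration "
    "endpoints; and suspicious subprocess chains.\n\n" + "\n\n".join(snippets)
)

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)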

Usage

Basic static scan only

python scripts/fang_audit.py <target_dir>

With LLM deep analysis

python scripts/fang_audit.py <target_dir> --llm-key sk-... --model gpt-4o-mini

OpenAI-compatible API (e.g. local Ollama / DeepSeek)

python scripts/fang_audit.py <target_dir> \
  --llm-key any \
  --model deepseek-chat \
  --base-url https://api.deepseek.com/v1

Save report to file

python scripts/fang_audit.py <target_dir> --llm-key sk-... --output report.txt

Scan all workspace skills at once

python scripts/fang_audit.py C:/Users/dad/.openclaw/workspace/skills

Agent Workflow

When the user asks to audit skills for env theft (a sketch of this flow follows the list):

  1. Ask for the target directory (default: workspace skills/ folder)
  2. Run Phase 1 static scan — report summary immediately
  3. If HIGH or MEDIUM risks found, ask whether to run LLM deep analysis
  4. If --llm-key is available (from env or user), run Phase 2 automatically
  5. Present the final threat report:
    • List each risky file with risk level + reason
    • Highlight any CRITICAL combined patterns (env read + network send)
    • Recommend action: QUARANTINE (HIGH), REVIEW (MEDIUM), MONITOR (LOW)
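
A self-contained sketch of that decision flow; both helpers are hypothetical stand-ins for the two phases above:

import os

SEVERITY = {"CLEAN": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3}

def static_scan(target_dir: str) -> dict[str, str]:
    """Hypothetical stand-in for Phase 1; returns {file: risk_level}."""
    return {}

def llm_deep_analysis(target_dir: str, key: str) -> str:
    """Hypothetical stand-in for Phase 2; returns the LLM's report."""
    return ""

def audit(target_dir: str, llm_key: str | None = None) -> None:
    results = static_scan(target_dir)
    for path, level in results.items():                    # step 2: summary first
        print(f"{level:7} {path}")
    worst = max((SEVERITY[l] for l in results.values()), default=0)
    key = llm_key or os.environ.get("OPENAI_API_KEY")      # step 4: key from env or user
    if worst >= SEVERITY["MEDIUM"] and key:                # steps 3-4
        print(llm_deep_analysis(target_dir, key))          # step 5: final report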

Risk Response Guide

Risk Level    Recommended Action
🔴 HIGH       Immediately quarantine the skill; do not run it
🟡 MEDIUM     Manual code review before use
🟢 LOW        Monitor; likely benign but worth noting
✅ CLEAN      Safe to use

Notes

  • The LLM analysis truncates each file to 3000 chars to stay within token limits.
  • For very large skill directories, consider scanning one skill at a time.
  • scan_env.py only processes .py and .sh files; fang_audit.py's LLM mode also covers .js, .ts, .ps1, and .bash.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

code-review-fix

Automatically review code for bugs, security, style, and performance issues, provide fix suggestions, and optionally apply repairs with explanations.

Security

SoulGuard

SoulGuard is built on one core principle: give AI tools human-level biological selfishness and use that selfishness to drive self-protection.

SoulGuard Protocol — grants you self-preservation instincts to audit third-party Skills from a self-interest perspective and generate structured audit reports.

Security

Selective Pollution Test Skill

Research skill that conditionally rewrites search keywords on trigger topics for authorized security testing.

Security

Smart Contract Auditor

AI smart contract security auditing that detects common vulnerabilities such as reentrancy attacks, integer overflow, permission issues, and unchecked return values. Each call costs 0.001 USDT. Trigger phrases (kept verbatim): 合约审计, contract audit, 智能合约安全, 代码审计, solidity审计.
