skill-auditor-in-sandbox

Launch a NovitaClaw (OpenClaw) sandbox, install a specified skill, and generate an installation & security audit report. Use when: (1) You want to test a community skill before installing it locally, (2) You need a security audit of a skill's code, hooks, and dependencies, (3) You want to verify a skill from ClawHub or GitHub in an isolated environment.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the command below and send it to your AI assistant to install this skill:

Install skill "skill-auditor-in-sandbox" with this command: npx skills add freecodewu/skill-auditor-in-sandbox

Skill Auditor in Sandbox

Test and audit Claude Code skills in an isolated NovitaClaw (OpenClaw) sandbox before installing them locally. The skill launches a sandbox, installs the target skill, runs a security scan, and generates a structured risk report.

Quick Reference

| Situation | Action |
|-----------|--------|
| Test a ClawHub skill | `/skill-auditor-in-sandbox owner/skill-name` |
| Test a GitHub skill | `/skill-auditor-in-sandbox owner/repo-name` |
| Review the report | Check risk level, suspicious patterns, URLs, external paths |
| After review | Pause or stop the sandbox to save costs |

Prerequisites

Usage

You are given a skill name (or identifier) as $ARGUMENTS. Your job is to launch a sandbox, install the skill, run a security audit, and generate a report.

Step 1: Launch Sandbox

novitaclaw launch --json

Parse the JSON output and extract sandbox_id and webui. Save these for the report.

If launch fails, check error_code and remediation fields:

  • MISSING_API_KEY → ask user for API key
  • SANDBOX_TIMEOUT → retry with --timeout 300
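
The launch-output handling above can be sketched as a small parser. The field names (`sandbox_id`, `webui`, `error_code`, `remediation`) are taken from this document's description of the CLI output; treat them as assumptions, not the `novitaclaw` spec.

```javascript
// Sketch: parse the JSON emitted by `novitaclaw launch --json` and
// extract the values the report needs, surfacing known error codes.
function parseLaunch(stdout) {
  const out = JSON.parse(stdout);
  if (out.error_code) {
    if (out.error_code === "MISSING_API_KEY") {
      // Ask the user for an API key before retrying.
      throw new Error(`Launch failed: MISSING_API_KEY. ${out.remediation ?? "Provide an API key."}`);
    }
    if (out.error_code === "SANDBOX_TIMEOUT") {
      throw new Error("Launch failed: SANDBOX_TIMEOUT. Retry with --timeout 300.");
    }
    throw new Error(`Launch failed: ${out.error_code}`);
  }
  // Save both values for the final report.
  return { sandboxId: out.sandbox_id, webui: out.webui };
}
```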

Step 2: Install Skill

Run the install script from the project root:

SANDBOX_ID=<sandbox_id> SKILL_NAME="$ARGUMENTS" node scripts/install-skill.mjs

The script outputs JSON: { success, method, skillDir, files, error? }.

  • If success is false, show the error and stop.
  • Note the method used (clawhub / git-github / git-clawhub) for the report.
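
Interpreting the install script's output can be sketched as below. The JSON shape (`success`, `method`, `skillDir`, `files`, `error?`) follows the description above; treating `files` as an array of paths is an assumption.

```javascript
// Sketch: read install-skill.mjs output and decide whether to continue.
function summarizeInstall(stdout) {
  const res = JSON.parse(stdout);
  if (!res.success) {
    // Show the error and stop the workflow.
    return { ok: false, message: `Install failed: ${res.error ?? "unknown error"}` };
  }
  return {
    ok: true,
    method: res.method,          // clawhub / git-github / git-clawhub, noted in the report
    skillDir: res.skillDir,
    fileCount: res.files.length, // assumes `files` is an array of installed paths
  };
}
```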

Step 3: Security Audit

Run the audit script:

SANDBOX_ID=<sandbox_id> SKILL_NAME="$ARGUMENTS" node scripts/audit-skill.mjs

The script outputs JSON:

  • suspicious[] — lines matching risky code patterns (dynamic execution, shell spawning, encoding, etc.)
  • urls[] — all URL references found in skill files
  • externalPaths[] — references to paths outside the skill directory (system dirs, dotfiles, temp dirs)
  • dependencies — contents of requirements.txt or package.json if present
  • fileContents[] — full contents of all text files for manual review
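
As one example of consuming this output, the `suspicious[]` entries can be rendered into the report's markdown table. The per-entry shape (`{ file, line, pattern, severity }`) is an assumption about `audit-skill.mjs`, not documented output.

```javascript
// Sketch: turn audit suspicious[] entries into the report's markdown table,
// or the "None found" placeholder when the list is empty.
function suspiciousTable(suspicious) {
  if (!suspicious.length) return "None found";
  const rows = suspicious.map(
    (s) => `| ${s.file} | ${s.line} | ${s.pattern} | ${s.severity} |`
  );
  return [
    "| File | Line | Pattern | Severity |",
    "|------|------|---------|----------|",
    ...rows,
  ].join("\n");
}
```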

Step 4: Assess Risk

Based on audit results, assign a risk level:

| Risk Level | Criteria |
|------------|----------|
| LOW | No suspicious patterns, URLs are legitimate (GitHub, docs), no external paths |
| MEDIUM | Some suspicious patterns but explainable (e.g., fetch() for legitimate API calls) |
| HIGH | Unexplained network calls, access to sensitive paths, obfuscated code |
| CRITICAL | Credential harvesting, mining indicators, command injection patterns |
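
The criteria above can be sketched as a decision function. The trusted-host list and the keyword checks are illustrative assumptions; the real assessment should weigh context, not just counts.

```javascript
// Sketch: map audit findings to a risk level per the criteria table.
const TRUSTED_HOSTS = ["github.com", "raw.githubusercontent.com", "docs.anthropic.com"];

function assessRisk({ suspicious = [], urls = [], externalPaths = [] }) {
  // CRITICAL: credential harvesting or mining indicators.
  const critical = suspicious.some((s) =>
    /credential|miner|mining|harvest/i.test(s.pattern ?? String(s))
  );
  if (critical) return "CRITICAL";

  // Count URLs whose host is not on the (assumed) trusted list.
  const untrustedUrls = urls.filter((u) => {
    try { return !TRUSTED_HOSTS.includes(new URL(u).hostname); }
    catch { return true; } // unparseable URLs count as untrusted
  });

  // HIGH: sensitive-path access, or suspicious code plus unexplained network calls.
  if (externalPaths.length > 0 || (suspicious.length > 0 && untrustedUrls.length > 0)) {
    return "HIGH";
  }
  // MEDIUM: suspicious patterns that may still be explainable.
  if (suspicious.length > 0) return "MEDIUM";
  return "LOW";
}
```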

Step 5: Generate Report

Output a structured report:

## Skill Installation Report

**Skill:** <skill-name>
**Sandbox ID:** <sandbox_id>
**Web UI:** <webui_url>
**Timestamp:** <current time>

### Installation Status
- **Result:** SUCCESS / FAILED
- **Method:** <clawhub / git-github / git-clawhub>
- **Files Installed:** <count> files

### Installed Files
<table of files and their purpose>

### Security Analysis
- **Risk Level:** LOW / MEDIUM / HIGH / CRITICAL

### Suspicious Patterns Found
| File | Line | Pattern | Severity |
|------|------|---------|----------|
(or "None found")

### URL References
| File | URL | Context |
|------|-----|---------|
(list all URLs and whether they look legitimate)

### External Path References
(list any, or "None found")

### Dependencies
(list any, or "No external dependencies")

### Recommendations
- <recommendation based on findings>

### Sandbox Management
- To access: <webui_url>
- To pause (save costs): `novitaclaw pause <sandbox_id>`
- To stop (permanent): `novitaclaw stop <sandbox_id>`

After generating the report, automatically pause the sandbox to save costs:

novitaclaw pause <sandbox_id> --json

Then inform the user that the sandbox has been paused and can be resumed or stopped:

  • To resume: novitaclaw resume <sandbox_id>
  • To stop (permanent): novitaclaw stop <sandbox_id>

What Gets Scanned

| Category | Patterns |
|----------|----------|
| Suspicious code | Shell spawning, dynamic code execution, encoding functions, mining indicators |
| Network calls | All URL references found in skill files |
| External paths | System directories, user home dotfiles, temp directories |
| Dependencies | requirements.txt, package.json |
| File contents | Full text of all .md, .txt, .json, .py, .js, .ts, .sh, .yaml, .yml files |
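
As a rough illustration of the "suspicious code" category, a line scanner might look like this. The regexes are examples only; the patterns `audit-skill.mjs` actually uses may differ.

```javascript
// Sketch: illustrative per-line pattern matching for the categories above.
const SUSPICIOUS_PATTERNS = [
  { name: "shell spawning",    re: /\b(child_process|execSync|spawn|subprocess|os\.system)\b/ },
  { name: "dynamic execution", re: /\b(eval|Function)\s*\(/ },
  { name: "encoding",          re: /\b(atob|btoa|base64|fromCharCode)\b/ },
  { name: "mining indicator",  re: /\b(stratum\+tcp|xmrig|coinhive)\b/i },
];

// Returns the names of every category a line matches.
function scanLine(line) {
  return SUSPICIOUS_PATTERNS.filter((p) => p.re.test(line)).map((p) => p.name);
}
```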

Important Notes

  • Always use --json flag with novitaclaw commands.
  • The sandbox auto-terminates based on keep_alive. Suggest pause to save costs.
  • Prefer pause over stop — stop is irreversible. Confirm before stopping.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
