Vigil

AI agent safety guardrails for tool calls. Use when (1) you want to validate agent tool calls before execution, (2) building agents that run shell commands, file operations, or API calls, (3) adding a safety layer to any MCP server or agent framework, (4) auditing what your agents are doing. Catches destructive commands, SSRF, SQL injection, path traversal, data exfiltration, prompt injection, and credential leaks. Zero dependencies, under 2ms.
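Vigil's internals and API are not shown on this page, but the pattern it describes — validating a tool call before execution and returning findings — can be sketched as follows. Everything here (function names, rule patterns, finding labels) is an illustrative assumption, not Vigil's actual rule set or interface.

```python
import re

# Hypothetical patterns for three of the check classes the listing
# mentions (destructive commands, path traversal, SSRF). These are
# illustrative, not Vigil's real rules.
DESTRUCTIVE_SHELL = re.compile(r"\brm\s+-rf\s+/|\bmkfs\b|\bdd\s+if=")
PATH_TRAVERSAL = re.compile(r"\.\./")
SSRF_HOSTS = re.compile(r"169\.254\.169\.254|metadata\.google\.internal")

def check_tool_call(tool: str, args: dict) -> list[str]:
    """Return a list of finding labels; an empty list means the call passes.

    An agent runtime would call this before executing the tool and
    block (or ask for confirmation) on any non-empty result.
    """
    findings = []
    text = " ".join(str(v) for v in args.values())
    if tool == "shell" and DESTRUCTIVE_SHELL.search(text):
        findings.append("destructive-command")
    if PATH_TRAVERSAL.search(text):
        findings.append("path-traversal")
    if SSRF_HOSTS.search(text):
        findings.append("ssrf")
    return findings
```

For example, `check_tool_call("shell", {"cmd": "rm -rf /"})` would flag a destructive command, while a benign file read returns no findings. Pure regex matching like this is what makes a sub-2ms, zero-dependency latency budget plausible, at the cost of being bypassable by obfuscated inputs.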

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install the skill "Vigil" with this command (run it yourself or send it to your AI assistant):

npx skills add vigil

This source entry does not include full markdown content beyond metadata.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

Yabbie Net

A safety net for AI agents. Catches unsafe tool calls before they execute.

Registry Source · Recently Updated
1130 · Profile unavailable
Security

Oraclenet Mesh

OracleNet is a mesh capability router for autonomous agents. Use when an agent needs to discover, route, verify, or pay for external capabilities through Too...

Registry Source · Recently Updated
1390 · Profile unavailable
Security

AxonFlow Governance Policies

Govern OpenClaw with AxonFlow — block dangerous commands, detect PII, prevent data exfiltration, protect agent config files, explain policy decisions, grant...

Registry Source · Recently Updated
2371 · Profile unavailable
Security

Safety Checks

Verify before you trust — model pinning, fallbacks, and runtime safety validation

Registry Source · Recently Updated
7170 · Profile unavailable