Prompt Guard

Detect and block prompt injection attempts in inputs by identifying suspicious patterns, preventing malicious instructions, and ensuring secure AI interactions.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "Prompt Guard" with this command: npx skills add aptratcn-prompt-guard

No markdown body

This source entry does not include full markdown content beyond metadata.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

Agentshield Audit

Trust Infrastructure for AI Agents - Like SSL/TLS for agent-to-agent communication. 77 security tests, cryptographic certificates, and Trust Handshake Protoc...

Registry SourceRecently Updated
1.1K0Profile unavailable
Security

AgentShield Scanner

Scan AI agent skills, MCP servers, and plugins for security vulnerabilities. Use when: user asks to check a skill/plugin for safety, audit security, scan for...

Registry SourceRecently Updated
2950Profile unavailable
Security

Deepsafe Scan

Preflight security scanner for AI coding agents — scans deployment config, skills/MCP servers, memory/sessions, and AI agent config files (hooks injection) f...

Registry SourceRecently Updated
3200Profile unavailable
Security

RedPincer — AI Red Team Suite

AI/LLM red team testing skill. Point at any LLM API endpoint and run automated security assessments. 160+ attack payloads across prompt injection, jailbreak,...

Registry SourceRecently Updated
6070Profile unavailable