ai-agent-security

Protect agentic systems from adversarial input and unsafe tool execution.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "ai-agent-security" with this command: npx skills add bagelhole/devops-security-agent-skills/bagelhole-devops-security-agent-skills-ai-agent-security

AI Agent Security

Protect agentic systems from adversarial input and unsafe tool execution.

Threats to Model

  • Prompt injection through untrusted content

  • Excessive permissions on tools and APIs

  • Data exfiltration via model responses

  • Cross-tenant context leakage

Security Controls

  • Isolate tool execution with strict allowlists.

  • Add policy checks before sensitive actions.

  • Limit token scope and credential lifetimes.

  • Apply output filtering for sensitive data.

  • Log every privileged tool invocation.

Incident Readiness

  • Keep immutable audit trails for prompts and tool calls.

  • Build kill switches for high-risk tools.

  • Run regular red-team scenarios.

Related Skills

  • llm-app-security - Application-layer LLM defenses

  • threat-modeling - Structured risk analysis

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

linux-administration

No summary provided by upstream source.

Repository SourceNeeds Review
Security

sops-encryption

No summary provided by upstream source.

Repository SourceNeeds Review
Security

linux-hardening

No summary provided by upstream source.

Repository SourceNeeds Review
Security

windows-server

No summary provided by upstream source.

Repository SourceNeeds Review