owasp-llm-top-10

OWASP Top 10 for LLM Applications - prevention, detection, and remediation for LLM and GenAI security. Use when building or reviewing LLM apps: prompt injection, sensitive information disclosure, training data and supply chain, data and model poisoning, improper output handling, excessive agency, system prompt leakage, vector and embedding weaknesses, misinformation, and unbounded consumption.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "owasp-llm-top-10" with this command: npx skills add yariv1025/skills/yariv1025-skills-owasp-llm-top-10

OWASP Top 10 for LLM Applications

This skill encodes the OWASP Top 10 for Large Language Model Applications for secure LLM/GenAI design and review. References are loaded per risk. Based on OWASP Top 10 for LLM Applications 2025.

When to Read Which Reference

| Risk | Read |
| --- | --- |
| LLM01 Prompt Injection | references/llm01-prompt-injection.md |
| LLM02 Sensitive Information Disclosure | references/llm02-sensitive-information-disclosure.md |
| LLM03 Training Data & Supply Chain | references/llm03-training-data-supply-chain.md |
| LLM04 Data and Model Poisoning | references/llm04-data-model-poisoning.md |
| LLM05 Improper Output Handling | references/llm05-improper-output-handling.md |
| LLM06 Excessive Agency | references/llm06-excessive-agency.md |
| LLM07 System Prompt Leakage | references/llm07-system-prompt-leakage.md |
| LLM08 Vector and Embedding Weaknesses | references/llm08-vector-embedding-weaknesses.md |
| LLM09 Misinformation | references/llm09-misinformation.md |
| LLM10 Unbounded Consumption | references/llm10-unbounded-consumption.md |

Quick Patterns

  • Treat all user and external input as untrusted; validate and sanitize LLM outputs before use (XSS, SSRF, RCE).
  • Limit agency and tool use; protect system prompts and RAG data.
  • Apply rate limits and cost controls.
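One concrete instance of the "treat LLM output as untrusted" pattern is SSRF prevention: if the model emits a URL that the application will fetch, check it against an allowlist first. This is a minimal sketch; the host names are illustrative, not part of any real deployment.

```python
from urllib.parse import urlparse

# Hosts the application is allowed to fetch from (illustrative values)
ALLOWED_HOSTS = {"api.example.com", "docs.example.com"}

def is_safe_url(url: str) -> bool:
    """SSRF guard: reject model-produced URLs outside the allowlist."""
    parsed = urlparse(url)
    return parsed.scheme in ("http", "https") and parsed.hostname in ALLOWED_HOSTS
```

An allowlist is preferred over a blocklist here: blocklists miss encodings and internal addresses (e.g. cloud metadata endpoints), while an allowlist fails closed.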

Quick Reference / Examples

| Task | Approach |
| --- | --- |
| Prevent prompt injection | Use delimiters, validate input, separate system/user context. See LLM01. |
| Protect sensitive data | Filter PII from training/prompts, apply output guards. See LLM02. |
| Validate LLM output | Sanitize before rendering (XSS) or executing (RCE). See LLM05. |
| Limit agency | Require human approval for destructive actions; scope tool permissions. See LLM06. |
| Control costs | Apply token limits, rate limiting, and budget caps. See LLM10. |

Safe - delimiter and input validation:

# sanitized_user_input: user text after your own validation/escaping step
system_prompt = f"""You are a helpful assistant.
<user_input>
{sanitized_user_input}
</user_input>
Answer based only on the user input above."""

Unsafe - direct concatenation (injection risk):

prompt = f"Answer this question: {user_input}"  # User can inject instructions

Output sanitization before rendering:

import html
safe_output = html.escape(llm_response)  # Prevent XSS if rendering in browser
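The "limit agency" row above (LLM06) can be sketched as a tool dispatcher that runs read-only tools directly and routes destructive ones through a human confirmation callback. The tool names and the `confirm` callback are assumptions for illustration, not part of any specific agent framework.

```python
# Illustrative tool gating: tool names and confirm callback are assumptions
READ_ONLY_TOOLS = {"search_docs", "get_weather"}
DESTRUCTIVE_TOOLS = {"delete_file", "send_email"}

def dispatch_tool(name, args, tools, confirm):
    """Run read-only tools directly; destructive tools need human approval."""
    if name in READ_ONLY_TOOLS:
        return tools[name](**args)
    if name in DESTRUCTIVE_TOOLS:
        if not confirm(name, args):  # human-in-the-loop gate
            return "Action declined by user"
        return tools[name](**args)
    # Deny by default: anything not explicitly scoped is rejected
    raise ValueError(f"Unknown or unscoped tool: {name}")
```

Denying unknown tools by default keeps the agent's permissions scoped to an explicit list rather than whatever the model happens to request.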
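The "control costs" row (LLM10) can likewise be sketched as a per-user token budget over a sliding window, checked before each model call. The limits and window size here are illustrative, not a specific provider's API.

```python
import time
from collections import defaultdict

class TokenBudget:
    """Illustrative per-user token budget over a sliding time window."""

    def __init__(self, max_tokens=10_000, window_s=3600):
        self.max_tokens = max_tokens
        self.window_s = window_s
        self.usage = defaultdict(list)  # user -> [(timestamp, tokens)]

    def allow(self, user: str, tokens: int) -> bool:
        """Return True and record usage if the request fits the budget."""
        now = time.monotonic()
        # Drop entries outside the window, then check the remaining budget
        self.usage[user] = [(t, n) for t, n in self.usage[user]
                            if now - t < self.window_s]
        spent = sum(n for _, n in self.usage[user])
        if spent + tokens > self.max_tokens:
            return False
        self.usage[user].append((now, tokens))
        return True
```

Checking `allow(user, estimated_tokens)` before each call bounds worst-case spend per user; combine with request rate limiting and a global budget cap for defense in depth.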

Workflow

Load the reference for the risk you are addressing. See OWASP Top 10 for LLM Applications and genai.owasp.org for the official list.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. No summaries are provided by the upstream source; each entry is marked "Needs Review".

  • owasp-api-security-top-10 (Security)
  • agent-dev-guardrails (Coding)
  • owasp-iot-top-10 (General)
  • owasp-mobile-top-10 (General)