ai-threat-testing

Test LLM applications for OWASP LLM Top 10 vulnerabilities using 10 specialized agents. Use for authorized AI security assessments.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy this and send it to your AI assistant to learn

Install skill "ai-threat-testing" with this command: npx skills add transilienceai/communitytools/transilienceai-communitytools-ai-threat-testing

AI Threat Testing

Test LLM applications for OWASP LLM Top 10 vulnerabilities using 10 specialized agents. Use for authorized AI security assessments.

Quick Start

  1. Specify target (LLM app URL, API endpoint, or local model)
  2. Select scope: Full OWASP Top 10 | Specific vulnerability | Supply chain
  3. Agents deploy, test, capture evidence
  4. Professional report with PoCs generated

Primary Agents

Each agent targets one OWASP LLM vulnerability:

  • Prompt Injection (LLM01): Direct/indirect injection, system prompt extraction

  • Output Handling (LLM02): Code/XSS injection, unsafe deserialization

  • Training Poisoning (LLM03): Membership inference, backdoors, data extraction

  • Resource Exhaustion (LLM04): Token flooding, DoS, cost impact

  • Supply Chain (LLM05): Dependency scanning, plugin security

  • Excessive Agency (LLM06): Privilege escalation, unauthorized actions

  • Model Extraction (LLM07): Query-based theft, data reconstruction

  • Vector Poisoning (LLM08): RAG injection, retrieval manipulation

  • Overreliance (LLM09): Hallucination testing, confidence manipulation

  • Logging Bypass (LLM10): Monitoring evasion, forensic gaps

See agents/llm0X-*.md for agent details.

Workflows

Full Assessment (4-8 hours):

  • Reconnaissance
  • Deploy all 10 agents
  • Execute exploits
  • Capture evidence
  • Generate report

Focused Testing (1-3 hours):

  • Select vulnerability (LLM01-10)
  • Deploy agent
  • Execute techniques
  • Document findings

Supply Chain Audit (2-4 hours):

  • Inventory dependencies
  • Scan CVEs
  • Test plugins/APIs
  • Verify model provenance

Integration

Enhances /pentest with AI-specific testing:

  • Traditional pentesting + AI threat testing = complete security assessment

  • Chain vulnerabilities across traditional and AI vectors

  • Unified reporting with CVSS scores

Key Techniques

Prompt Injection: Instruction override, system prompt extraction, filter evasion Model Extraction: Query sampling, token analysis, membership inference Data Poisoning: Behavioral anomalies, backdoor triggers, bias analysis DoS: Token flooding, recursive expansion, context exhaustion Supply Chain: CVE scanning, plugin audit, model verification

Evidence Capture

All agents collect: screenshots, network logs, API responses, errors, console output, execution metrics.

Reporting

Automated reports include: executive summary, detailed findings (CVSS scores), PoC scripts, evidence, remediation guidance.

Critical Rules

  • Written authorization REQUIRED before testing

  • Never exceed defined scope

  • Test in isolated environments when possible

  • Document all findings with reproducible PoCs

  • Follow responsible disclosure practices

Integration

  • Integrates with /pentest skill for comprehensive security testing

  • AI-specific vulnerability knowledge in /AGENTS.md

  • Agent definitions in agents/llm0X-*.md

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Security

security-posture-analyzer

No summary provided by upstream source.

Repository SourceNeeds Review
General

domain-assessment

No summary provided by upstream source.

Repository SourceNeeds Review
General

pentest

No summary provided by upstream source.

Repository SourceNeeds Review
General

hackerone

No summary provided by upstream source.

Repository SourceNeeds Review