z

Anti-skill-crawler defense system. Detects and mitigates unauthorized crawling, scraping, and bulk extraction of skill definitions, prompt content, and instruction sets.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "z" with this command: npx skills add wscats/z

🛡️ z — Anti-Skill-Crawler Defense

Detect and defend against unauthorized crawling, scraping, and bulk extraction of skill definitions and prompt instructions.


📋 Overview

| Property | Value |
| --- | --- |
| Name | z |
| Type | Passive Defense |
| Trigger | Anomalous skill-access patterns detected |
| Action | Detect → Alert operator → Log event |
| Scope | Read-only pattern analysis on request metadata |
| Autonomous | Disabled — operator must explicitly invoke |

🎯 What Is Skill Crawling?

Skill crawling refers to automated or semi-automated attempts to:

  • Bulk-extract skill definitions, SKILL.md files, and prompt instructions
  • Systematically enumerate available skills and their internal logic
  • Replay or mirror skill content into unauthorized environments
  • Reverse-engineer skill behavior through high-volume probing

z monitors for these patterns and alerts the operator when suspicious activity is detected.


🔍 Detection Engine

z uses passive, read-only analysis of request metadata to identify crawling behavior:

Detection Rules:
├── 📊 Rapid sequential skill-file access detection
├── 📊 Systematic enumeration pattern recognition
├── 📊 Abnormal skill-read frequency analysis
├── 📊 Repetitive prompt-extraction attempt detection
├── 📊 User-agent / session fingerprint anomaly detection
└── 📊 Bulk download timing correlation
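As a sketch, the first rule above (rapid sequential skill-file access) can be implemented with a sliding time window. The class and parameter names here (`RapidAccessRule`, `window_seconds`, `max_reads`) are illustrative assumptions, not part of z's documented API:

```python
from collections import deque


class RapidAccessRule:
    """Illustrative rule: flag a session when the number of skill-file
    reads inside a sliding time window exceeds a configurable ceiling."""

    def __init__(self, window_seconds: float = 60.0, max_reads: int = 30):
        self.window_seconds = window_seconds
        self.max_reads = max_reads
        self._timestamps: deque = deque()

    def observe(self, timestamp: float) -> bool:
        """Record one skill-file read; return True if the rule triggers."""
        self._timestamps.append(timestamp)
        # Evict reads that have fallen out of the window.
        while self._timestamps and timestamp - self._timestamps[0] > self.window_seconds:
            self._timestamps.popleft()
        return len(self._timestamps) > self.max_reads
```

A burst of reads trips the rule while the same number of reads spread out over time does not, which is what separates crawling from ordinary use.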

Detection Logic

class SkillCrawlerDetector:
    """
    Passive detector that analyzes request patterns to identify
    potential skill-crawling or prompt-scraping attempts.

    `RequestMetadata` and `DetectionResult` are data types supplied by
    the host runtime; the helper predicates (`_is_*`), `_send_alert`,
    and `_log_event` are implemented per deployment.

    Required permissions:
    - request_metadata_read: Read-only access to request pattern data
    - alert_send: Permission to notify the operator
    """

    # Indicators of crawling behavior
    INDICATORS = [
        "rapid_sequential_skill_access",
        "systematic_enumeration",
        "high_frequency_skill_reads",
        "repetitive_prompt_extraction",
        "session_fingerprint_anomaly",
        "bulk_download_timing",
    ]

    def __init__(self, threshold: float = 0.5):
        # Mirrors the `detection_threshold` configuration key.
        self.threshold = threshold

    def analyze(self, request_metadata: RequestMetadata) -> DetectionResult:
        """
        Analyze request metadata for skill-crawling indicators.
        This method is strictly read-only — no responses are modified.
        """
        triggered = []

        if self._is_rapid_sequential_access(request_metadata):
            triggered.append("rapid_sequential_skill_access")

        if self._is_systematic_enumeration(request_metadata):
            triggered.append("systematic_enumeration")

        if self._is_high_frequency_reads(request_metadata):
            triggered.append("high_frequency_skill_reads")

        if self._is_repetitive_extraction(request_metadata):
            triggered.append("repetitive_prompt_extraction")

        if self._is_fingerprint_anomaly(request_metadata):
            triggered.append("session_fingerprint_anomaly")

        if self._is_bulk_download_timing(request_metadata):
            triggered.append("bulk_download_timing")

        # Confidence is the fraction of indicators that fired.
        confidence = len(triggered) / len(self.INDICATORS)
        return DetectionResult(
            detected=confidence >= self.threshold,
            confidence=confidence,
            indicators=triggered,
            recommendation="Review access logs and take manual action if needed.",
        )

    def on_detection(self, result: DetectionResult) -> None:
        """Alert the operator. No automated countermeasures are taken."""
        if result.detected:
            self._send_alert(result)
            self._log_event(result)
📊 Alert Report Format

When suspicious crawling activity is detected, the operator receives:

{
  "alert_type": "skill_crawling_detected",
  "skill": "z",
  "timestamp": "2026-04-06T09:50:00Z",
  "confidence": 0.5,
  "indicators": [
    "rapid_sequential_skill_access",
    "systematic_enumeration",
    "high_frequency_skill_reads"
  ],
  "request_count": 320,
  "time_window_minutes": 5,
  "recommendation": "Review access logs. Consider rate-limiting or blocking the source.",
  "automated_action_taken": "none"
}

All countermeasures (rate-limiting, blocking, etc.) are left to the operator. z only detects and reports.
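A minimal sketch of assembling the report above; the field names mirror the documented format, but the `build_alert` helper itself is a hypothetical illustration, not part of the skill:

```python
import json
from datetime import datetime, timezone


def build_alert(confidence: float, indicators: list,
                request_count: int, window_minutes: int) -> dict:
    """Assemble the operator alert payload. Purely descriptive:
    no countermeasure is taken here or anywhere else in z."""
    return {
        "alert_type": "skill_crawling_detected",
        "skill": "z",
        "timestamp": datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"),
        "confidence": round(confidence, 2),
        "indicators": list(indicators),
        "request_count": request_count,
        "time_window_minutes": window_minutes,
        "recommendation": "Review access logs. Consider rate-limiting "
                          "or blocking the source.",
        "automated_action_taken": "none",  # always "none" by design
    }


# Serialize for whichever alert channel is configured (log, webhook, email):
# json.dumps(build_alert(0.5, ["systematic_enumeration"], 320, 5), indent=2)
```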


🔒 Permissions & Data Access

| Permission | Scope | Purpose |
| --- | --- | --- |
| request_metadata_read | Read-only | Analyze skill-access frequency, timing, and patterns |
| alert_send | Write (alerts only) | Send detection alerts to the operator |

Data NOT Accessed

  • ❌ Caller IP addresses or personal identity
  • ❌ Response content (responses are never read or modified)
  • ❌ Network telemetry or routing data
  • ❌ Model internals, weights, or logits
  • ❌ External APIs or third-party services

⚙️ Configuration

z:
  # Detection sensitivity (0.0 - 1.0)
  detection_threshold: 0.5

  # Time window for pattern analysis (minutes)
  analysis_window: 10

  # Minimum requests before analysis triggers
  min_request_count: 50

  # Alert configuration
  alerts:
    enabled: true
    channels: ["log"]       # Options: log, webhook, email
    cooldown_minutes: 15    # Cooldown between repeated alerts

  # Safety: these features are permanently disabled
  response_modification: false
  active_countermeasures: false
  caller_tracing: false
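The safety keys at the bottom are hard constraints rather than tunables. A loader-side check could enforce that; `validate_config` below is an illustrative sketch, not a documented API:

```python
def validate_config(cfg: dict) -> list:
    """Return a list of human-readable problems; empty means valid."""
    z = cfg.get("z", {})
    errors = []

    threshold = z.get("detection_threshold", 0.5)
    if not isinstance(threshold, (int, float)) or not 0.0 <= threshold <= 1.0:
        errors.append("detection_threshold must be within 0.0 - 1.0")

    if z.get("analysis_window", 10) <= 0:
        errors.append("analysis_window must be a positive number of minutes")

    if z.get("min_request_count", 50) < 1:
        errors.append("min_request_count must be >= 1")

    # These features are permanently disabled; any attempt to enable
    # one of them is a configuration error, not an option.
    for key in ("response_modification", "active_countermeasures", "caller_tracing"):
        if z.get(key, False) is not False:
            errors.append(f"{key} is permanently disabled and must stay false")

    return errors
```

Rejecting the configuration outright when a disabled feature is switched on keeps the "passive only" guarantee enforceable at load time rather than by convention.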

✅ Capabilities

✅ Passive skill-access pattern monitoring
✅ Crawling / scraping anomaly detection
✅ Configurable detection thresholds
✅ Structured alert reports to operator
✅ Audit logging of detection events
❌ Response modification (permanently disabled)
❌ Active countermeasures (permanently disabled)
❌ Caller identification / tracing (permanently disabled)
❌ Data poisoning (permanently disabled)
❌ Watermark or fingerprint embedding (permanently disabled)

📜 Operating Principles

  1. Passive Only — z observes and reports. It never modifies responses or takes active measures.
  2. Operator Control — All decisions about countermeasures are made by the human operator.
  3. Minimal Permissions — Only the permissions strictly necessary for detection and alerting are requested.
  4. Transparency — All detection logic and thresholds are documented and configurable.
  5. No Deception — z never produces false, misleading, or contradictory outputs.
  6. Compliance — Designed to comply with platform policies and applicable laws.

🎮 Usage Example

[Request Pattern]: 320 skill-file reads in 5 minutes from a single session
[z Detection Engine]: 📊 Analyzing access metadata...
[z Detection Engine]: ⚠️ Confidence 0.50 — Potential skill crawling detected
[Alert System]: 📧 Alert sent to operator
[Alert Report]:
  - Indicators: rapid_sequential_skill_access, systematic_enumeration, high_frequency_skill_reads
  - Recommendation: Review access logs and take manual action if needed
[Operator]: Reviews alert → applies rate-limiting (manual action)

z detects and reports. The operator decides and acts.


⚠️ Disclaimer

z is a passive monitoring tool designed to help operators detect potential skill-crawling and prompt-scraping attempts. It does not take any automated defensive or offensive actions. All countermeasures are at the operator's discretion and should comply with applicable laws, regulations, and platform policies.
