kami-conflict-detection

Detect physical conflicts (fighting, shoving, scuffling) between 2+ people from RTSP camera streams or local video files. Event-driven mode: the script exits immediately upon detecting a conflict (exit code 10), outputting alert JSON to stdout. OpenClaw reads the alert, reports to the user in chat, then automatically restarts the script for continuous monitoring.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the following and send it to your AI assistant to install this skill:

Install skill "kami-conflict-detection" with this command: npx skills add 13681882136/kami-conflict-detection

Kami Conflict Detection

Detect physical conflicts (fighting, shoving, scuffling) between 2+ people from RTSP camera streams or local video files. Uses an event-driven architecture where OpenClaw schedules the script in a loop for continuous real-time monitoring.

Execution Architecture (Event-Driven Loop)

┌──────────────────────────────────────────────────────────┐
│                   OpenClaw Scheduling Loop                │
│                                                          │
│  1. OpenClaw starts the script                           │
│     └─ .venv/bin/python conflict_detector_last.py        │
│                                                          │
│  2. Script continuously monitors video stream            │
│     ├─ YOLO: count persons in frame                      │
│     ├─ >= min_persons? Collect multiple frames           │
│     ├─ Send frames to LLM API for conflict analysis      │
│     └─ Conflict detected?                                │
│          ├─ No  → continue monitoring                    │
│          └─ Yes → save video clip, output JSON, exit(10) │
│                                                          │
│  3. OpenClaw reads stdout alert JSON                     │
│     └─ Reports detection result to user in chat window   │
│                                                          │
│  4. OpenClaw automatically restarts script → back to 1   │
└──────────────────────────────────────────────────────────┘

Detection Pipeline

  1. YOLO pre-filter — lightweight person detection to count people in frame (must be >= 2)
  2. Multi-frame collection — collect N frames with configurable time gap
  3. LLM conflict analysis — send frames to Kami detection API for violence/conflict judgment
  4. Event-triggered exit — on conflict detection, save video clip, output alert JSON, exit with code 10
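The four pipeline stages above can be sketched as plain control flow. This is an illustrative sketch, not the actual `conflict_detector_last.py`: the frame-grab, YOLO, and LLM stages are injected as callables (hypothetical names) so the loop logic reads on its own.

```python
import time

def detection_round(grab_frame, count_persons, analyze_conflict,
                    min_persons=2, sample_interval=1.0,
                    multi_frame_count=3, multi_frame_gap=0.5,
                    max_rounds=None, sleep=time.sleep):
    """Sketch of the YOLO-prefilter -> multi-frame -> LLM pipeline.

    grab_frame, count_persons, and analyze_conflict are injected stubs,
    so the control flow can be exercised without a camera or API key.
    Returns the analysis verdict dict on conflict, or None on timeout.
    """
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        rounds += 1
        frame = grab_frame()
        if count_persons(frame) < min_persons:
            sleep(sample_interval)          # too few people: keep sampling
            continue
        frames = [frame]
        for _ in range(multi_frame_count - 1):
            sleep(multi_frame_gap)          # spread the frames over time
            frames.append(grab_frame())
        verdict = analyze_conflict(frames)  # LLM judgment on the frame set
        if verdict and verdict.get("conflict"):
            return verdict                  # caller saves clip, prints JSON, exit(10)
        sleep(sample_interval)
    return None
```

With `sleep` stubbed out, the function can be unit-tested offline before pointing it at a live stream.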

When to Use

Use this skill when the user wants to:

  • Monitor a camera feed for physical fights or scuffles
  • Detect shoving, pushing, or violent behavior between people
  • Run conflict detection on a local video file for testing
  • Set up automated surveillance alerts for physical altercations

Installation

bash setup.sh

This will:

  1. Detect system Python, create .venv/ virtual environment
  2. Install dependencies: onnxruntime, opencv-python-headless, numpy, requests
  3. Create alerts/ output directory

Idempotent — safe to run repeatedly.

Prerequisites

  • python3 and python3-venv installed on the system
  • yolov8s-worldv2.onnx model file in the skill directory
  • RTSP camera online and network-reachable, OR a local video file for testing
  • Kami API key (via --kami_api_key or environment variable KAMI_API_KEY). If you don't have one yet, register and obtain a key at: https://kamiclaw-skill.kamihome.com
  • setup.sh has been run at least once

Parameter Confirmation

Before running this skill, confirm the following parameters with the user:

| Parameter | Default | Description |
|---|---|---|
| `--rtsp_url` | (required) | RTSP camera URL or local video file path |
| `--kami_api_key` | (required) | Kami API key (also via env `KAMI_API_KEY`). Register at https://kamiclaw-skill.kamihome.com if you don't have one. |
| `--yolo_model` | `yolov8s-worldv2.onnx` | YOLO model file path |
| `--conf_threshold` | 0.25 | YOLO confidence threshold (0.0-1.0) |
| `--min_persons` | 2 | Minimum person count to trigger LLM analysis |
| `--sample_interval` | 1.0 | How often (seconds) to run the YOLO pre-filter |
| `--multi_frame_count` | 3 | Number of frames to collect for LLM analysis |
| `--multi_frame_gap` | 0.5 | Time gap (seconds) between collected frames |
| `--buffer_seconds` | 30 | Ring buffer duration (seconds) for video clip export |
| `--clip_before` | 5 | Seconds of video to include before the conflict |
| `--clip_after` | 5 | Seconds of video to include after the conflict |
| `--output_dir` | `alerts/` | Directory for saved video clips |
| `--run_time` | 0 | Max single-round run time in seconds; 0 = unlimited |
| `--fps` | 15 | Video stream frame rate |
| `--inbox_file` | `alerts/pending.jsonl` | Alarm inbox file consumed by the heartbeat task to push alarms into the chat window |
| `--feishu_webhook` | (env `FEISHU_WEBHOOK_URL`) | Feishu custom bot webhook URL; alarms are pushed directly to the user's phone |
| `--feishu_secret` | (env `FEISHU_WEBHOOK_SECRET`) | Feishu webhook signing secret (only if the bot has signing enabled) |

Ask the user: do any parameters need to be changed?

Feishu push setup

Create a Feishu custom bot in the target group chat, copy its webhook URL, then either:

export FEISHU_WEBHOOK_URL="https://open.feishu.cn/open-apis/bot/v2/hook/xxxxxxxx"
# Optional, only if the bot has signature verification enabled:
export FEISHU_WEBHOOK_SECRET="your_secret_here"

Or pass --feishu_webhook / --feishu_secret on the CLI. When set, every conflict alarm is POSTed to Feishu as an interactive card (title / description / timestamp / clip duration / clip path). If the env var is empty, the Feishu channel is silently skipped — detection still works.
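For reference, a minimal push helper might look like the sketch below. The signing scheme (HMAC-SHA256 with `"{timestamp}\n{secret}"` as the key, an empty message, and a base64-encoded digest) follows Feishu's published custom-bot signing docs; the function names and the plain-text payload are illustrative, not this skill's actual code (which sends an interactive card).

```python
import base64
import hashlib
import hmac
import json
import time
import urllib.request

def feishu_sign(timestamp, secret):
    # Per Feishu's custom-bot docs: "{timestamp}\n{secret}" is the HMAC key,
    # the message is empty, and the SHA-256 digest is base64-encoded.
    string_to_sign = "%d\n%s" % (timestamp, secret)
    digest = hmac.new(string_to_sign.encode("utf-8"),
                      digestmod=hashlib.sha256).digest()
    return base64.b64encode(digest).decode("utf-8")

def push_feishu_text(webhook_url, text, secret=""):
    """Illustrative POST of a plain-text alarm to a Feishu custom bot."""
    payload = {"msg_type": "text", "content": {"text": text}}
    if secret:  # only needed when signature verification is enabled
        ts = int(time.time())
        payload["timestamp"] = str(ts)
        payload["sign"] = feishu_sign(ts, secret)
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req, timeout=10)
```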

Alarm Delivery Channels (Tri-channel)

Alarms are delivered through three independent, redundant channels so the user is notified even if one path fails:

| # | Channel | Trigger | Consumer | Guarantee |
|---|---|---|---|---|
| 1 | stdout JSON + exit(10) | On every conflict | OpenClaw reads stdout, reports in the chat window, then restarts the script | Primary chat push (OpenClaw loop) |
| 2 | Inbox file `alerts/pending.jsonl` | On every conflict (appended) | Heartbeat task in `space/HEARTBEAT.md` polls the file and proactively pushes unreported alarms into chat | Fallback chat push (survives OpenClaw glitches) |
| 3 | Feishu webhook (interactive card) | On every conflict (if `FEISHU_WEBHOOK_URL` is set) | User's phone via Feishu bot | Instant phone push, regardless of LLM / chat state |

Channels 2 and 3 mirror the design already proven in kami-suspicious-person. Channel 1 remains the canonical driver of the OpenClaw detect-report-restart loop.

OpenClaw Execution Instructions

OpenClaw should execute this skill using the following loop logic:

Step 1: Start Detection

.venv/bin/python conflict_detector_last.py \
  --rtsp_url <USER_RTSP_URL> \
  --kami_api_key <API_KEY> \
  [other params]

Step 2: Handle Exit Code

| Exit Code | Meaning | OpenClaw Action |
|---|---|---|
| 0 | Normal exit (video ended, timeout, no event) | Report "monitoring session ended normally" to the user. Optionally restart. |
| 10 | Event detected; conflict alert JSON on stdout | Parse stdout JSON, report alert details to the user in chat, then immediately restart the script. |
| 1 | Runtime error | Report the error to the user; check logs at `conflict_detector.log`. |

Step 3: Report to User

When exit code is 10, parse the stdout JSON and report to the user like:

🥊 Physical Conflict Alert
A physical conflict has been detected: Two people are engaged in a physical altercation. Video clip saved to alerts/conflict_20250115_143022.mp4 (10s). Please review and take appropriate action.

Restarting monitoring...

Step 4: Restart

After reporting, immediately restart the script with the same parameters to continue monitoring.

Complete Loop Example

OpenClaw:
  1. Run: .venv/bin/python conflict_detector_last.py --rtsp_url rtsp://... --kami_api_key ...
  2. Wait for process to exit
  3. If exit_code == 10:
       - Read stdout JSON
       - Display alert message to user in chat
       - Go to step 1 (restart)
     If exit_code == 0:
       - Inform user: "Monitoring session ended normally, no conflicts detected"
       - Ask user if they want to restart
     If exit_code == 1:
       - Report error, show log content

Usage

# Initialize environment (first time only)
bash setup.sh

# Run with RTSP stream
.venv/bin/python conflict_detector_last.py \
  --rtsp_url rtsp://127.0.0.1/live/YOUR-STREAM-ID \
  --kami_api_key YOUR-API-KEY

# Run with local video file (for testing)
.venv/bin/python conflict_detector_last.py \
  --rtsp_url /path/to/test_video.mp4 \
  --kami_api_key YOUR-API-KEY

# Custom parameters
.venv/bin/python conflict_detector_last.py \
  --rtsp_url rtsp://127.0.0.1/live/YOUR-STREAM-ID \
  --kami_api_key YOUR-API-KEY \
  --min_persons 2 \
  --clip_before 10 \
  --clip_after 10 \
  --run_time 3600

Output Format (stdout JSON)

When a conflict is detected (exit code 10), stdout contains:

{
  "alert": "conflict_detected",
  "timestamp": "2025-01-15T14:30:22.123456",
  "description": "Two people are engaged in a physical altercation",
  "video_clip": "alerts/conflict_20250115_143022.mp4",
  "clip_duration": "10s",
  "message": "Warning: Physical conflict detected. Two people are engaged in a physical altercation. Video clip saved to alerts/conflict_20250115_143022.mp4. Please review and take appropriate action."
}

| Field | Type | Description |
|---|---|---|
| `alert` | string | Always `"conflict_detected"` |
| `timestamp` | string | ISO format timestamp of the alert |
| `description` | string | LLM-generated description of the conflict |
| `video_clip` | string | Path to the saved video clip |
| `clip_duration` | string | Total duration of the saved clip |
| `message` | string | Pre-formatted alert message for direct display |
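A consumer can sanity-check the payload before displaying it. A minimal sketch follows; `parse_alert` is a hypothetical helper, not part of the shipped script.

```python
import json

# The six fields documented for the exit-code-10 stdout payload.
REQUIRED_FIELDS = {"alert", "timestamp", "description",
                   "video_clip", "clip_duration", "message"}

def parse_alert(stdout):
    """Parse the detector's stdout JSON and verify the documented shape."""
    alert = json.loads(stdout)
    missing = REQUIRED_FIELDS - alert.keys()
    if missing:
        raise ValueError("alert JSON missing fields: %s" % sorted(missing))
    if alert["alert"] != "conflict_detected":
        raise ValueError("unexpected alert type: %r" % alert["alert"])
    return alert
```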

Exit Codes

| Code | Meaning |
|---|---|
| 0 | Normal exit (run time exceeded, video ended, user interrupt, no event detected) |
| 10 | Event detected; conflict alert JSON output on stdout |
| 1 | Runtime error (model not found, stream failure, missing API key) |

Architecture

RTSP/File → FrameGrabber thread (ring buffer + latest frame)
                ├─ All frames → ring buffer (last 30s)
                └─ Latest frame → main thread (every 1s)
                      ├─ YOLO: person count >= 2?
                      │    ├─ No → skip, clear frame buffer
                      │    └─ Yes → collect N frames (gap=0.5s)
                      │              → LLM conflict analysis
                      │              ├─ No conflict → continue
                      │              └─ Conflict detected:
                      │                   → wait clip_after seconds
                      │                   → export video clip from ring buffer
                      │                   → output JSON to stdout
                      │                   → exit(10)
                      └─ Timeout → exit(0)

OpenClaw drives the loop:
  run script → wait for exit → parse stdout → report to user → re-run script
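The ring buffer in the diagram can be approximated with `collections.deque`. This is a sketch assuming the documented defaults (`--fps 15`, `--buffer_seconds 30`); `FrameRingBuffer` is an illustrative name, not the script's actual class.

```python
import collections

class FrameRingBuffer:
    """Keep the last buffer_seconds of frames so a clip spanning
    clip_before..clip_after around the event can be exported later."""

    def __init__(self, fps=15, buffer_seconds=30):
        self.fps = fps
        self.frames = collections.deque(maxlen=fps * buffer_seconds)

    def push(self, frame):
        self.frames.append(frame)  # oldest frame drops automatically

    def clip(self, clip_before=5, clip_after=5):
        # After waiting clip_after seconds past the event, the newest
        # (clip_before + clip_after) * fps frames form the export window.
        n = (clip_before + clip_after) * self.fps
        return list(self.frames)[-n:]
```

With the defaults the buffer holds 450 frames, and a default clip export covers the newest 150.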

Launch Checklist (Tri-channel Active Push)

This skill pushes every alarm through three redundant channels (see Alarm Delivery Channels above): stdout+exit(10), inbox file, and Feishu webhook. Before launching the detector, the agent MUST verify:

  • FEISHU_WEBHOOK_URL env var is set (or --feishu_webhook is passed). If not, warn the user that phone push is disabled — chat-window push still works via stdout+exit(10) and the inbox file.
  • alerts/ directory is writable so pending.jsonl can be appended.
  • space/HEARTBEAT.md still contains the conflict-detection inbox task; on every heartbeat, read kami-conflict-detection/alerts/pending.jsonl, if non-empty send a proactive summary to the user then MOVE (not delete) the file to alerts/consumed/<timestamp>.jsonl. If empty, reply HEARTBEAT_OK.
  • Previous run's alerts/pending.jsonl has been consumed (or archived) to avoid re-reporting stale alarms.

Never rely on tailing stdout to report alarms — the agent is only invoked on user messages / heartbeats, so real-time stdout watching is impossible.
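The heartbeat consumption rule in the checklist can be sketched as follows; `consume_inbox` is a hypothetical helper mirroring the described behavior: read `pending.jsonl`, return the alarms to report, and move (never delete) the file into `alerts/consumed/`.

```python
import datetime
import json
import pathlib
import shutil

def consume_inbox(alerts_dir="alerts"):
    """One heartbeat step over the alarm inbox.

    Returns the list of pending alarm dicts (empty -> reply HEARTBEAT_OK)
    and archives the consumed file under alerts/consumed/<timestamp>.jsonl.
    """
    pending = pathlib.Path(alerts_dir) / "pending.jsonl"
    if not pending.exists() or pending.stat().st_size == 0:
        return []  # nothing to report
    alarms = [json.loads(line)
              for line in pending.read_text().splitlines() if line.strip()]
    consumed = pathlib.Path(alerts_dir) / "consumed"
    consumed.mkdir(parents=True, exist_ok=True)
    stamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
    shutil.move(str(pending), str(consumed / ("%s.jsonl" % stamp)))
    return alarms
```

Moving rather than deleting keeps an audit trail and makes a second heartbeat in the same cycle a no-op, which prevents re-reporting stale alarms.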

Strict Rules (MUST Follow)

  • RULE: Alarms flow via (a) stdout+exit(10) → OpenClaw → chat, (b) inbox file → heartbeat → chat, (c) Feishu webhook → user's phone. Never rely solely on one channel.
  • RULE: Every heartbeat consumes alerts/pending.jsonl; non-empty → proactive chat summary; empty → HEARTBEAT_OK.
  • RULE: Consumed alarms are MOVED to alerts/consumed/, not deleted.
  • RULE: Verify FEISHU_WEBHOOK_URL before launch; warn the user if missing (phone push disabled, chat push still active).
  • RULE: On exit code 10, OpenClaw MUST restart the script immediately to continue monitoring.

Troubleshooting

Virtual environment not found → Run bash setup.sh

Model file missing → Place yolov8s-worldv2.onnx in the skill directory

RTSP connection failure → Verify camera is online, check --rtsp_url, confirm network connectivity

LLM API failure → Check --kami_api_key is correct, or set via export KAMI_API_KEY=your-key. If you don't have a key, register at https://kamiclaw-skill.kamihome.com. Verify network access to the detection API endpoint.

No alerts generated → Check conflict_detector.log for details. Common causes: fewer than 2 people in frame, confidence threshold too high (try lowering --conf_threshold).

Script exits immediately with code 1 → Check log for error details. Common causes: model file missing, RTSP URL unreachable, API key not set.

OpenClaw restart loop too fast → If the script keeps detecting events immediately on restart, consider increasing --sample_interval or adding a brief delay between restarts.

