video-edit-runcomfy

Video editing on RunComfy. This skill transforms an existing video clip — restyle, background swap, outfit swap, motion transfer, color grade, or any other edit task — by routing the request to the right model in the RunComfy catalog. It supports talking-head, product, and short-form video at up to 1080p, and calls `runcomfy run <model>/edit-video` through the local RunComfy CLI. Triggers on "video edit", "edit video", "video editing", "video-edit", "restyle video", "swap video background", "video outfit swap", "video color grade", or any explicit ask to edit a video.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "video-edit-runcomfy" with this command: npx skills add kalvinrv/video-edit-runcomfy

🫧 Video Edit — Pro Pack on RunComfy

runcomfy.com · docs · Video edit models

This skill is the canonical video-edit entry point for the RunComfy Model API: give it a source video URL and an edit instruction, and it returns the edited video. Editing here means transforming an existing clip — restyle, background swap, outfit swap, motion transfer, color grade — without re-shooting.

What "video edit" means here

Video editing takes a source video and produces a transformed video that preserves identity, motion, or framing where you want it kept, while changing what you specify. It is distinct from text-to-video (no input clip) and from image-to-video (the input is a still). Common operations include:

  • Restyle — change look, lighting, and atmosphere while keeping the subject and motion.
  • Background swap — replace the background of a talking-head or product video while preserving foreground identity.
  • Outfit swap — change wardrobe on the subject while keeping face, pose, and motion stable.
  • Motion transfer — transfer motion from a reference clip onto a target character.
  • Color grade — apply cinematic color, film grain, or commercial polish to an existing clip.
  • Packaging swap — replace product packaging design using a reference image, preserving the camera motion.

This skill picks the right video edit endpoint for the user's intent and calls runcomfy run <model>/<edit-endpoint> with the matching schema.

When to use video edit on RunComfy

Pick video edit on RunComfy whenever:

  • You have an existing video and want to change something about it — editing is the right task.
  • You need identity stability — the subject, brand, or product from the input clip must survive into the edited video.
  • You want fast iteration — RunComfy hosts the GPU; you don't deploy or rent.
  • You're producing at scale — multi-language dubs, A/B variants, batch jobs across SKUs.

If the user said "video edit", "edit video", "restyle this video", "swap the background", "change the outfit", "transfer this motion", "color grade this clip", or showed a video and asked to transform it — route here.
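The batch-at-scale point above can be scripted as a plain loop. A minimal sketch: the SKU names and URLs are placeholders, and the real `runcomfy` call is commented out.

```shell
# Queue one background-swap edit per SKU (illustrative; real call commented out).
queued=""
for sku in sku-101 sku-102 sku-103; do
  queued="$queued out/$sku"
  # runcomfy run wan-ai/wan-2-7/edit-video \
  #   --input "{\"prompt\": \"Preserve product identity; swap background to white studio.\", \"video\": \"https://example.com/$sku.mp4\"}" \
  #   --output-dir "out/$sku"
done
echo "queued:$queued"
```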

Video edit routes

| User intent | Video edit model | Why |
|---|---|---|
| Default edit — restyle, background swap, color grade, packaging swap | wan-ai/wan-2-7/edit-video | Most versatile edit model; identity + motion preservation, up to 1080p output |
| Motion transfer (transfer motion from a reference clip) | kling/kling-2-6/motion-control-pro | Designed for motion-mapping edits with identity hold |
| Lightweight outfit swap / atmospheric restyle | decart/lucy-edit/restyle | Fastest pass for localized style changes; 720p |

The agent reads this table, classifies the user's video edit intent, and picks the matching endpoint.
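The routing above can be sketched as a crude keyword classifier. Illustrative only: the real skill interprets the user's full intent, and these patterns are assumptions, not the skill's actual logic.

```shell
# Map a free-text edit request to one of the three endpoints (keyword match).
route_edit() {
  case "$1" in
    *motion*)            echo "kling/kling-2-6/motion-control-pro" ;;  # motion transfer
    *outfit*|*relight*)  echo "decart/lucy-edit/restyle" ;;            # lightweight restyle
    *)                   echo "wan-ai/wan-2-7/edit-video" ;;           # default edit
  esac
}
```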

Prerequisites

  1. RunComfy CLI — `npm i -g @runcomfy/cli`
  2. RunComfy account — `runcomfy login`.
  3. CI / containers — set `RUNCOMFY_TOKEN=<token>`.
  4. A source video URL — formats and limits depend on the chosen route.

Default video edit — Wan 2.7 Edit-Video

The default endpoint. Use it for any general edit task: restyle a talking-head video, swap a product background, replace packaging design with a reference image, apply a cinematic color grade. Up to 1080p output.

Schema

| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| prompt | string | yes | — | Edit instruction. Lead with preservation goals, then state the change. |
| video | string | yes | — | Source video URL. MP4/MOV, 2–10s, ≤100MB. |
| reference_image | string | no | — | Optional reference for design-transfer edits (e.g. packaging swap). |
| resolution | enum | no | (input) | 720p or 1080p for the output. |
| aspect_ratio | enum | no | (input) | W:H. Defaults to source video aspect. |
| duration | int | no | 0 | 0 = match input; 2–10 truncates from the start. |
| audio_setting | enum | no | auto | auto regenerates audio; origin preserves source audio in the output. |
| seed | int | no | — | Reproducibility for variants. |

Invoke

Background swap video edit, identity preserved, audio kept:

runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "prompt": "Preserve the speaker'\''s face, pose, and lip movement; change the background to a modern office with neutral lighting.",
    "video": "https://.../speaker.mp4",
    "audio_setting": "origin"
  }' \
  --output-dir <absolute/path>

Packaging-swap video edit with reference image:

runcomfy run wan-ai/wan-2-7/edit-video \
  --input '{
    "prompt": "Maintain the original framing and hand movement; replace the packaging design using the reference image.",
    "video": "https://.../hand-holding-package.mp4",
    "reference_image": "https://.../new-packaging.png",
    "audio_setting": "origin"
  }' \
  --output-dir <absolute/path>
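For reproducible A/B variants, the `seed` field from the schema above can be swept in a loop. A sketch: the URL is a placeholder and the call itself is commented out.

```shell
# Build one input per seed; identical prompt, reproducible variation.
count=0
for seed in 11 12 13; do
  input="{\"prompt\": \"Preserve framing; apply a cinematic teal-and-orange grade.\", \"video\": \"https://example.com/product.mp4\", \"seed\": $seed}"
  count=$((count + 1))
  # runcomfy run wan-ai/wan-2-7/edit-video --input "$input" --output-dir "out/seed-$seed"
done
```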

Motion-transfer video edit — Kling 2.6 Pro Motion Control

Use when the edit transfers motion from a reference clip onto a target character. This isn't a restyle — it's motion mapping with identity hold.

| Field | Type | Required | Notes |
|---|---|---|---|
| prompt | string | yes | Describe the target motion / style for the output. |
| image | string | yes (image orientation) | Reference for character / background consistency. |
| video | string | yes | Motion-reference clip. 10–30s depending on orientation. |
| keep_original_sound | bool | no | Preserve audio from the reference input. |
| character_orientation | enum | yes | image (max 10s output) or video (max 30s). |
runcomfy run kling/kling-2-6/motion-control-pro \
  --input '{
    "prompt": "A young american woman dancing",
    "image": "https://.../target-character.jpg",
    "video": "https://.../motion-reference-dance.mp4",
    "character_orientation": "image",
    "keep_original_sound": true
  }' \
  --output-dir <absolute/path>

Lightweight video edit — Lucy Edit Restyle

Use when the edit is a localized style modification — outfit swap, scene relight, atmospheric restyle — and identity preservation is critical. Faster and cheaper than Wan 2.7 Edit-Video; capped at 720p.

| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| prompt | string | yes | — | Natural-language edit instruction. |
| video_url | string | yes | — | MP4/MOV/WEBM/GIF source. |
| resolution | enum | no | 720p | 720p only for this tier. |

Outfit-swap video edit:

runcomfy run decart/lucy-edit/restyle \
  --input '{
    "prompt": "Change outfit to professional business attire; preserve face and motion.",
    "video_url": "https://.../subject-walking.mp4"
  }' \
  --output-dir <absolute/path>

Atmospheric video edit:

runcomfy run decart/lucy-edit/restyle \
  --input '{
    "prompt": "Make lighting warm and golden hour; preserve face, pose, and motion.",
    "video_url": "https://.../subject-portrait.mp4"
  }' \
  --output-dir <absolute/path>

Prompting video edit — what works

Video edit prompts behave differently from text-to-video prompts. The source clip already fixes most of the look — your prompt should drive the change, not redescribe the video.

  • Lead with preservation goals. "Preserve [face / pose / motion / framing / lip movement]; [then state the change]". Tell the model what NOT to change.
  • One edit direction per call. Compound edits drift on motion. Pick one bucket — restyle OR background OR outfit OR color — per call.
  • Use reference_image only when the edit needs an exact visual (packaging swap, costume swap matching a target). Don't pass refs for a general restyle.
  • audio_setting: "origin" for talking-head edits where you don't want the soundtrack regenerated.
  • Localized change phrasing wins on the lightweight route. "Outfit", "lighting", "background" — pick one bucket.
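Prompts often contain apostrophes (note the `'\''` escape in the Wan example above); building `--input` with `jq -n` avoids hand-escaping. A sketch, assuming `jq` is installed and with placeholder URLs:

```shell
# Build the input JSON safely; jq handles quoting of the prompt text.
PROMPT="Preserve the speaker's face and lip movement; change the background to a beach at sunset."
INPUT=$(jq -n \
  --arg prompt "$PROMPT" \
  --arg video  "https://example.com/speaker.mp4" \
  '{prompt: $prompt, video: $video, audio_setting: "origin"}')
# Then pass it straight through:
# runcomfy run wan-ai/wan-2-7/edit-video --input "$INPUT" --output-dir ./out
```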

Video edit FAQ

What's the max duration of a clip? Wan 2.7 Edit-Video: 2–10s. Kling Motion Control: 10s (image orientation) or 30s (video orientation). Lucy Edit Restyle: matches the input.

What video formats are accepted? MP4, MOV (Lucy also takes WEBM and GIF). Source input must be ≤100MB on Wan 2.7.

Is face identity preserved? Yes — all three routes are designed for identity preservation. State the goal explicitly: "preserve face and motion".

Can the original audio be kept? Yes — set audio_setting: "origin" on Wan 2.7 Edit-Video, or keep_original_sound: true on Kling. Lucy preserves audio by default.

What's the highest resolution available here? 1080p on Wan 2.7 Edit-Video. Kling and Lucy cap at 720p.

Video edit vs text-to-video on RunComfy? Video edit transforms an existing clip (the look is largely fixed by the source). Text-to-video starts from a prompt only (the look is generated). Use video edit when you have a clip; use text-to-video for novel content.

Can I run multiple edits in one call? No. Each call applies one direction; for compound edits, chain calls and stitch.
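The chain-and-stitch step can be done with ffmpeg's concat demuxer once both passes are downloaded. A sketch with hypothetical filenames; the ffmpeg call is commented out because lossless concat assumes both clips share codec, resolution, and frame rate.

```shell
# List the two edited clips in playback order (concat-demuxer list format).
cat > clips.txt <<'EOF'
file 'outfit-swapped.mp4'
file 'motion-transferred.mp4'
EOF
# Stitch without re-encoding:
# ffmpeg -f concat -safe 0 -i clips.txt -c copy stitched.mp4
```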

Limitations

  • Each route inherits its model's limits. Wan 2.7 Edit-Video: 2–10s, 1080p ceiling. Kling Motion Control: 10s or 30s by orientation. Lucy Edit Restyle: 720p, no aspect control.
  • No multi-route blending. This skill picks one model per call. If you need outfit swap + motion transfer in the same output, that's two calls plus a stitch.
  • Brand-specific overrides — if the user named a specific model variant, route to that brand skill (wan-2-7) instead of forcing it through this router.

Exit codes

| code | meaning |
|---|---|
| 0 | edit succeeded |
| 64 | bad CLI args |
| 65 | bad input JSON / schema mismatch |
| 69 | upstream 5xx |
| 75 | retryable: timeout / 429 |
| 77 | not signed in or token rejected |

Full reference: docs.runcomfy.com/cli/troubleshooting.
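Since exit code 75 is the only retryable one, a thin wrapper can back off and retry just that case. A minimal sketch: the wrapper is generic, and the `runcomfy` usage line is commented because it assumes the CLI and a valid `$INPUT`.

```shell
# Retry a command while it exits 75 (retryable: timeout / 429), up to 3 retries.
run_with_retry() {
  tries=0
  until "$@"; do
    code=$?
    # Only 75 is retryable; propagate every other failure immediately.
    if [ "$code" -ne 75 ] || [ "$tries" -ge 3 ]; then
      return "$code"
    fi
    tries=$((tries + 1))
    sleep "$tries"   # simple linear backoff: 1s, 2s, 3s
  done
}

# Usage:
# run_with_retry runcomfy run wan-ai/wan-2-7/edit-video \
#   --input "$INPUT" --output-dir ./out
```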

How it works

The skill picks one of three edit endpoints (Wan 2.7 Edit-Video, Kling Motion Control, or Lucy Edit Restyle) based on user intent and invokes `runcomfy run <endpoint>` with the matching JSON body. The CLI POSTs to the RunComfy Model API, polls the request status every 2 seconds, and downloads the resulting video from the *.runcomfy.net / *.runcomfy.com URL into --output-dir. Ctrl-C cancels the in-flight request.

Security & Privacy

  • Token storage: runcomfy login writes the API token to ~/.config/runcomfy/token.json with mode 0600. Set RUNCOMFY_TOKEN env var in CI.
  • Input boundary: the video edit prompt is passed as JSON via --input. The CLI does NOT shell-expand. No shell-injection surface.
  • Third-party content: video / image URLs are fetched by the RunComfy server. Treat external URLs as untrusted — image-based prompt injection is a known risk for any video edit model.
  • Outbound endpoints: only model-api.runcomfy.net and *.runcomfy.net / *.runcomfy.com. No telemetry.
  • Generated-file size cap: the CLI aborts any video edit download > 2 GiB.

