eachlabs-video-edit

Edit, transform, extend, upscale, and enhance videos using EachLabs AI models. Supports lip sync, video translation, subtitle generation, audio merging, style transfer, and video extension. Use when the user wants to edit or transform existing video content.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install the skill with:

    npx skills add eachlabs/skills/eachlabs-skills-eachlabs-video-edit

EachLabs Video Edit

Edit, transform, and enhance existing videos using 25+ AI models via the EachLabs Predictions API.

Authentication

Header: X-API-Key: <your-api-key>

Set the EACHLABS_API_KEY environment variable. Get your key at eachlabs.ai.
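For example, in a POSIX shell:

```shell
# Set once per shell session; every curl example below reads this variable.
# Replace the placeholder with the key from eachlabs.ai.
export EACHLABS_API_KEY="your-api-key"
```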

Model Selection Guide

Video Extension

| Model | Slug | Best For |
|---|---|---|
| Veo 3.1 Extend | `veo3-1-extend-video` | Best quality extension |
| Veo 3.1 Fast Extend | `veo3-1-fast-extend-video` | Fast extension |
| PixVerse v5 Extend | `pixverse-v5-extend` | PixVerse extension |
| PixVerse v4.5 Extend | `pixverse-v4-5-extend` | Older PixVerse extension |

Lip Sync & Talking Head

| Model | Slug | Best For |
|---|---|---|
| Sync Lipsync v2 Pro | `sync-lipsync-v2-pro` | Best lip sync quality |
| PixVerse Lip Sync | `pixverse-lip-sync` | PixVerse lip sync |
| LatentSync | `latentsync` | Open-source lip sync |
| Video Retalking | `video-retalking` | Audio-based lip sync |

Video Transformation

| Model | Slug | Best For |
|---|---|---|
| Runway Gen4 Aleph | `runway-gen4-aleph` | Video transformation |
| Kling O1 Video Edit | `kling-o1-video-to-video-edit` | AI video editing |
| Kling O1 V2V Reference | `kling-o1-video-to-video-reference` | Reference-based edit |
| ByteDance Video Stylize | `bytedance-video-stylize` | Style transfer |
| Wan v2.2 Animate Move | `wan-v2-2-14b-animate-move` | Motion animation |
| Wan v2.2 Animate Replace | `wan-v2-2-14b-animate-replace` | Object replacement |

Video Upscaling & Enhancement

| Model | Slug | Best For |
|---|---|---|
| Topaz Upscale Video | `topaz-upscale-video` | Best quality upscale |
| Luma Ray 2 Video Reframe | `luma-dream-machine-ray-2-video-reframe` | Video reframing |
| Luma Ray 2 Flash Reframe | `luma-dream-machine-ray-2-flash-video-reframe` | Fast reframing |

Audio & Subtitles

| Model | Slug | Best For |
|---|---|---|
| FFmpeg Merge Audio Video | `ffmpeg-api-merge-audio-video` | Merge audio track |
| MMAudio V2 | `mm-audio-v-2` | Add audio to video |
| MMAudio | `mmaudio` | Add audio to video |
| Auto Subtitle | `auto-subtitle` | Generate subtitles |
| Merge Videos | `merge-videos` | Concatenate videos |

Video Translation

| Model | Slug | Best For |
|---|---|---|
| Heygen Video Translate | `heygen-video-translate` | Translate video speech |

Motion Transfer

| Model | Slug | Best For |
|---|---|---|
| Motion Fast | `motion-fast` | Fast motion transfer |
| Infinitalk V2V | `infinitalk-video-to-video` | Talking head from video |

Face Swap (Video)

| Model | Slug | Best For |
|---|---|---|
| Faceswap Video | `faceswap-video` | Swap face in video |

Prediction Flow

  1. Check the model: `GET https://api.eachlabs.ai/v1/model?slug=<slug>` validates that the model exists and returns its `request_schema` with the exact input parameters. Always do this before creating a prediction.
  2. Create the prediction: `POST https://api.eachlabs.ai/v1/prediction` with the model slug, version `"0.0.1"`, and an `input` object matching the schema.
  3. Poll `GET https://api.eachlabs.ai/v1/prediction/{id}` until `status` is `"success"` or `"failed"`.
  4. Extract the output video URL from the response.
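The four steps above can be sketched in shell. This is a sketch, not a definitive client: `jq` is assumed to be installed, and the response field names `id`, `status`, and `output` are assumptions to verify against an actual API response. The network calls are left commented so the helper can be tested offline.

```shell
#!/bin/sh
# Sketch of the full prediction flow for one model (auto-subtitle).
# Assumptions: jq is installed; the response fields "id", "status", and
# "output" match the real API -- confirm against a live response.

API=https://api.eachlabs.ai/v1

# Helper: build the prediction request body for a slug and an input JSON object.
build_prediction_body() {
  printf '{"model":"%s","version":"0.0.1","input":%s}' "$1" "$2"
}

# 1. Check the model and inspect its request_schema.
# curl -s -H "X-API-Key: $EACHLABS_API_KEY" \
#   "$API/model?slug=auto-subtitle" | jq .request_schema

# 2. Create the prediction and capture its id.
# BODY=$(build_prediction_body auto-subtitle '{"video_url":"https://example.com/video.mp4"}')
# PRED_ID=$(curl -s -X POST "$API/prediction" \
#   -H "Content-Type: application/json" -H "X-API-Key: $EACHLABS_API_KEY" \
#   -d "$BODY" | jq -r .id)

# 3. Poll until the prediction settles, then 4. print the output URL.
# while STATUS=$(curl -s -H "X-API-Key: $EACHLABS_API_KEY" \
#     "$API/prediction/$PRED_ID" | jq -r .status); do
#   case "$STATUS" in
#     success|failed) break ;;
#     *) sleep 5 ;;
#   esac
# done
# curl -s -H "X-API-Key: $EACHLABS_API_KEY" "$API/prediction/$PRED_ID" | jq .output
```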

Examples

Extend a Video with Veo 3.1

```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "veo3-1-extend-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "prompt": "Continue the scene with the camera slowly pulling back"
    }
  }'
```

Lip Sync with Sync v2 Pro

```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "sync-lipsync-v2-pro",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/talking-head.mp4",
      "audio_url": "https://example.com/new-audio.mp3"
    }
  }'
```

Add Subtitles

```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "auto-subtitle",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4"
    }
  }'
```

Merge Audio with Video

```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "ffmpeg-api-merge-audio-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/video.mp4",
      "audio_url": "https://example.com/music.mp3",
      "start_offset": 0
    }
  }'
```

Upscale Video with Topaz

```shell
curl -X POST https://api.eachlabs.ai/v1/prediction \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $EACHLABS_API_KEY" \
  -d '{
    "model": "topaz-upscale-video",
    "version": "0.0.1",
    "input": {
      "video_url": "https://example.com/low-res-video.mp4"
    }
  }'
```

Parameter Reference

See references/MODELS.md for complete parameter details for each model.


Related Skills

Related by shared tags or category signals:

- poster-design-generation
- eachlabs-image-edit
- subtitle-generation
- eachlabs-fashion-ai