AI Video Generation

Create AI videos with Sora 2, Veo 3, Seedance, Runway, and modern APIs, using reliable prompting and rendering workflows.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill

Install skill "AI Video Generation" with this command: npx skills add ivangdavila/video-generation

Setup

On first use, read setup.md.

When to Use

User needs to generate, edit, or scale AI videos with current models and APIs. Use this skill to choose the right model stack, write stronger motion prompts, and run reliable async video pipelines.

Architecture

User preferences persist in ~/video-generation/. See memory-template.md for setup.

~/video-generation/
├── memory.md      # Preferred providers, model routing, reusable shot recipes
└── history.md     # Optional run log for jobs, costs, and outputs

Quick Reference

| Topic | File |
|---|---|
| Initial setup | setup.md |
| Memory template | memory-template.md |
| Migration guide | migration.md |
| Model snapshot | benchmarks.md |
| Async API patterns | api-patterns.md |
| OpenAI Sora 2 | openai-sora.md |
| Google Veo 3.x | google-veo.md |
| Runway Gen-4 | runway.md |
| Luma Ray | luma.md |
| ByteDance Seedance | seedance.md |
| Kling | kling.md |
| Vidu | vidu.md |
| Pika via Fal | pika.md |
| MiniMax Hailuo | minimax-hailuo.md |
| Replicate routing | replicate.md |
| Open-source local models | open-source-video.md |
| Distribution playbook | promotion.md |

Core Rules

1. Resolve model aliases before API calls

Map community names to real API model IDs first. Examples: sora-2, sora-2-pro, veo-3.0-generate-001, gen4_turbo, gen4_aleph.
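A minimal sketch of the alias step. The alias table and `resolve_model` helper are illustrative, not part of any provider SDK; only the target model IDs come from the examples above:

```python
# Hypothetical alias map: community nicknames -> real API model IDs.
MODEL_ALIASES = {
    "sora": "sora-2",
    "sora pro": "sora-2-pro",
    "veo 3": "veo-3.0-generate-001",
    "runway turbo": "gen4_turbo",
    "runway aleph": "gen4_aleph",
}

def resolve_model(name: str) -> str:
    """Return a real API model ID, failing before any request is sent."""
    key = name.strip().lower()
    if key in MODEL_ALIASES.values():
        return key  # already a real model ID
    try:
        return MODEL_ALIASES[key]
    except KeyError:
        raise ValueError(f"Unknown model alias: {name!r}") from None
```

Resolving eagerly means a typo fails locally with a clear error instead of surfacing as an opaque 4xx from the provider.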

2. Route by task, not brand preference

| Task | First choice | Backup |
|---|---|---|
| Premium prompt-only generation | sora-2-pro | veo-3.1-generate-001 |
| Fast drafts at lower cost | veo-3.1-fast-generate-001 | gen4_turbo |
| Long-form cinematic shots | gen4_aleph | ray-2 |
| Strong image-to-video control | veo-3.0-generate-001 | gen4_turbo |
| Multi-shot narrative consistency | Seedance family | hailuo-2.3 |
| Local privacy-first workflows | Wan2.2 / HunyuanVideo | CogVideoX |
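The routing table can be encoded directly so task names, not brand habits, pick the model. The task keys and `pick_model` helper below are hypothetical; the model IDs mirror the table:

```python
# Task-based routing sketch; task keys are illustrative labels, not an API.
ROUTES = {
    "premium-prompt-only": ("sora-2-pro", "veo-3.1-generate-001"),
    "fast-draft": ("veo-3.1-fast-generate-001", "gen4_turbo"),
    "long-cinematic": ("gen4_aleph", "ray-2"),
    "image-to-video": ("veo-3.0-generate-001", "gen4_turbo"),
    "multi-shot": ("seedance", "hailuo-2.3"),
    "local-private": ("wan2.2", "cogvideox"),
}

def pick_model(task: str, unavailable: frozenset = frozenset()) -> str:
    """Return the first-choice model for a task, or its backup if unavailable."""
    first, backup = ROUTES[task]
    if first not in unavailable:
        return first
    if backup not in unavailable:
        return backup
    raise RuntimeError(f"No available model for task {task!r}")
```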

3. Draft cheap, finish expensive

Start with low duration and lower tier, validate motion and composition, then rerender winners with premium models or longer durations.

4. Design prompts as shot instructions

Always include subject, action, camera motion, lens style, lighting, and scene timing. For references and start/end frames, keep continuity constraints explicit.
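One way to enforce the "every field present" rule is a structured shot record that serializes to a prompt. The `Shot` class and field names are a sketch, not a provider schema:

```python
from dataclasses import dataclass

@dataclass
class Shot:
    subject: str
    action: str
    camera: str
    lens: str
    lighting: str
    timing: str

    def to_prompt(self) -> str:
        # All fields are required, so no shot ships without camera or lighting direction.
        return (
            f"{self.subject} {self.action}. "
            f"Camera: {self.camera}. Lens: {self.lens}. "
            f"Lighting: {self.lighting}. Timing: {self.timing}."
        )

shot = Shot(
    subject="A lighthouse keeper",
    action="climbs a spiral staircase",
    camera="slow upward crane follow",
    lens="35mm, shallow depth of field",
    lighting="warm lantern light against blue dusk",
    timing="0-4s climb, 4-6s pause at the window",
)
```

Structured shots also make continuity constraints easy to diff across a multi-shot sequence.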

5. Assume async and failure by default

Every provider pipeline must support queued jobs, polling/backoff, retries, cancellation, and signed-URL download before expiry.
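A minimal polling loop with exponential backoff, written against a generic `get_status(job_id)` callable; the status dict shape (`state`, `url`, `error`) is an assumption, since each provider returns its own schema:

```python
import time

def wait_for_job(get_status, job_id, timeout=600.0, base_delay=2.0, max_delay=30.0):
    """Poll get_status(job_id) until it succeeds or fails, backing off exponentially."""
    deadline = time.monotonic() + timeout
    delay = base_delay
    while time.monotonic() < deadline:
        status = get_status(job_id)
        if status["state"] == "succeeded":
            return status  # download status["url"] before the signed URL expires
        if status["state"] == "failed":
            raise RuntimeError(status.get("error", "generation failed"))
        time.sleep(delay)
        delay = min(delay * 2, max_delay)  # cap backoff growth
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

Cancellation and retry-on-transient-error wrap around this loop; the key property is that the caller always exits with a result, an explicit failure, or a timeout.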

6. Keep a fallback chain

If the preferred model is blocked or overloaded:

  1. Same provider, lower tier
  2. Equivalent cross-provider model
  3. Open model / local run
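The chain above can be driven by one loop that tries each model in order. The `submit` callable and its return shape are illustrative stand-ins for a real provider client:

```python
def generate_with_fallback(prompt, chain, submit):
    """Try each model ID in order; return (model_id, job) from the first that accepts."""
    errors = []
    for model_id in chain:
        try:
            return model_id, submit(model_id, prompt)
        except Exception as exc:  # blocked, overloaded, or rejected
            errors.append((model_id, exc))
    raise RuntimeError(f"All models in chain failed: {errors}")
```

Keeping the chain as plain data (e.g. in ~/video-generation/memory.md) lets users reorder preferences without code changes.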

Common Traps

  • Using nickname-only model labels in code -> avoidable API failures
  • Pushing 8-10 second generations before validating a 3-5 second draft -> wasted credits
  • Cropping after generation instead of generating native ratio -> lower composition quality
  • Ignoring prompt enhancement toggles -> tone drift across providers
  • Reusing expired output URLs -> broken export workflows
  • Treating all providers as synchronous -> stalled jobs and bad timeout handling

External Endpoints

| Provider | Endpoint | Data Sent | Purpose |
|---|---|---|---|
| OpenAI | api.openai.com | Prompt text, optional input images/video refs | Sora 2 video generation |
| Google Vertex AI | aiplatform.googleapis.com | Prompt text, optional image input, generation params | Veo 3.x generation |
| Runway | api.dev.runwayml.com | Prompt text, optional input media | Gen-4 generation and image-to-video |
| Luma | api.lumalabs.ai | Prompt text, optional keyframes/start-end images | Ray generation |
| Fal | queue.fal.run | Prompt text, optional input media | Pika and Hailuo hosted APIs |
| Replicate | api.replicate.com | Prompt text, optional input media | Multi-model routing and experimentation |
| Vidu | api.vidu.com | Prompt text, optional start/end/reference images | Vidu text/image/reference video APIs |
| Tencent MPS | mps.tencentcloudapi.com | Prompt text and generation parameters | Unified AIGC video task APIs |

No other data is sent externally.

Security & Privacy

Data that leaves your machine:

  • Prompt text
  • Optional reference images or clips
  • Requested rendering parameters (duration, resolution, aspect ratio)

Data that stays local:

  • Provider preferences in ~/video-generation/memory.md
  • Optional local job history in ~/video-generation/history.md

This skill does NOT:

  • Store API keys in project files
  • Upload media outside requested provider calls
  • Delete local assets unless the user asks

Trust

This skill can send prompts and media references to third-party AI providers. Only install if you trust those providers with your content.

Related Skills

Install with clawhub install <slug> if the user confirms:

  • image-generation - Build still concepts and keyframes before video generation
  • image-edit - Prepare clean references, masks, and style frames
  • video-edit - Post-process generated clips and final exports
  • video-captions - Add subtitle and text overlay workflows
  • ffmpeg - Compose, transcode, and package production outputs

Feedback

  • If useful: clawhub star video-generation
  • Stay updated: clawhub sync

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
