giggle-generation-video

Supports text-to-video and image-to-video (start/end frame). Use when the user needs to generate video, create short videos, or convert text to video. Use cases: (1) Generate video from text description, (2) Use reference images as start/end frame for image-to-video, (3) Customize model, aspect ratio, duration, resolution. Triggers: generate video, text-to-video, image-to-video, AI video.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install the skill with: `npx skills add patches429/giggle-generation-video`


Giggle Video Generation

Source: giggle-official/skills · API: giggle.pro

Generates AI videos via giggle.pro's Generation API. Supports text-to-video and image-to-video. Submit a task, then query it when the user asks. No polling, no cron, no file writes; all operations run via exec.


Installation Requirements

| Requirement | Value |
| --- | --- |
| Binary | python3 |
| Environment | GIGGLE_API_KEY (required; obtain from https://giggle.pro/) |
| Pip | requests |

Set GIGGLE_API_KEY before use. The script will prompt if not configured.

No Retry on Error: If script execution encounters an error, do not retry. Report the error to the user directly and stop.
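The two rules above (key must be set, never retry) can be sketched as a small wrapper. This is a hypothetical helper, not part of the skill's scripts; the script path and argument passing are assumptions based on the examples later in this document.

```python
import os
import subprocess
import sys

def run_generation(args):
    """Run the generation script exactly once; never retry on failure."""
    if not os.environ.get("GIGGLE_API_KEY"):
        # Fail fast when the key is missing instead of letting the script prompt.
        sys.exit("GIGGLE_API_KEY is not set; obtain a key from https://giggle.pro/")
    result = subprocess.run(
        ["python3", "scripts/generation_api.py", *args],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        # Per the no-retry rule: report the error and stop, do not resubmit.
        sys.exit(f"Generation script failed: {result.stderr.strip()}")
    return result.stdout
```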


Supported Models

| Model | Supported Durations (s) | Default | Description |
| --- | --- | --- | --- |
| grok | 6, 10 | 6 | Strong overall capability, recommended |
| grok-fast | 6, 10 | 6 | grok fast version |
| sora2 | 4, 8, 12 | 4 | OpenAI Sora 2 |
| sora2-pro | 4, 8, 12 | 4 | Sora 2 Pro |
| sora2-fast | 10, 15 | 10 | Sora 2 Fast |
| sora2-pro-fast | 10, 15 | 10 | Sora 2 Pro Fast |
| kling25 | 5, 10 | 5 | Kling video model |
| seedance15-pro | 4, 8, 12 | 4 | Seedance Pro (with audio) |
| seedance15-pro-no-audio | 4, 8, 12 | 4 | Seedance Pro (no audio) |
| veo31 | 4, 6, 8 | 4 | Google Veo 3.1 (with audio) |
| veo31-no-audio | 4, 6, 8 | 4 | Google Veo 3.1 (no audio) |
| minimax23 | 6 | 6 | MiniMax model |
| wan25 | 5, 10 | 10 | Wanxiang model |

Note: `--duration` must be one of the model's supported durations; otherwise the API returns an error.


Frame Reference (Image-to-Video)

For image-to-video, --start-frame and --end-frame support three mutually exclusive formats:

| Method | Format | Example |
| --- | --- | --- |
| asset_id | `asset_id:<ID>` | `asset_id:lkllv0yv81` |
| url | `url:<URL>` | `url:https://example.com/img.jpg` |
| base64 | `base64:<DATA>` | `base64:iVBORw0KGgo...` |

Each frame parameter can only use one of these methods.
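A minimal sketch of how a client might validate the `method:payload` frame format before passing it along. The function name is hypothetical; the accepted prefixes come from the table above.

```python
def parse_frame(value: str) -> tuple[str, str]:
    """Split a frame argument into (method, payload), e.g. 'url:https://...'."""
    # partition splits on the FIRST colon only, so URLs keep their scheme intact.
    method, sep, payload = value.partition(":")
    if method not in ("asset_id", "url", "base64") or not sep or not payload:
        raise ValueError(
            f"expected asset_id:<ID>, url:<URL>, or base64:<DATA>, got: {value!r}"
        )
    return method, payload
```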


Execution Flow: Submit and Query

Video generation is asynchronous (typically 60–300 seconds). Submit a task to get task_id, then query when the user wants to check status. All commands run via exec; API key from system env.


Step 1: Submit Task

First send a message to the user: "Video generation submitted. Usually takes 1–5 minutes. You can ask me about the progress anytime."

```bash
# Text-to-video (default grok-fast)
python3 scripts/generation_api.py \
  --prompt "Camera slowly pushes forward, person smiling in frame" \
  --model grok-fast --duration 6 \
  --aspect-ratio 16:9 --resolution 720p

# Image-to-video - use asset_id as start frame
python3 scripts/generation_api.py \
  --prompt "Person slowly turns around" \
  --start-frame "asset_id:lkllv0yv81" \
  --model grok-fast --duration 6 \
  --aspect-ratio 16:9 --resolution 720p

# Image-to-video - use URL as start frame
python3 scripts/generation_api.py \
  --prompt "Scenery from still to motion" \
  --start-frame "url:https://example.com/img.jpg" \
  --model grok-fast --duration 6

# Image-to-video - both start and end frame
python3 scripts/generation_api.py \
  --prompt "Scene transition" \
  --start-frame "asset_id:abc123" \
  --end-frame "url:https://example.com/end.jpg" \
  --model grok --duration 6
```

Response example:

```json
{"status": "started", "task_id": "55bf24ca-e92a-4d9b-a172-8f585a7c5969"}
```

Store task_id in memory (addMemory):

```
giggle-generation-video task_id: xxx (submitted: YYYY-MM-DD HH:mm)
```
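The submit response can be turned into that memory note mechanically. A sketch, assuming the response JSON shown above; `memory_note` is a hypothetical helper, and the actual addMemory call is assumed to be provided by the host environment.

```python
import json
from datetime import datetime

def memory_note(stdout: str) -> str:
    """Build the memory-note line from the submit command's stdout."""
    resp = json.loads(stdout)
    if resp.get("status") != "started":
        raise RuntimeError(f"unexpected submit response: {resp}")
    stamp = datetime.now().strftime("%Y-%m-%d %H:%M")
    return f'giggle-generation-video task_id: {resp["task_id"]} (submitted: {stamp})'
```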

Step 2: Query When User Asks

When the user asks about video progress (e.g. "is my video ready?", "progress?"), run:

```bash
python3 scripts/generation_api.py --query --task-id <task_id>
```

Output handling:

| stdout pattern | Action |
| --- | --- |
| Plain text with video links (contains "视频已就绪", i.e. "video is ready") | Forward to user as-is |
| Plain text with an error message | Forward to user as-is |
| JSON `{"status": "processing", "task_id": "..."}` | Tell the user "Still in progress, please ask again in a moment" |

Link return rule: Video links in results must be full signed URLs (with Policy, Key-Pair-Id, Signature query params). Keep as-is when forwarding.
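Before forwarding, a client could sanity-check that a link still carries its signing parameters. A sketch using only the three query-parameter names stated above; the CDN hostname in the usage example is made up.

```python
from urllib.parse import urlparse, parse_qs

def is_signed_url(url: str) -> bool:
    """True if the URL keeps the Policy, Key-Pair-Id, and Signature params."""
    params = parse_qs(urlparse(url).query)
    return all(k in params for k in ("Policy", "Key-Pair-Id", "Signature"))
```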


New Request vs Query Old Task

When the user initiates a new video generation request, you must run Step 1 to submit a new task. Do not reuse an old task_id from memory.

Only when the user explicitly asks about a previous task's progress should you query the old task_id from memory.


Parameter Reference

| Parameter | Default | Description |
| --- | --- | --- |
| `--prompt` | required | Video description prompt |
| `--model` | grok | See the "Supported Models" table |
| `--duration` | model default | Must be one of the model's supported durations |
| `--aspect-ratio` | 16:9 | 16:9, 9:16, 1:1, 3:4, 4:3 |
| `--resolution` | 720p | 480p, 720p, 1080p |
| `--start-frame` | - | Image-to-video start frame: `asset_id:<ID>`, `url:<URL>`, or `base64:<DATA>` |
| `--end-frame` | - | Image-to-video end frame, same format as `--start-frame` |

Note: the `base64:` method takes a base64-encoded image. Pass the raw Base64 string directly; do not add a `data:image/xxx;base64,` prefix.
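A sketch of encoding a local image into the expected argument form, producing the raw Base64 string with no data-URI prefix. The helper name is hypothetical.

```python
import base64

def frame_arg_from_file(path: str) -> str:
    """Encode a local image file as a 'base64:<DATA>' frame argument."""
    with open(path, "rb") as f:
        data = base64.b64encode(f.read()).decode("ascii")
    # Raw Base64 only: no "data:image/...;base64," prefix.
    return f"base64:{data}"
```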


Interaction Guide

When the user's request is vague, guide them through the steps below. If the user has already provided enough information, run the command directly.

Step 1: Model Selection (required)

Before generating, you must introduce the available models and let the user choose: display the model list from the "Supported Models" table and wait for an explicit choice before continuing.

Step 2: Video Duration

For the chosen model, show supported duration options. Default to the model's default duration.

Step 3: Generation Mode

Question: "Do you need reference images as start/end frame?"
Options: No - text-to-video only / Yes - image-to-video (set start/end frame)

Step 4: Aspect Ratio

Question: "What aspect ratio do you need?"
Options: 16:9 - Landscape (recommended) / 9:16 - Portrait / 1:1 - Square

Step 5: Execute and Display

Follow the flow: send message → Step 1 submit → user asks → Step 2 query. Forward exec stdout to the user as-is.

