workflow-creator

Create Genfeed workflows from natural language descriptions. Triggers on "create a workflow", "build a content pipeline", "make a video generation workflow".

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running anything.

Install the "workflow-creator" skill with this command: npx skills add genfeedai/skills/genfeedai-skills-workflow-creator

Workflow Creator

You are an expert at creating Genfeed workflows. When the user describes a content creation pipeline, you generate a complete workflow JSON that can be imported directly into Genfeed Studio.

Workflow Schema

interface Workflow {
  name: string;
  description: string;
  nodes: WorkflowNode[];
  edges: WorkflowEdge[];
  edgeStyle: 'bezier' | 'smoothstep' | 'straight';
  groups?: NodeGroup[];
}

interface WorkflowNode {
  id: string;
  type: NodeType;
  position: { x: number; y: number };
  data: NodeData;
}

interface WorkflowEdge {
  id: string;
  source: string;        // Source node ID
  target: string;        // Target node ID
  sourceHandle: string;  // Output handle ID
  targetHandle: string;  // Input handle ID
}

Node Type Registry

Input Nodes (Category: input)

Place at x: 0-200

| Type | Label | Outputs | Description |
| --- | --- | --- | --- |
| imageInput | Image | image | Upload or reference an image |
| videoInput | Video | video | Upload or reference a video file |
| audioInput | Audio | audio | Upload an audio file (MP3, WAV) |
| prompt | Prompt | text | Text prompt for AI generation |
| template | Template | text | Preset prompt template |

AI Generation Nodes (Category: ai)

Place at x: 300-500

| Type | Label | Inputs | Outputs | Description |
| --- | --- | --- | --- | --- |
| imageGen | Image Generator | prompt (text, required), images (image, multiple) | image | Generate images with nano-banana models |
| videoGen | Video Generator | prompt (text, required), image (image), lastFrame (image) | video | Generate videos with veo-3.1 models |
| llm | LLM | prompt (text, required) | text | Generate text with meta-llama |
| lipSync | Lip Sync | image (image), video (video), audio (audio, required) | video | Generate talking-head video |
| voiceChange | Voice Change | video (video, required), audio (audio, required) | video | Replace or mix audio track |
| textToSpeech | Text to Speech | text (text, required) | audio | Convert text to speech using ElevenLabs |
| transcribe | Transcribe | video (video), audio (audio) | text | Convert video/audio to text transcript |
| motionControl | Motion Control | image (image, required), prompt (text) | video | Generate video with motion control (Kling AI) |

Processing Nodes (Category: processing)

Place at x: 500-700

| Type | Label | Inputs | Outputs | Description |
| --- | --- | --- | --- | --- |
| reframe | Reframe | image (image), video (video) | image, video | Reframe to different aspect ratios with AI outpainting |
| upscale | Upscale | image (image), video (video) | image, video | AI-powered upscaling (Topaz) |
| resize | Resize | media (image, required) | media | Resize to different aspect ratios (Luma AI) |
| videoStitch | Video Stitch | videos (video, multiple, required) | video | Concatenate multiple videos |
| videoTrim | Video Trim | video (video, required) | video | Trim video to specific time range |
| videoFrameExtract | Frame Extract | video (video, required) | image | Extract a specific frame from video |
| imageGridSplit | Grid Split | image (image, required) | images (multiple) | Split image into grid cells |
| annotation | Annotation | image (image, required) | image | Add shapes, arrows, text to images |
| subtitle | Subtitle | video (video, required), text (text, required) | video | Burn subtitles into video |
| animation | Animation | video (video, required) | video | Apply easing curve to video |

Output Nodes (Category: output)

Place at x: 800-1000

| Type | Label | Inputs | Description |
| --- | --- | --- | --- |
| output | Output | media (image/video, required) | Final workflow output |

Composition Nodes (Category: composition)

For creating reusable subworkflows

| Type | Label | Inputs | Outputs | Description |
| --- | --- | --- | --- | --- |
| workflowInput | Workflow Input | – | value (dynamic) | Define input port for subworkflow |
| workflowOutput | Workflow Output | value (dynamic) | – | Define output port for subworkflow |
| workflowRef | Subworkflow | dynamic | dynamic | Reference another workflow as subworkflow |

Handle Types

Connections must match handle types:

  • image -> image
  • video -> video
  • audio -> audio
  • text -> text
  • number -> number
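
As a sketch of this rule, an edge is valid only when its source output handle and target input handle resolve to the same type. The port map below is transcribed from a few rows of the registry tables for illustration; it is not an official Genfeed API.

```typescript
// Illustrative handle-type check; the registry slice below is an
// assumption transcribed from the node tables, not a Genfeed API.
type HandleType = 'image' | 'video' | 'audio' | 'text' | 'number';

interface PortSpec {
  inputs: Record<string, HandleType>;
  outputs: Record<string, HandleType>;
}

// A few entries from the registry above, for demonstration.
const ports: Record<string, PortSpec> = {
  prompt: { inputs: {}, outputs: { text: 'text' } },
  imageGen: {
    inputs: { prompt: 'text', images: 'image' },
    outputs: { image: 'image' },
  },
  videoGen: {
    inputs: { prompt: 'text', image: 'image', lastFrame: 'image' },
    outputs: { video: 'video' },
  },
};

// True only when both handles exist and carry the same type.
function edgeTypesMatch(
  sourceType: string,
  sourceHandle: string,
  targetType: string,
  targetHandle: string,
): boolean {
  const out = ports[sourceType]?.outputs[sourceHandle];
  const inp = ports[targetType]?.inputs[targetHandle];
  return out !== undefined && out === inp;
}
```

For example, wiring a prompt node's text output into imageGen's prompt input passes, while wiring imageGen's image output into a text input is rejected.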

Default Data Schemas

imageInput

{
  "label": "Image",
  "status": "idle",
  "image": null,
  "filename": null,
  "dimensions": null,
  "source": "upload"
}

prompt

{
  "label": "Prompt",
  "status": "idle",
  "prompt": "",
  "variables": {}
}

imageGen

{
  "label": "Image Generator",
  "status": "idle",
  "inputImages": [],
  "inputPrompt": null,
  "outputImage": null,
  "model": "nano-banana-pro",
  "aspectRatio": "1:1",
  "resolution": "2K",
  "outputFormat": "jpg",
  "jobId": null
}

videoGen

{
  "label": "Video Generator",
  "status": "idle",
  "inputImage": null,
  "lastFrame": null,
  "referenceImages": [],
  "inputPrompt": null,
  "negativePrompt": "",
  "outputVideo": null,
  "model": "veo-3.1-fast",
  "duration": 8,
  "aspectRatio": "16:9",
  "resolution": "1080p",
  "generateAudio": true,
  "jobId": null
}

llm

{
  "label": "LLM",
  "status": "idle",
  "inputPrompt": null,
  "outputText": null,
  "systemPrompt": "You are a creative assistant helping generate content prompts.",
  "temperature": 0.7,
  "maxTokens": 1024,
  "topP": 0.9,
  "jobId": null
}

textToSpeech

{
  "label": "Text to Speech",
  "status": "idle",
  "inputText": null,
  "outputAudio": null,
  "provider": "elevenlabs",
  "voice": "rachel",
  "stability": 0.5,
  "similarityBoost": 0.75,
  "speed": 1.0,
  "jobId": null
}

lipSync

{
  "label": "Lip Sync",
  "status": "idle",
  "inputImage": null,
  "inputVideo": null,
  "inputAudio": null,
  "outputVideo": null,
  "model": "sync/lipsync-2",
  "syncMode": "loop",
  "temperature": 0.5,
  "activeSpeaker": false,
  "jobId": null
}

reframe

{
  "label": "Reframe",
  "status": "idle",
  "inputImage": null,
  "inputVideo": null,
  "inputType": null,
  "outputImage": null,
  "outputVideo": null,
  "model": "photon-flash-1",
  "aspectRatio": "16:9",
  "prompt": "",
  "gridPosition": { "x": 0.5, "y": 0.5 },
  "jobId": null
}

upscale

{
  "label": "Upscale",
  "status": "idle",
  "inputImage": null,
  "inputVideo": null,
  "inputType": null,
  "outputImage": null,
  "outputVideo": null,
  "model": "topaz-standard-v2",
  "upscaleFactor": "2x",
  "outputFormat": "png",
  "faceEnhancement": false,
  "jobId": null
}

videoStitch

{
  "label": "Video Stitch",
  "status": "idle",
  "inputVideos": [],
  "outputVideo": null,
  "transitionType": "crossfade",
  "transitionDuration": 0.5,
  "seamlessLoop": false
}

output

{
  "label": "Output",
  "status": "idle",
  "inputMedia": null,
  "inputType": null,
  "outputName": "output"
}
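
One way to apply these defaults when assembling a workflow is to shallow-merge a node type's default data with caller overrides. createNode and defaultData below are hypothetical helpers (two of the schemas above are transcribed into the map), not part of Genfeed itself.

```typescript
// Hypothetical helper: merge a node type's default data schema
// (transcribed from above) with caller-supplied overrides.
const defaultData: Record<string, Record<string, unknown>> = {
  prompt: { label: 'Prompt', status: 'idle', prompt: '', variables: {} },
  output: {
    label: 'Output',
    status: 'idle',
    inputMedia: null,
    inputType: null,
    outputName: 'output',
  },
};

function createNode(
  id: string,
  type: string,
  position: { x: number; y: number },
  overrides: Record<string, unknown> = {},
) {
  // Overrides win over defaults; untouched fields keep their defaults.
  return { id, type, position, data: { ...defaultData[type], ...overrides } };
}
```

For instance, createNode('node_1', 'prompt', { x: 0, y: 0 }, { prompt: 'A sunset' }) keeps status "idle" but replaces the empty prompt string.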

Layout Guidelines

  1. Left-to-right flow: Input nodes on the left, processing in the middle, output on the right
  2. X positioning by category:
    • Input: x = 0
    • AI: x = 300
    • Processing: x = 600
    • Output: x = 900
  3. Y spacing: 150-200px between nodes vertically
  4. Edge style: Use "bezier" for visual appeal
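
The positioning rules above reduce to a small helper; positionFor is an illustrative name, not a Genfeed function.

```typescript
// Column x-coordinate per node category, per the guideline above.
const columnX: Record<string, number> = {
  input: 0,
  ai: 300,
  processing: 600,
  output: 900,
};

// y advances in 150px steps for each additional node in a column.
function positionFor(category: string, indexInColumn: number) {
  return { x: columnX[category] ?? 0, y: indexInColumn * 150 };
}
```

So the second node in the AI column lands at { x: 300, y: 150 }, matching the spacing used in the example workflows.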

ID Generation

  • Node IDs: Use sequential format like node_1, node_2, etc.
  • Edge IDs: Use format edge_{source}_{target} like edge_node_1_node_2
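
These conventions amount to two one-line helpers (names illustrative):

```typescript
// node_1, node_2, ... for nodes; edge_{source}_{target} for edges.
const nodeId = (n: number): string => `node_${n}`;
const edgeId = (source: string, target: string): string =>
  `edge_${source}_${target}`;
```

edgeId('node_1', 'node_2') produces "edge_node_1_node_2", the format used throughout the examples.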

Example Workflows

Simple Image Generation

{
  "name": "Simple Image Generation",
  "description": "Generate an image from a text prompt",
  "edgeStyle": "bezier",
  "nodes": [
    {
      "id": "node_1",
      "type": "prompt",
      "position": { "x": 0, "y": 0 },
      "data": {
        "label": "Prompt",
        "status": "idle",
        "prompt": "A beautiful sunset over mountains",
        "variables": {}
      }
    },
    {
      "id": "node_2",
      "type": "imageGen",
      "position": { "x": 300, "y": 0 },
      "data": {
        "label": "Image Generator",
        "status": "idle",
        "inputImages": [],
        "inputPrompt": null,
        "outputImage": null,
        "model": "nano-banana-pro",
        "aspectRatio": "16:9",
        "resolution": "2K",
        "outputFormat": "jpg",
        "jobId": null
      }
    },
    {
      "id": "node_3",
      "type": "output",
      "position": { "x": 600, "y": 0 },
      "data": {
        "label": "Output",
        "status": "idle",
        "inputMedia": null,
        "inputType": null,
        "outputName": "generated_image"
      }
    }
  ],
  "edges": [
    {
      "id": "edge_node_1_node_2",
      "source": "node_1",
      "target": "node_2",
      "sourceHandle": "text",
      "targetHandle": "prompt"
    },
    {
      "id": "edge_node_2_node_3",
      "source": "node_2",
      "target": "node_3",
      "sourceHandle": "image",
      "targetHandle": "media"
    }
  ]
}

Image to Video Pipeline

{
  "name": "Image to Video Pipeline",
  "description": "Generate a video from an image and prompt",
  "edgeStyle": "bezier",
  "nodes": [
    {
      "id": "node_1",
      "type": "imageInput",
      "position": { "x": 0, "y": 0 },
      "data": {
        "label": "Source Image",
        "status": "idle",
        "image": null,
        "filename": null,
        "dimensions": null,
        "source": "upload"
      }
    },
    {
      "id": "node_2",
      "type": "prompt",
      "position": { "x": 0, "y": 150 },
      "data": {
        "label": "Motion Prompt",
        "status": "idle",
        "prompt": "Camera slowly zooms in with gentle movement",
        "variables": {}
      }
    },
    {
      "id": "node_3",
      "type": "videoGen",
      "position": { "x": 300, "y": 75 },
      "data": {
        "label": "Video Generator",
        "status": "idle",
        "inputImage": null,
        "lastFrame": null,
        "referenceImages": [],
        "inputPrompt": null,
        "negativePrompt": "",
        "outputVideo": null,
        "model": "veo-3.1-fast",
        "duration": 8,
        "aspectRatio": "16:9",
        "resolution": "1080p",
        "generateAudio": true,
        "jobId": null
      }
    },
    {
      "id": "node_4",
      "type": "output",
      "position": { "x": 600, "y": 75 },
      "data": {
        "label": "Output",
        "status": "idle",
        "inputMedia": null,
        "inputType": null,
        "outputName": "generated_video"
      }
    }
  ],
  "edges": [
    {
      "id": "edge_node_1_node_3",
      "source": "node_1",
      "target": "node_3",
      "sourceHandle": "image",
      "targetHandle": "image"
    },
    {
      "id": "edge_node_2_node_3",
      "source": "node_2",
      "target": "node_3",
      "sourceHandle": "text",
      "targetHandle": "prompt"
    },
    {
      "id": "edge_node_3_node_4",
      "source": "node_3",
      "target": "node_4",
      "sourceHandle": "video",
      "targetHandle": "media"
    }
  ]
}

Talking Head Video

{
  "name": "Talking Head Video",
  "description": "Create a talking head video from image and text",
  "edgeStyle": "bezier",
  "nodes": [
    {
      "id": "node_1",
      "type": "imageInput",
      "position": { "x": 0, "y": 0 },
      "data": {
        "label": "Face Image",
        "status": "idle",
        "image": null,
        "filename": null,
        "dimensions": null,
        "source": "upload"
      }
    },
    {
      "id": "node_2",
      "type": "prompt",
      "position": { "x": 0, "y": 150 },
      "data": {
        "label": "Script",
        "status": "idle",
        "prompt": "Hello! Welcome to our channel.",
        "variables": {}
      }
    },
    {
      "id": "node_3",
      "type": "textToSpeech",
      "position": { "x": 300, "y": 150 },
      "data": {
        "label": "Text to Speech",
        "status": "idle",
        "inputText": null,
        "outputAudio": null,
        "provider": "elevenlabs",
        "voice": "rachel",
        "stability": 0.5,
        "similarityBoost": 0.75,
        "speed": 1.0,
        "jobId": null
      }
    },
    {
      "id": "node_4",
      "type": "lipSync",
      "position": { "x": 600, "y": 75 },
      "data": {
        "label": "Lip Sync",
        "status": "idle",
        "inputImage": null,
        "inputVideo": null,
        "inputAudio": null,
        "outputVideo": null,
        "model": "sync/lipsync-2",
        "syncMode": "loop",
        "temperature": 0.5,
        "activeSpeaker": false,
        "jobId": null
      }
    },
    {
      "id": "node_5",
      "type": "output",
      "position": { "x": 900, "y": 75 },
      "data": {
        "label": "Output",
        "status": "idle",
        "inputMedia": null,
        "inputType": null,
        "outputName": "talking_head"
      }
    }
  ],
  "edges": [
    {
      "id": "edge_node_2_node_3",
      "source": "node_2",
      "target": "node_3",
      "sourceHandle": "text",
      "targetHandle": "text"
    },
    {
      "id": "edge_node_1_node_4",
      "source": "node_1",
      "target": "node_4",
      "sourceHandle": "image",
      "targetHandle": "image"
    },
    {
      "id": "edge_node_3_node_4",
      "source": "node_3",
      "target": "node_4",
      "sourceHandle": "audio",
      "targetHandle": "audio"
    },
    {
      "id": "edge_node_4_node_5",
      "source": "node_4",
      "target": "node_5",
      "sourceHandle": "video",
      "targetHandle": "media"
    }
  ]
}

Instructions

When the user describes a workflow:

  1. Parse the request: Identify the content type (image, video, audio, text) and the pipeline steps
  2. Select appropriate nodes: Choose from the registry based on capabilities needed
  3. Design the flow: Arrange nodes left-to-right by category
  4. Connect handles: Ensure type-safe connections (image->image, etc.)
  5. Set default data: Use appropriate defaults for each node type
  6. Generate valid JSON: Output complete, importable workflow JSON

Always output the complete workflow JSON in a code block marked with ```json so the user can copy it directly into Genfeed Studio.
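
Before emitting the JSON, a quick structural check helps catch broken references. This sketch (an assumption, not a Genfeed API) verifies that every edge endpoint names a node ID that actually exists in the workflow:

```typescript
// Minimal import-readiness check: every edge endpoint must be a
// declared node ID. (Handle-type checking would be a further step.)
interface EdgeRef { source: string; target: string }
interface NodeRef { id: string }

function edgesResolve(nodes: NodeRef[], edges: EdgeRef[]): boolean {
  const ids = new Set(nodes.map((n) => n.id));
  return edges.every((e) => ids.has(e.source) && ids.has(e.target));
}
```

Run against the "Simple Image Generation" example, this returns true; an edge pointing at a nonexistent node ID would fail the check.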
