daggr

Build visual DAG pipelines connecting Gradio Spaces, HF Inference Providers, and Python functions.


Install the skill with:

```shell
npx skills add gradio-app/daggr/gradio-app-daggr-daggr
```


Full docs: https://raw.githubusercontent.com/gradio-app/daggr/refs/heads/main/README.md

Quick Start

```python
from daggr import GradioNode, FnNode, InferenceNode, Graph, ItemList
import gradio as gr

graph = Graph(name="My Workflow", nodes=[node1, node2, ...])
graph.launch()  # Starts web server with visual DAG UI
```

Node Types

GradioNode - Gradio Spaces

```python
node = GradioNode(
    space_or_url="owner/space-name",
    api_name="/endpoint",
    inputs={
        "param": gr.Textbox(label="Input"),  # UI input
        "other": other_node.output_port,     # Port connection
        "fixed": "constant_value",           # Fixed value
    },
    postprocess=lambda *returns: returns[0],  # Transform response
    outputs={"result": gr.Image(label="Output")},
)
```

Example: image generation

```python
img = GradioNode(
    "Tongyi-MAI/Z-Image-Turbo",
    api_name="/generate",
    inputs={"prompt": gr.Textbox(), "resolution": "1024x1024 ( 1:1 )"},
    postprocess=lambda imgs, *_: imgs[0]["image"],
    outputs={"image": gr.Image()},
)
```

Find Spaces with semantic queries (describe what you need): https://huggingface.co/api/spaces/semantic-search?q=generate+music+for+a+video&sdk=gradio&includeNonRunning=false

Or by category: https://huggingface.co/api/spaces/semantic-search?category=image-generation&sdk=gradio&includeNonRunning=false

(categories: image-generation | video-generation | text-generation | speech-synthesis | music-generation | voice-cloning | image-editing | background-removal | image-upscaling | ocr | style-transfer | image-captioning)
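These search URLs are plain GET requests, so they can be assembled with the standard library. A minimal sketch (the helper name is hypothetical; the endpoint and query parameters are the ones shown above):

```python
from urllib.parse import urlencode

def spaces_search_url(query: str, sdk: str = "gradio", include_non_running: bool = False) -> str:
    # Hypothetical helper: builds the semantic-search URL shown above
    params = {
        "q": query,
        "sdk": sdk,
        "includeNonRunning": str(include_non_running).lower(),
    }
    return "https://huggingface.co/api/spaces/semantic-search?" + urlencode(params)

url = spaces_search_url("generate music for a video")
```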

FnNode - Python Functions

```python
def process(input1: str, input2: int) -> str:
    return f"{input1}: {input2}"

node = FnNode(
    fn=process,
    inputs={"input1": gr.Textbox(), "input2": other_node.port},
    outputs={"result": gr.Textbox()},
)
```

InferenceNode - HF Inference Providers

Find models: https://huggingface.co/api/models?inference_provider=all&pipeline_tag=text-to-image

(swap pipeline_tag: text-to-image | image-to-image | image-to-text | image-to-video | text-to-video | text-to-speech | automatic-speech-recognition)

VLM/LLM models: https://router.huggingface.co/v1/models

```python
node = InferenceNode(
    model="org/model:provider",  # model:provider (fal-ai, replicate, together, etc.)
    inputs={"image": other_node.image, "prompt": gr.Textbox()},
    outputs={"image": gr.Image()},
)
```
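The model string bundles the model id and the provider into one value. As a plain-Python illustration of the format (the helper is mine, not part of daggr):

```python
def split_model_spec(spec: str):
    # "org/model:provider" -> ("org/model", "provider"); the provider may be omitted
    model, _, provider = spec.partition(":")
    return model, provider or None

split_model_spec("org/model:fal-ai")  # ("org/model", "fal-ai")
```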

Auth: InferenceNode and ZeroGPU Spaces require an HF token. If one is not set in the environment, ask the user to create one: https://huggingface.co/settings/tokens/new?ownUserPermissions=inference.serverless.write&tokenType=fineGrained

Out of quota? Pro gives 8x ZeroGPU + 10x inference: https://huggingface.co/subscribe/pro

Port Connections

Pass ports via inputs={...}:

```python
inputs={"param": previous_node.output_port}   # Basic connection
inputs={"item": items_node.items.field_name}  # Scattered (per-item)
inputs={"all": scattered_node.output.all()}   # Gathered (collect list)
```
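To make the three connection kinds concrete, here is a plain-Python sketch of the semantics (a simplified model for illustration, not daggr's actual scheduler): a scattered port runs the downstream function once per item, and .all() gathers those results back into a single list for one downstream call.

```python
def run_scattered(items, fn):
    # Scattered connection: the downstream node runs once per item
    return [fn(item) for item in items]

def run_gathered(results, fn):
    # Gathered connection (.all()): downstream receives the whole list at once
    return fn(results)

items = [{"text": f"Item {i}"} for i in range(3)]
per_item = run_scattered(items, lambda it: it["text"].upper())
combined = run_gathered(per_item, ", ".join)  # "ITEM 0, ITEM 1, ITEM 2"
```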

ItemList - Dynamic Lists

```python
def gen_items(n: int) -> list:
    return [{"text": f"Item {i}"} for i in range(n)]

items = FnNode(fn=gen_items, outputs={"items": ItemList(text=gr.Textbox())})

# Runs once per item
process = FnNode(fn=process_item, inputs={"text": items.items.text}, outputs={"result": gr.Textbox()})

# Collect all results
final = FnNode(fn=combine, inputs={"all": process.result.all()}, outputs={"out": gr.Textbox()})
```

Checklist

Check the API before using a Space:

```shell
curl -s "https://<space-subdomain>.hf.space/gradio_api/openapi.json"
```

Replace <space-subdomain> with the Space's subdomain (e.g., Tongyi-MAI/Z-Image-Turbo → tongyi-mai-z-image-turbo). Spaces also have a "Use via API" link in the footer with endpoints and code snippets.
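The subdomain mapping (lowercase, non-alphanumerics collapsed to hyphens) can be approximated in a few lines; treat this as a convenience sketch, since Hugging Face owns the exact rules for edge cases:

```python
import re

def space_subdomain(space_id: str) -> str:
    # Approximation: lowercase, runs of non-alphanumeric characters become single hyphens
    return re.sub(r"[^a-z0-9]+", "-", space_id.lower()).strip("-")

api_url = f"https://{space_subdomain('Tongyi-MAI/Z-Image-Turbo')}.hf.space/gradio_api/openapi.json"
```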

Handle files (Gradio returns dicts):

```python
path = file.get("path") if isinstance(file, dict) else file
```
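That guard is worth wrapping in a small helper (the name as_path is mine, not daggr's) so each function does not repeat it:

```python
def as_path(file):
    # Gradio file outputs arrive either as plain path strings
    # or as dicts with a "path" key (plus "url" and other metadata)
    return file.get("path") if isinstance(file, dict) else file
```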

Use postprocess for multi-return APIs:

```python
postprocess=lambda imgs, seed, num: imgs[0]["image"]
```
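Since postprocess receives the endpoint's return values as positional arguments, it can be exercised on its own with simulated data (the tuple below is made up for illustration, not a real API response):

```python
postprocess = lambda imgs, seed, num: imgs[0]["image"]

# Simulated multi-return response: (gallery, seed, image_count)
fake_gallery = [{"image": "img_0.png"}, {"image": "img_1.png"}]
first_image = postprocess(fake_gallery, 42, 2)  # "img_0.png"
```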

Debug with .test() to validate a node in isolation:

```python
node.test(param="value")
```

Common Patterns

Image Generation

```python
GradioNode(
    "Tongyi-MAI/Z-Image-Turbo",
    api_name="/generate",
    inputs={"prompt": gr.Textbox(), "resolution": "1024x1024 ( 1:1 )"},
    postprocess=lambda imgs, *_: imgs[0]["image"],
    outputs={"image": gr.Image()},
)
```

Text-to-Speech

```python
GradioNode(
    "Qwen/Qwen3-TTS",
    api_name="/generate_voice_design",
    inputs={"text": gr.Textbox(), "language": "English", "voice_description": "..."},
    postprocess=lambda audio, status: audio,
    outputs={"audio": gr.Audio()},
)
```

Image-to-Video

```python
GradioNode(
    "alexnasa/ltx-2-TURBO",
    api_name="/generate_video",
    inputs={"input_image": img.image, "prompt": gr.Textbox(), "duration": 5},
    postprocess=lambda video, seed: video,
    outputs={"video": gr.Video()},
)
```

ffmpeg composition (import tempfile, subprocess)

```python
import tempfile, subprocess

def combine(video: str | dict, audio: str | dict) -> str:
    v = video.get("path") if isinstance(video, dict) else video
    a = audio.get("path") if isinstance(audio, dict) else audio
    out = tempfile.mktemp(suffix=".mp4")
    subprocess.run(["ffmpeg", "-y", "-i", v, "-i", a, "-shortest", out])
    return out
```

Run

```shell
uvx --python 3.12 daggr workflow.py &  # Launch in background, hot reloads on file changes
```

Authentication

Local development: Use hf auth login or set HF_TOKEN env var. This enables ZeroGPU quota tracking, private Spaces access, and gated models.

Deployed Spaces: Users can click "Login" in the UI and paste their HF token. This enables persistence (sheets) so they can save outputs and resume work later. The token is stored in browser localStorage.

When deploying: Pass secrets via --secret HF_TOKEN=xxx if your workflow needs server-side auth (e.g., for gated models in FnNode). Warning: this uses the deployer's token for all users.

Deploy to Hugging Face Spaces

Only deploy if the user has explicitly asked to publish/deploy their workflow.

```shell
daggr deploy workflow.py
```

This extracts the Graph, creates a Space named after it, and uploads everything.

Options:

```shell
daggr deploy workflow.py --name my-space       # Custom Space name
daggr deploy workflow.py --org huggingface     # Deploy to an organization
daggr deploy workflow.py --private             # Private Space
daggr deploy workflow.py --hardware t4-small   # GPU (t4-small, t4-medium, a10g-small, etc.)
daggr deploy workflow.py --secret KEY=value    # Add secrets (repeatable)
daggr deploy workflow.py --dry-run             # Preview without deploying
```
