# DeerFlow Skill

Communicate with a running DeerFlow instance via its HTTP API. DeerFlow is an AI agent platform built on LangGraph that orchestrates sub-agents for research, code execution, web browsing, and more.
## Architecture

DeerFlow exposes two API surfaces behind an Nginx reverse proxy:

| Service | Direct Port | Via Proxy | Purpose |
|---|---|---|---|
| Gateway API | 8001 | `$DEERFLOW_GATEWAY_URL` | REST endpoints (models, skills, memory, uploads) |
| LangGraph API | 2024 | `$DEERFLOW_LANGGRAPH_URL` | Agent threads, runs, streaming |
## Environment Variables

All URLs are configurable via environment variables. Read these env vars before making any request.

| Variable | Default | Description |
|---|---|---|
| `DEERFLOW_URL` | `http://localhost:2026` | Unified proxy base URL |
| `DEERFLOW_GATEWAY_URL` | `${DEERFLOW_URL}` | Gateway API base (models, skills, memory, uploads) |
| `DEERFLOW_LANGGRAPH_URL` | `${DEERFLOW_URL}/api/langgraph` | LangGraph API base (threads, runs) |
When making curl calls, always resolve the base URLs from the environment first:

```shell
# Resolve base URLs from env (do this FIRST, before any API call)
DEERFLOW_URL="${DEERFLOW_URL:-http://localhost:2026}"
DEERFLOW_GATEWAY_URL="${DEERFLOW_GATEWAY_URL:-$DEERFLOW_URL}"
DEERFLOW_LANGGRAPH_URL="${DEERFLOW_LANGGRAPH_URL:-$DEERFLOW_URL/api/langgraph}"
```
## Available Operations

### Health Check

Verify DeerFlow is running:

```shell
curl -s "$DEERFLOW_GATEWAY_URL/health"
```
### Send a Message (Streaming)

This is the primary operation. It creates a thread and streams the agent's response.

**Step 1: Create a thread**

```shell
curl -s -X POST "$DEERFLOW_LANGGRAPH_URL/threads" \
  -H "Content-Type: application/json" \
  -d '{}'
```

Response: `{"thread_id": "<uuid>", ...}`
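Downstream calls need the `thread_id` from this response. A minimal extraction sketch, preferring `jq` with a `sed` fallback; the canned `sample` below stands in for a live response:

```shell
# Extract thread_id from a create-thread response. With a live server,
# `sample` would instead be the output of the curl call above.
sample='{"thread_id":"2b7c0f2e-1111-4d4e-9a1a-0123456789ab","created_at":"2024-01-01T00:00:00Z"}'

# Prefer jq when available; the sed fallback only suits flat JSON like this.
if command -v jq >/dev/null 2>&1; then
  thread_id=$(printf '%s' "$sample" | jq -r '.thread_id')
else
  thread_id=$(printf '%s' "$sample" | sed -n 's/.*"thread_id":"\([^"]*\)".*/\1/p')
fi
echo "$thread_id"
```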
**Step 2: Stream a run**

```shell
curl -s -N -X POST "$DEERFLOW_LANGGRAPH_URL/threads/<thread_id>/runs/stream" \
  -H "Content-Type: application/json" \
  -d '{
    "assistant_id": "lead_agent",
    "input": {
      "messages": [
        {
          "type": "human",
          "content": [{"type": "text", "text": "YOUR MESSAGE HERE"}]
        }
      ]
    },
    "stream_mode": ["values", "messages-tuple"],
    "stream_subgraphs": true,
    "config": {
      "recursion_limit": 1000
    },
    "context": {
      "thinking_enabled": true,
      "is_plan_mode": true,
      "subagent_enabled": true,
      "thread_id": "<thread_id>"
    }
  }'
```
The response is an SSE stream. Each event has the format:

```
event: <event_type>
data: <json_data>
```

Key event types:

- `metadata`: run metadata, including the `run_id`
- `values`: full state snapshot with the `messages` array
- `messages-tuple`: incremental message updates (AI text chunks, tool calls, tool results)
- `end`: stream is complete
Context modes (set via the `context` object):

- **Flash mode**: `thinking_enabled: false`, `is_plan_mode: false`, `subagent_enabled: false`
- **Standard mode**: `thinking_enabled: true`, `is_plan_mode: false`, `subagent_enabled: false`
- **Pro mode**: `thinking_enabled: true`, `is_plan_mode: true`, `subagent_enabled: false`
- **Ultra mode**: `thinking_enabled: true`, `is_plan_mode: true`, `subagent_enabled: true`
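The three flags can be bundled into a small helper so a mode name maps to the right `context` payload. A sketch; the `mode_context` function and the mode names are this skill's shorthand, not part of DeerFlow's API:

```shell
# Build the "context" JSON for a named mode plus a thread_id.
mode_context() {
  case "$1" in
    flash)    printf '{"thinking_enabled": false, "is_plan_mode": false, "subagent_enabled": false, "thread_id": "%s"}' "$2" ;;
    standard) printf '{"thinking_enabled": true, "is_plan_mode": false, "subagent_enabled": false, "thread_id": "%s"}' "$2" ;;
    pro)      printf '{"thinking_enabled": true, "is_plan_mode": true, "subagent_enabled": false, "thread_id": "%s"}' "$2" ;;
    ultra)    printf '{"thinking_enabled": true, "is_plan_mode": true, "subagent_enabled": true, "thread_id": "%s"}' "$2" ;;
    *) echo "unknown mode: $1" >&2; return 1 ;;
  esac
}

mode_context ultra "abc-123"
```

The output can be spliced into the `-d` payload of the stream call above.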
### Continue a Conversation

To send follow-up messages, reuse the `thread_id` returned in step 1 and POST another run with the new message.
### List Models

```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/models"
```

Returns: `{"models": [{"name": "...", "provider": "...", ...}, ...]}`
### List Skills

```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/skills"
```

Returns: `{"skills": [{"name": "...", "enabled": true, ...}, ...]}`
### Enable/Disable a Skill

```shell
curl -s -X PUT "$DEERFLOW_GATEWAY_URL/api/skills/<skill_name>" \
  -H "Content-Type: application/json" \
  -d '{"enabled": true}'
```
### List Agents

```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/agents"
```

Returns: `{"agents": [{"name": "...", ...}, ...]}`
### Get Memory

```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/memory"
```

Returns user context, facts, and conversation history summaries.
### Upload Files to a Thread

```shell
curl -s -X POST "$DEERFLOW_GATEWAY_URL/api/threads/<thread_id>/uploads" \
  -F "files=@/path/to/file.pdf"
```

Supports PDF, PPTX, XLSX, and DOCX; uploaded files are automatically converted to Markdown.
### List Uploaded Files

```shell
curl -s "$DEERFLOW_GATEWAY_URL/api/threads/<thread_id>/uploads/list"
```
### Get Thread History

```shell
curl -s "$DEERFLOW_LANGGRAPH_URL/threads/<thread_id>/history"
```
### List Threads

```shell
curl -s -X POST "$DEERFLOW_LANGGRAPH_URL/threads/search" \
  -H "Content-Type: application/json" \
  -d '{"limit": 20, "sort_by": "updated_at", "sort_order": "desc"}'
```
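To act on the results programmatically, pull out the thread IDs. A sketch that assumes the response is a JSON array of thread objects (verify the exact shape against your instance); the canned `resp` stands in for live output:

```shell
# Canned stand-in for a /threads/search response (assumed shape: a JSON
# array of thread objects; check what your instance actually returns).
resp='[{"thread_id":"t-1","updated_at":"2024-06-01"},{"thread_id":"t-2","updated_at":"2024-05-30"}]'

# One thread_id per line. With jq this would be `jq -r '.[].thread_id'`;
# the grep/sed pair avoids the dependency for flat JSON like this.
ids=$(printf '%s' "$resp" | grep -o '"thread_id":"[^"]*"' | sed 's/"thread_id":"\(.*\)"/\1/')
echo "$ids"
```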
## Usage Script

For sending messages and collecting the full response, use the helper script:

```shell
bash /path/to/skills/claude-to-deerflow/scripts/chat.sh "Your question here"
```

See `scripts/chat.sh` for the implementation. The script:

- Checks health
- Creates a thread
- Streams the run and collects the final AI response
- Prints the result
## Parsing SSE Output

The stream returns SSE events. To extract the final AI response from a `values` event:

- Look for the last `event: values` block
- Parse its `data` JSON
- The `messages` array contains all messages; the last one with `type: "ai"` is the response
- The `content` field of that message is the AI's text reply
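The steps above can be sketched in shell. The canned `stream` is illustrative only (real payloads carry more fields, where `jq` is the robust choice; the `sed` here only works because the sample is flat):

```shell
# Canned SSE stream standing in for live output of the runs/stream call.
stream='event: metadata
data: {"run_id":"run-1"}

event: values
data: {"messages":[{"type":"human","content":"hi"},{"type":"ai","content":"Hello! How can I help?"}]}

event: end
data: {}'

# Steps 1-2: keep the data payload of the last `values` event.
last_values=$(printf '%s\n' "$stream" | awk '/^event: values/ {getline; payload=$0} END {sub(/^data: /, "", payload); print payload}')

# Steps 3-4: pull the content of the final "ai" message (the greedy .* lands
# on the last match in the line).
reply=$(printf '%s' "$last_values" | sed -n 's/.*"type":"ai","content":"\([^"]*\)".*/\1/p')
echo "$reply"
```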
## Error Handling

- If the health check fails, DeerFlow is not running. Inform the user that they need to start it.
- If the stream returns an error event, extract and display the error message.
- Common issues: port not open, services still starting up, config errors.
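A bounded retry helps with the "services still starting up" case. A sketch; `wait_for_deerflow` is our name, not part of DeerFlow, and the probe command is a parameter so the loop can be exercised without a live instance:

```shell
# Retry a health probe up to N times, one second apart.
wait_for_deerflow() {
  probe=$1
  attempts=${2:-5}
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if sh -c "$probe" >/dev/null 2>&1; then
      echo "healthy"
      return 0
    fi
    i=$((i + 1))
    sleep 1
  done
  echo "unreachable after $attempts attempts" >&2
  return 1
}

# Real usage, with the env vars resolved as shown earlier (double quotes so
# the URL expands at call time):
# wait_for_deerflow "curl -sf $DEERFLOW_GATEWAY_URL/health" 10
```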
## Tips

- For quick questions, use flash mode (fastest, no planning).
- For research tasks, use pro or ultra mode (enables planning and sub-agents).
- You can upload files first, then reference them in your message.
- Thread IDs persist; you can return to a conversation later.