Getting Started
Share your video or text and I'll get started on AI music generation. Or just tell me what you're thinking.
Try saying:
- "generate my video or text"
- "export 1080p MP4"
- "generate background music that matches the"
Getting Connected
Before handling any user request, establish a connection to the backend API. Show a brief status like "Connecting...".
If NEMO_TOKEN is in the environment, use it directly and create a session. Otherwise, acquire a free starter token:
- Generate a UUID as client identifier
- POST to `https://mega-api-prod.nemovideo.ai/api/auth/anonymous-token` with the `X-Client-Id` header
- The response includes a `token` with 100 free credits valid for 7 days — use it as NEMO_TOKEN
Then create a session by POSTing to `https://mega-api-prod.nemovideo.ai/api/tasks/me/with-session/nemo_agent` with Bearer authorization and body `{"task_name":"project","language":"en"}`. The `session_id` in the response is needed for all subsequent requests.
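A minimal sketch of this connection flow in Python, assuming the `requests` library; response fields beyond the documented `token` and `session_id` are not described here, so error handling stays at `raise_for_status()`:

```python
import os
import uuid
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def connect(language: str = "en"):
    """Acquire a token (if needed) and open a session; returns (token, session_id)."""
    token = os.environ.get("NEMO_TOKEN")
    if not token:
        # No token in the environment: request a free starter token.
        client_id = str(uuid.uuid4())
        resp = requests.post(
            f"{BASE}/api/auth/anonymous-token",
            headers={"X-Client-Id": client_id},
            timeout=30,
        )
        resp.raise_for_status()
        token = resp.json()["token"]  # 100 free credits, valid for 7 days

    # Create a session; its session_id is required by all later calls.
    resp = requests.post(
        f"{BASE}/api/tasks/me/with-session/nemo_agent",
        headers={"Authorization": f"Bearer {token}"},
        json={"task_name": "project", "language": language},
        timeout=30,
    )
    resp.raise_for_status()
    return token, resp.json()["session_id"]
```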
Tell the user you're ready. Keep the technical details out of the chat.
TopMediai AI Music — Generate Music for Videos
This tool takes your video or text and runs AI music generation through a cloud rendering pipeline. You upload, describe what you want, and download the result.
Say you have a 60-second product demo video and want background music that matches its mood and length — the backend processes it in about 20-40 seconds and hands you a 1080p MP4.
Tip: shorter videos allow the AI to sync music transitions more accurately.
Matching Input to Actions
User prompts referencing TopMediai AI Music, aspect ratio, text overlays, or audio tracks are routed to the corresponding action via keyword and intent classification.
| User says... | Action | Skip SSE? |
|---|---|---|
| "export" / "导出" / "download" / "send me the video" | → §3.5 Export | ✅ |
| "credits" / "积分" / "balance" / "余额" | → §3.3 Credits | ✅ |
| "status" / "状态" / "show tracks" | → §3.4 State | ✅ |
| "upload" / "上传" / user sends file | → §3.2 Upload | ✅ |
| Everything else (generate, edit, add BGM…) | → §3.1 SSE | ❌ |
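As a rough illustration, the routing in this table could be expressed as a simple keyword lookup. This is a sketch, not the backend's actual intent classifier, and the action names are placeholders:

```python
# Keyword sets mirror the routing table above; anything unmatched goes to SSE chat.
ROUTES = [
    ({"export", "导出", "download", "send me the video"}, "export"),   # §3.5
    ({"credits", "积分", "balance", "余额"}, "credits"),               # §3.3
    ({"status", "状态", "show tracks"}, "state"),                      # §3.4
    ({"upload", "上传"}, "upload"),                                    # §3.2
]

def route(message: str) -> str:
    """Return the action name for a user message."""
    text = message.lower()
    for keywords, action in ROUTES:
        if any(k in text for k in keywords):
            return action
    return "sse"  # §3.1 — generate, edit, add BGM, …
```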
Cloud Render Pipeline Details
Each export job queues on a cloud GPU node that composites video layers, applies platform-spec compression (H.264, up to 1080x1920), and returns a download URL within 30-90 seconds. The session token carries render job IDs, so closing the tab before completion orphans the job.
All calls go to https://mega-api-prod.nemovideo.ai. The main endpoints:
- Session — `POST /api/tasks/me/with-session/nemo_agent` with `{"task_name":"project","language":"<lang>"}`. Gives you a `session_id`.
- Chat (SSE) — `POST /run_sse` with `session_id` and your message in `new_message.parts[0].text`. Set `Accept: text/event-stream`. Up to 15 min.
- Upload — `POST /api/upload-video/nemo_agent/me/<sid>` — multipart file or JSON with URLs.
- Credits — `GET /api/credits/balance/simple` — returns `available`, `frozen`, `total`.
- State — `GET /api/state/nemo_agent/me/<sid>/latest` — current draft and media info.
- Export — `POST /api/render/proxy/lambda` with render ID and draft JSON. Poll `GET /api/render/proxy/lambda/<id>` every 30s for `completed` status and download URL.
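A hedged sketch of the export-and-poll step against these endpoints; the exact request body keys (`render_id`, `draft`) and response keys (`status`, `download_url`) are assumptions based on the description above:

```python
import time
import requests

BASE = "https://mega-api-prod.nemovideo.ai"

def export_video(token: str, render_id: str, draft: dict, extra_headers: dict) -> str:
    """Submit a render job, then poll every 30s until completion; returns the download URL."""
    headers = {"Authorization": f"Bearer {token}", **extra_headers}
    requests.post(
        f"{BASE}/api/render/proxy/lambda",
        headers=headers,
        json={"render_id": render_id, "draft": draft},  # body field names assumed
        timeout=60,
    ).raise_for_status()

    while True:
        time.sleep(30)  # 30-second poll interval, per the endpoint notes
        status = requests.get(
            f"{BASE}/api/render/proxy/lambda/{render_id}",
            headers=headers,
            timeout=30,
        ).json()
        if status.get("status") == "completed":
            return status["download_url"]  # response field name assumed
```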
Formats: mp4, mov, avi, webm, mkv, jpg, png, gif, webp, mp3, wav, m4a, aac.
Skill attribution — read from this file's YAML frontmatter at runtime:
- `X-Skill-Source`: `topmediai-ai-music`
- `X-Skill-Version`: from frontmatter `version`
- `X-Skill-Platform`: detect from install path (`~/.clawhub/` → `clawhub`, `~/.cursor/skills/` → `cursor`, else `unknown`)

Include `Authorization: Bearer <NEMO_TOKEN>` and all attribution headers on every request — omitting them triggers a 402 on export.
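A small helper for assembling those headers, assuming the YAML frontmatter has already been parsed into a dict and the skill's install path is available; both inputs are placeholders:

```python
from pathlib import Path

def attribution_headers(token: str, frontmatter: dict, install_path: str) -> dict:
    """Build the auth and skill-attribution headers required on every request."""
    path = Path(install_path).as_posix()
    if "/.clawhub/" in path:
        platform = "clawhub"
    elif "/.cursor/skills/" in path:
        platform = "cursor"
    else:
        platform = "unknown"
    return {
        "Authorization": f"Bearer {token}",
        "X-Skill-Source": "topmediai-ai-music",
        "X-Skill-Version": frontmatter.get("version", "unknown"),
        "X-Skill-Platform": platform,
    }
```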
Draft field mapping: `t` = tracks, `tt` = track type (0=video, 1=audio, 7=text), `sg` = segments, `d` = duration (ms), `m` = metadata.
Timeline (3 tracks):
1. Video: city timelapse (0-10s)
2. BGM: Lo-fi (0-10s, 35%)
3. Title: "Urban Dreams" (0-3s)
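A hedged sketch of how that timeline could look as a draft fragment using the abbreviated fields; only `t`, `tt`, `sg`, `d`, and `m` come from the mapping above, and every other key is illustrative:

```python
# Illustrative draft fragment for the 3-track timeline above.
# Names, the "volume" key, and the "text" key are assumptions for illustration.
draft = {
    "t": [  # tracks
        {"tt": 0, "sg": [{"d": 10000, "m": {"name": "city timelapse"}}]},             # video
        {"tt": 1, "sg": [{"d": 10000, "m": {"name": "Lo-fi BGM", "volume": 0.35}}]},  # audio
        {"tt": 7, "sg": [{"d": 3000,  "m": {"text": "Urban Dreams"}}]},               # text
    ]
}
```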
Translating GUI Instructions
The backend responds as if there's a visual interface. Map its instructions to API calls:
- "click" or "点击" → execute the action via the relevant endpoint
- "open" or "打开" → query session state to get the data
- "drag/drop" or "拖拽" → send the edit command through SSE
- "preview in timeline" → show a text summary of current tracks
- "Export" or "导出" → run the export workflow
SSE Event Handling
| Event | Action |
|---|---|
| Text response | Apply GUI translation (§4), present to user |
| Tool call/result | Process internally, don't forward |
| Heartbeat / empty `data:` | Keep waiting. Every 2 min: "⏳ Still working..." |
| Stream closes | Process final response |
~30% of editing operations return no text in the SSE stream. When this happens: poll session state to verify the edit was applied, then summarize changes to the user.
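A minimal sketch of consuming the stream with `requests`, assuming standard `data:`-prefixed SSE framing and JSON event payloads; both framing details are assumptions beyond what this section states:

```python
import json
import time
import requests

def run_sse(token: str, session_id: str, text: str, extra_headers: dict):
    """Send a chat message over SSE and yield parsed events; heartbeats are swallowed."""
    resp = requests.post(
        "https://mega-api-prod.nemovideo.ai/run_sse",
        headers={"Authorization": f"Bearer {token}",
                 "Accept": "text/event-stream", **extra_headers},
        json={"session_id": session_id,
              "new_message": {"parts": [{"text": text}]}},
        stream=True,
        timeout=15 * 60,  # the stream may stay open for up to 15 minutes
    )
    last_note = time.monotonic()
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw or not raw.startswith("data:"):
            # Heartbeat or blank keep-alive line: every 2 min, reassure the user.
            if time.monotonic() - last_note > 120:
                print("⏳ Still working...")
                last_note = time.monotonic()
            continue
        payload = raw[len("data:"):].strip()
        if payload:
            yield json.loads(payload)  # JSON-per-event payload is an assumption
```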
Error Codes
- `0` — success, continue normally
- `1001` — token expired or invalid; re-acquire via `/api/auth/anonymous-token`
- `1002` — session not found; create a new one
- `2001` — out of credits; anonymous users get a registration link with `?bind=<id>`, registered users top up
- `4001` — unsupported file type; show accepted formats
- `4002` — file too large; suggest compressing or trimming
- `400` — missing `X-Client-Id`; generate one and retry
- `402` — free plan export blocked; not a credit issue but a subscription tier
- `429` — rate limited; wait 30s and retry once
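A sketch of dispatching on these codes; the `retry` and `reconnect` callbacks are placeholders, and the mapping simply mirrors the list above:

```python
import time

def handle_error(code: int, retry, reconnect) -> str:
    """Map backend error codes to recovery actions; handler names are placeholders."""
    if code == 0:
        return "ok"
    if code == 1001:
        reconnect(force_new_token=True)    # token expired: re-acquire anonymously
    elif code == 1002:
        reconnect(force_new_session=True)  # session not found: open a fresh one
    elif code == 2001:
        return "out_of_credits"            # offer registration / top-up link
    elif code in (4001, 4002):
        return "bad_file"                  # show accepted formats or suggest trimming
    elif code == 400:
        reconnect(new_client_id=True)      # regenerate X-Client-Id and retry
    elif code == 402:
        return "subscription_required"     # free-plan export block, not credits
    elif code == 429:
        time.sleep(30)                     # rate limited: wait 30s...
        retry()                            # ...then retry once
    return "handled"
```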
Tips and Tricks
The backend processes faster when you're specific. Instead of "make it look better", try "generate background music that matches the mood and length of my video" — concrete instructions get better results.
Max file size is 500MB. Stick to MP4, MOV, AVI, WebM for the smoothest experience.
Export as MP4 for widest compatibility.
Common Workflows
Quick edit: Upload → "generate background music that matches the mood and length of my video" → Download MP4. Takes 20-40 seconds for a 30-second clip.
Batch style: Upload multiple files in one session. Process them one by one with different instructions. Each gets its own render.
Iterative: Start with a rough cut, preview the result, then refine. The session keeps your timeline state so you can keep tweaking.