NanoGPT Skill
Use this skill when the task should run through the NanoGPT API from a local terminal environment. NanoGPT’s official docs describe it as an API for text, image, and video generation, with text generation generally matching OpenAI standards. This repository is the local CLI and skill wrapper for that API, not a generic prompt helper. Prefer the bundled wrapper scripts in scripts/ so OpenClaw and direct CLI usage share the same behavior.
Official docs: https://docs.nano-gpt.com/
Prerequisite check
Before invoking the skill, ensure the CLI is available:
./scripts/models.sh --json
If that fails because the local CLI is not built yet:
npm install
npm run build
If the repo is not present locally, install the published CLI instead:
npm install -g nano-gpt-cli
Authentication is token-based. This skill requires a NanoGPT API token. Set the token in NANO_GPT_API_KEY:
export NANO_GPT_API_KEY=YOUR_NANO_GPT_TOKEN
Or configure it once and store it in the local nano-gpt-cli user config:
nano-gpt config set api-key YOUR_NANO_GPT_TOKEN
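Since the wrappers read the key from NANO_GPT_API_KEY, a small preflight check can fail fast with a clear message instead of letting a wrapper error out mid-run. This is a sketch: require_api_key is a hypothetical helper, not part of the CLI.

```shell
# Hypothetical preflight: confirm an API key is exported before calling
# any wrapper script. Prints to stderr and returns non-zero when unset.
require_api_key() {
  if [ -z "${NANO_GPT_API_KEY:-}" ]; then
    echo "NANO_GPT_API_KEY is not set; export it or run: nano-gpt config set api-key ..." >&2
    return 1
  fi
}

# Usage: require_api_key && ./scripts/prompt.sh "Summarize the latest build logs."
```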
Optional environment overrides:
export NANO_GPT_MODEL=moonshotai/kimi-k2.5
export NANO_GPT_IMAGE_MODEL=qwen-image
export NANO_GPT_VIDEO_MODEL=kling-video-v2
export NANO_GPT_BASE_URL=https://nano-gpt.com
export NANO_GPT_OUTPUT_FORMAT=text
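These overrides can also be scoped to a single invocation instead of exported for the whole shell. A sketch, assuming the wrappers read the variables at startup (the model name is just the example from above):

```shell
# One-off override: applies only to this command, leaving the shell
# environment and the persistent CLI config untouched.
NANO_GPT_MODEL=moonshotai/kimi-k2.5 \
NANO_GPT_OUTPUT_FORMAT=text \
./scripts/prompt.sh "Summarize the latest build logs."
```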
Quick start
Text prompt:
./scripts/prompt.sh "Summarize the latest build logs."
Streaming multimodal prompt:
./scripts/prompt.sh "Describe this image." --image ./assets/example.png
Interactive chat:
./scripts/chat.sh
Image generation:
./scripts/image.sh "A cinematic product shot of a silver mechanical keyboard" --output output/keyboard.png
Image-to-image generation:
./scripts/image.sh "Turn this product photo into a watercolor ad" --image ./assets/product.png --output output/product-watercolor.png
Video generation:
./scripts/video.sh "A cinematic drone flyover of a neon coastal city at dusk" --duration 5 --output output/neon-city.mp4
Workflow
- Use scripts/prompt.sh for one-shot text or vision prompts.
- Use scripts/chat.sh for iterative back-and-forth.
- Use scripts/image.sh for text-to-image or image-to-image generation.
- Use scripts/video.sh for text-to-video or image-to-video generation.
- Use nano-gpt video-status REQUEST_ID when a video run is asynchronous and needs a later status check.
- Use scripts/models.sh --json when model discovery matters.
- Prefer flags over editing scripts. The wrappers should stay thin.
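For the asynchronous video case, a polling loop can sit on top of nano-gpt video-status. This is a sketch under assumptions: wait_for_video is a hypothetical helper (not a bundled script), and the status strings matched below are guesses, since this document does not specify what video-status prints.

```shell
# Hypothetical polling helper for asynchronous video runs. Assumes
# `nano-gpt video-status REQUEST_ID` prints a status string containing
# "completed" or "failed"; adjust the patterns to the real output.
wait_for_video() {
  local id="$1" tries="${2:-60}" status
  while [ "$tries" -gt 0 ]; do
    status=$(nano-gpt video-status "$id")
    case "$status" in
      *completed*) echo "done";   return 0 ;;
      *failed*)    echo "failed"; return 1 ;;
    esac
    tries=$((tries - 1))
    sleep 10
  done
  echo "timed out"
  return 1
}

# Usage: wait_for_video REQUEST_ID
```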
References
Open only what you need:
- Command reference: references/cli.md
- Common OpenClaw workflows: references/workflows.md
Guardrails
- Prefer the wrapper scripts over calling NanoGPT HTTP APIs directly.
- Only use this skill when the user wants to call the NanoGPT API.
- Keep secrets out of prompts and logs; use config or env vars for API keys.
- Only upload local images or videos when the user explicitly provides the path or clearly asks to use that specific file.
- Do not search the filesystem for media to upload.
- Treat local --image and --video inputs as remote-upload actions. Do not send sensitive screenshots, exports, documents, or recordings unless the user explicitly requests it.
- Prompts and any provided media are sent to the configured NanoGPT API endpoint, which defaults to https://nano-gpt.com.
- Use --json when another tool or agent will parse the output.
- Use --output on scripts/image.sh when a file artifact is required.
- Use --output on scripts/video.sh when the final MP4 should be written locally.
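When --json output feeds another tool, filter it rather than scraping text. A sketch assuming jq is installed and that the models payload is an array of objects with an "id" field; this document does not specify the shape, so verify against real ./scripts/models.sh --json output.

```shell
# Hypothetical filter: pull one field out of JSON on stdin with jq.
# The '.[].id' path assumes an array of objects carrying an "id" key.
list_model_ids() {
  jq -r '.[].id'
}

# Usage (field name is an assumption):
#   ./scripts/models.sh --json | list_model_ids
```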