local-gmncode-vision-pro
Use this skill when the basic single-image fallback is not enough and the task requires production-grade image understanding.
Core scripts
- Batch processing: `/home/ubuntu/.openclaw/workspace/skills/local-gmncode-vision-pro/scripts/vision_batch.py`
- Structured JSON output: `/home/ubuntu/.openclaw/workspace/skills/local-gmncode-vision-pro/scripts/vision_json.py`
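For orientation, a result from `vision_json.py` might look like the sketch below. The field names and shape are illustrative assumptions, not the script's confirmed schema:

```json
{
  "observations": ["two figures", "night scene"],
  "hypotheses": [
    {"label": "concert photo", "confidence": 0.7},
    {"label": "theater stage", "confidence": 0.2}
  ],
  "uncertain": true
}
```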
Workflow
- Prefer the built-in `image` tool if it is healthy and available.
- If `image` fails or needs more control, use the Pro scripts.
- For multi-image work, use `vision_batch.py`.
- For agent/tool pipelines, use `vision_json.py` to get machine-readable output.
- If results are uncertain, say so explicitly and return best-effort ranked hypotheses.
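The routing above can be sketched as follows. `run_builtin_image_tool` and `run_pro_script` are hypothetical stand-ins for the real tool and script invocations; only the fallback logic is the point here.

```python
from typing import List


def run_builtin_image_tool(path: str) -> str:
    # Hypothetical stub for the built-in `image` tool; here it simulates
    # an unhealthy tool so the fallback path is exercised.
    raise RuntimeError("image tool unavailable")


def run_pro_script(script: str, paths: List[str]) -> str:
    # Hypothetical stub; the real call would invoke the Pro script
    # (e.g. via subprocess) and return its output.
    return f"{script}:{len(paths)} image(s)"


def analyze_images(paths: List[str]) -> str:
    """Prefer the built-in tool; fall back to the Pro scripts."""
    if len(paths) > 1:
        # Multi-image work goes straight to the batch script.
        return run_pro_script("vision_batch.py", paths)
    try:
        return run_builtin_image_tool(paths[0])  # preferred path
    except RuntimeError:
        # Built-in tool failed or needs more control: use the Pro script.
        return run_pro_script("vision_json.py", paths)
```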
Dependencies
- Environment variable: `GMNCODE_API_KEY`
- Model route: `gpt-5.4`
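A minimal sketch of wiring up these dependencies, assuming the scripts read the key from the environment; the dict shape is an illustrative assumption, not a documented config format:

```python
import os


def load_vision_config() -> dict:
    # GMNCODE_API_KEY and the gpt-5.4 route come from the Dependencies
    # section above; fail fast if the key is missing.
    api_key = os.environ.get("GMNCODE_API_KEY")
    if not api_key:
        raise RuntimeError("GMNCODE_API_KEY is not set")
    return {"api_key": api_key, "model": "gpt-5.4"}
```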
Read when needed
Read this file for packaging, pricing, and promotion: `/home/ubuntu/.openclaw/workspace/skills/local-gmncode-vision-pro/references-go-to-market.md`
Output principles
- Be explicit about uncertainty.
- Separate confirmed observations from inference.
- Prefer structured output for automation.
- Do not overclaim exact character identity when only style-level evidence exists.
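One way to honor these principles in code: keep confirmed observations and inferred hypotheses in separate fields and rank hypotheses by confidence. The class and field names below are assumptions for illustration, not the actual `vision_json.py` schema:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Hypothesis:
    label: str
    confidence: float  # best-effort estimate in [0.0, 1.0]


@dataclass
class VisionResult:
    # Confirmed observations stay separate from inference, per the
    # output principles above.
    observations: List[str] = field(default_factory=list)
    hypotheses: List[Hypothesis] = field(default_factory=list)

    def ranked(self) -> List[Hypothesis]:
        # Return hypotheses ordered from most to least confident.
        return sorted(self.hypotheses, key=lambda h: h.confidence, reverse=True)


result = VisionResult(
    observations=["spiky blond hair", "orange gi"],
    hypotheses=[
        # Style-level match only; per the last principle, do not promote
        # this to an exact character identity claim.
        Hypothesis("shonen protagonist, style-level match", 0.6),
        Hypothesis("specific character identity", 0.3),
    ],
)
```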