# DeepSeek OCR
Recognize text in images using the DeepSeek-OCR model.
## Quick start

```shell
{baseDir}/scripts/ocr.sh /path/to/image.jpg
```
## Usage

```shell
{baseDir}/scripts/ocr.sh <image_path> [output_format]
```

Parameters:

- `<image_path>`: Local image file (jpg, png, webp, gif, bmp)
- `[output_format]`: Optional, defaults to `markdown`. Can be `text`, `json`, etc.
## Examples

```shell
# Convert to markdown (default)
{baseDir}/scripts/ocr.sh /path/to/image.jpg

# Convert to plain text
{baseDir}/scripts/ocr.sh /path/to/image.png text

# Extract table as JSON
{baseDir}/scripts/ocr.sh /path/to/table.jpg "extract table as json"
```
## Remote URL images

The model only accepts base64-encoded image input; it cannot fetch remote URLs. Download the image first:

```shell
curl -s -o /tmp/image.jpg "https://example.com/image.jpg"
{baseDir}/scripts/ocr.sh /tmp/image.jpg
```
## API key

Set the `DEEPSEEK_OCR_API_KEY` environment variable, or configure the key in `~/.openclaw/openclaw.json`:
```json
{
  "skills": {
    "deepseek-ocr": {
      "apiKey": "YOUR_KEY_HERE"
    }
  }
}
```
Default API URL: `https://api.modelverse.cn/v1/chat/completions`. Override it with the `DEEPSEEK_OCR_API_URL` environment variable if needed.
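For reference, the request the script presumably sends is an OpenAI-style chat completion with the image inlined as a base64 data URL. This is a hedged sketch of calling the endpoint directly, not the script's actual implementation; the model name `deepseek-ai/DeepSeek-OCR` and the message shape are assumptions:

```shell
# Base64-encode the image with line wrapping stripped (GNU base64 wraps at 76 cols).
IMG_B64=$(base64 < /tmp/image.jpg | tr -d '\n')

# POST to the chat completions endpoint; model name is an assumption.
curl -s "${DEEPSEEK_OCR_API_URL:-https://api.modelverse.cn/v1/chat/completions}" \
  -H "Authorization: Bearer $DEEPSEEK_OCR_API_KEY" \
  -H "Content-Type: application/json" \
  -d @- <<EOF
{
  "model": "deepseek-ai/DeepSeek-OCR",
  "messages": [{
    "role": "user",
    "content": [
      {"type": "image_url",
       "image_url": {"url": "data:image/jpeg;base64,${IMG_B64}"}},
      {"type": "text", "text": "Convert this image to markdown."}
    ]
  }]
}
EOF
```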