local-document-ai-openvino

Parse local PDFs and document images with PaddleOCR-VL or PaddleOCR-VL-1.5 on OpenVINO, then route the structured parse into downstream document-to-data or document-to-code workflows. Use when a user wants private/local document understanding on Intel hardware, layout-preserving OCR, invoice or table extraction, structured JSON/Markdown output, React or HTML scaffolds, or Jupyter notebook generation from screenshots, diagrams, forms, reports, and invoices.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

To install, copy the command below and send it to your AI assistant:

Install skill "local-document-ai-openvino" with this command: npx skills add zhuo-yoyowz/local-document-ai-openvino

Local Document AI with OpenVINO

Use this skill as a local document-to-action pipeline:

  1. Parse the document into a canonical structured representation.
  2. Optionally continue into to-data or to-code.
  3. Save outputs into a predictable artifact folder with traceability.

Read only if needed

Load these references when you need the schema or output contracts:

  • {baseDir}/references/schema.md
  • {baseDir}/references/mode_guide.md
  • {baseDir}/references/output_contracts.md

Primary entrypoints

Use exactly one of these entrypoints:

  • CLI orchestrator: {baseDir}/scripts/run_skill.py
  • Optional local demo UI: {baseDir}/scripts/serve_skill_ui.py

Do not call these implementation scripts directly from the skill:

  • parse_document.py
  • transform_doc_to_data.py
  • transform_doc_to_code.py

Local readiness

Check the environment before processing real documents:

python "{baseDir}/scripts/check_env.py"

Install the base dependencies in a virtual environment:

python -m pip install -r "{baseDir}/requirements.txt"

Install the third-party paddleocr_vl_openvino package only after reviewing the source or wheel and only when you intend to run the real OCR pipeline. Prefer installing from a reviewed local wheel path inside a virtual environment.

Run a quick orchestration smoke test:

python "{baseDir}/scripts/smoke_test.py"

Model assets are discovered from:

  • PADDLEOCR_VL_OPENVINO_MODEL_DIR
  • PADDLEOCR_VL_LAYOUT_MODEL_DIR plus PADDLEOCR_VL_VLM_MODEL_DIR
  • {baseDir}/models/paddleocr-vl-1.5-openvino/
  • {baseDir}/models/paddleocr-vl-openvino/

Allow model auto-download only when the user explicitly approves it.
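The discovery order above can be sketched as a small resolver. This is an illustrative helper, not the skill's actual lookup code; the function name and return shape are assumptions, but the environment variables and folder names match the list above.

```python
import os
from pathlib import Path

def resolve_model_dirs(base_dir: str) -> dict:
    """Hypothetical sketch of the documented model-discovery order."""
    # 1. A single combined model directory via environment variable.
    combined = os.environ.get("PADDLEOCR_VL_OPENVINO_MODEL_DIR")
    if combined:
        return {"combined": combined}
    # 2. Separate layout and VLM directories via environment variables.
    layout = os.environ.get("PADDLEOCR_VL_LAYOUT_MODEL_DIR")
    vlm = os.environ.get("PADDLEOCR_VL_VLM_MODEL_DIR")
    if layout and vlm:
        return {"layout": layout, "vlm": vlm}
    # 3. Bundled model folders, newest version first.
    for name in ("paddleocr-vl-1.5-openvino", "paddleocr-vl-openvino"):
        candidate = Path(base_dir) / "models" / name
        if candidate.is_dir():
            return {"combined": str(candidate)}
    # Nothing found: auto-download requires explicit user approval.
    return {}
```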

Supported modes

parse

Use when the user wants the structured parse only.

Outputs:

  • parsed.json
  • parsed.md
  • result_report.html
  • extracted layout, tables, or figures when available

to-data

Use when the user wants structured extraction, normalization, or document classification.

Typical outputs under task_output/:

  • entities.json
  • kv_pairs.json
  • table_index.json
  • normalized.json
  • structured_record.json
  • traceability.json
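A downstream consumer can pick up whichever of these artifacts a run produced. The helper below is a minimal sketch under the assumption that each artifact is a standalone JSON file in task_output/; the function name is illustrative.

```python
import json
from pathlib import Path

TO_DATA_ARTIFACTS = (
    "entities.json", "kv_pairs.json", "table_index.json",
    "normalized.json", "structured_record.json", "traceability.json",
)

def load_task_output(out_dir: str) -> dict:
    """Collect whichever to-data artifacts exist under task_output/."""
    task_dir = Path(out_dir) / "task_output"
    artifacts = {}
    for name in TO_DATA_ARTIFACTS:
        path = task_dir / name
        if path.exists():
            artifacts[name] = json.loads(path.read_text(encoding="utf-8"))
    return artifacts
```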

to-code

Use when the user wants implementation-oriented output from the parse result.

Supported targets:

  • react
  • html-css
  • json-schema
  • jupyter-notebook

Typical outputs under task_output/:

  • component_map.json
  • field_schema.json
  • ui_blueprint.json
  • notes.md
  • traceability.json
  • target-specific artifacts such as app.jsx, index.html, styles.css, schema.json, notebook.ipynb, or notebook_plan.json

Treat all generated code and notebooks as drafts. Review them before running, publishing, or connecting them to real systems.

Pipeline rules

Always follow these rules:

  1. Prefer local execution.
  2. Always parse first into parsed.json.
  3. Generate downstream artifacts from parsed.json, not raw OCR text alone.
  4. Preserve page numbers, reading order, block types, and source anchors when possible.
  5. Write traceability for downstream outputs.
  6. Mark low-confidence regions or assumptions explicitly.
  7. Do not silently drop tables, figures, formulas, charts, or key-value regions.
  8. Save outputs into one artifact folder per run.
  9. For confidential documents, prefer an explicit private --out directory and remove artifacts after review.

Output contract

Default output folder:

./artifacts/<document_stem>/

Expected top-level outputs:

  • effective_config.json
  • run_report.json
  • parsed.json
  • parsed.md
  • result_report.html
  • task_output/

to-code runs may also emit:

  • code_preview.html
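A quick way to verify a run against this contract is to check the artifact folder for the expected top-level outputs. This checker is a sketch based only on the list above; the function name is an assumption.

```python
from pathlib import Path

EXPECTED_OUTPUTS = (
    "effective_config.json", "run_report.json",
    "parsed.json", "parsed.md", "result_report.html",
)

def missing_outputs(artifact_dir: str) -> list:
    """Return the expected top-level outputs absent from a run folder."""
    root = Path(artifact_dir)
    missing = [name for name in EXPECTED_OUTPUTS if not (root / name).exists()]
    if not (root / "task_output").is_dir():
        missing.append("task_output/")
    return missing
```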

CLI examples

Parse

python "{baseDir}/scripts/run_skill.py" \
  --mode parse \
  --file "/absolute/path/to/report.pdf" \
  --out "/absolute/path/to/artifacts/report_parse"

To-data

python "{baseDir}/scripts/run_skill.py" \
  --mode to-data \
  --file "/absolute/path/to/invoice.pdf" \
  --out "/absolute/path/to/artifacts/invoice_data" \
  --extract "tables,entities,kv_pairs"

To-code

python "{baseDir}/scripts/run_skill.py" \
  --mode to-code \
  --file "/absolute/path/to/ui_mockup.png" \
  --out "/absolute/path/to/artifacts/ui_code" \
  --target "react" \
  --title "Generated App"

To-code notebook target

python "{baseDir}/scripts/run_skill.py" \
  --mode to-code \
  --file "/absolute/path/to/architecture_diagram.png" \
  --out "/absolute/path/to/artifacts/notebook_code" \
  --target "jupyter-notebook" \
  --title "OpenVINO Notebook"
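The same invocations can be driven programmatically. This wrapper is a sketch that mirrors the parse example above; it assumes only that run_skill.py signals failure through a nonzero exit code.

```python
import subprocess
import sys

def run_parse(skill_dir: str, file_path: str, out_dir: str) -> int:
    """Invoke the CLI orchestrator in parse mode, exactly as the
    CLI example does, using the current Python interpreter."""
    cmd = [
        sys.executable, f"{skill_dir}/scripts/run_skill.py",
        "--mode", "parse",
        "--file", file_path,
        "--out", out_dir,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True)
    if result.returncode != 0:
        # Surface stderr rather than claiming the run succeeded.
        print(result.stderr, file=sys.stderr)
    return result.returncode
```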

Slash-command examples

/skill local-document-ai-openvino parse file=./docs/report.pdf
/skill local-document-ai-openvino to-data file=./docs/invoice.pdf extract=tables,entities,kv_pairs
/skill local-document-ai-openvino to-code file=./mockups/architecture.png target=jupyter-notebook

Optional local demo UI

Start the local UI when the user wants an interactive demo page:

python "{baseDir}/scripts/serve_skill_ui.py"

The UI lets the user:

  • preview a local file
  • choose parse, to-data, or to-code
  • choose the to-code target
  • run the pipeline and inspect the generated local HTML reports

The bundled UI only allows preview/run access for local files under the skill directory and common user content folders such as Downloads, Documents, Desktop, and Pictures.

Failure behavior

If a run fails:

  • state which stage failed
  • do not claim outputs were created if they were not
  • prefer writing error.json with failure details
  • recommend parse first when the downstream request is ambiguous
  • surface stderr or a concise failure summary when available
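Writing error.json on failure can look like the sketch below. The record's exact fields are assumptions; the point is to name the failed stage and avoid claiming outputs that were never created.

```python
import json
from pathlib import Path

def write_error(out_dir: str, stage: str, message: str) -> Path:
    """Record a failed stage as error.json in the artifact folder
    instead of reporting success (field names are illustrative)."""
    root = Path(out_dir)
    root.mkdir(parents=True, exist_ok=True)
    error_file = root / "error.json"
    payload = {
        "stage": stage,
        "error": message,
        "outputs_created": False,
    }
    error_file.write_text(json.dumps(payload, indent=2), encoding="utf-8")
    return error_file
```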

Safety notes

  • Use a virtual environment for dependency installation.
  • Approve model downloads only after review and only when you explicitly intend to fetch them.
  • Keep outputs in a private local folder when documents are sensitive.
  • Review generated code and notebooks before execution.
  • Delete artifacts when they are no longer needed.
  • The wrapper always uses the bundled local scripts and the current Python interpreter. It does not allow custom interpreter or script-directory overrides.

Short reminder

Present this skill as a local document-understanding workflow with downstream actions, not as a plain OCR wrapper.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
