literag

Local retrieval skill for large documentation corpora using independent SQLite knowledge libraries with keyword plus vector hybrid search. Use when searching Blender manuals, API references, SDK docs, framework docs, product docs, blog/article archives, exported markdown doc sets, or any other large external documentation that should not live in OpenClaw's main memory index. Also use when indexing, reindexing, debugging retrieval quality, checking index compatibility/status, or inspecting LiteRAG sqlite metadata. Usage: /literag search <library> <query> | /literag inspect <library> <path> [--start N --end N] | /literag index <library> | /literag status <library> | /literag meta <library> | /literag benchmark <library> --query ...

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install skill "literag" with this command: npx skills add mozi1924/literag

LiteRAG

Use this skill when the target corpus is too large or too noisy for main agent memory.

Install

Packaged dependency install:

python3 -m pip install -r {baseDir}/requirements.txt

Layout

  • Config + databases live under <workspace>/.literag/
  • Main config: <workspace>/.literag/knowledge-libs.json
  • Default workspace resolution order: the OPENCLAW_WORKSPACE environment variable → walk upward from the current path until the OpenClaw workspace sentinel files are found
  • Core scripts live under skills/literag/scripts/
  • Skill bin entrypoint: skills/literag/bin/literag
  • Workspace convenience wrappers live at scripts/literag-query.py, scripts/literag-index.py, scripts/literag-status.py, scripts/literag-meta.py, and scripts/lq

Rules

  • Keep personal/work memory in OpenClaw builtin memory
  • Keep large external corpora in LiteRAG, not memory_search
  • Treat each knowledge base as an independent library with its own SQLite
  • Search first, inspect second
  • Prefer grouped document hits over raw chunk spam
  • Prefer source-relative paths when citing files back to the user
  • Use local OpenAI-compatible embeddings by default unless explicitly changed in config

Read these files when needed

  • Always read <workspace>/.literag/knowledge-libs.json when targeting a library or changing config
  • Read references/usage.md when you need command examples, output schema, or the intended search → inspect workflow
  • Read references/configuration.md when adding libraries, source roots, excludes, chunking overrides, or ranking overrides
  • Read references/agent-prompts.md when another agent / ACP harness needs a ready-made LiteRAG prompt template
  • Read references/optimization-playbook.md when a specific library needs retrieval-quality tuning, ranking cleanup, or indexing-throughput tuning
  • Read scripts under skills/literag/scripts/ only when editing behavior or diagnosing bugs
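Since every operation starts by reading knowledge-libs.json, the lookup step can be sketched as follows. The `{"libraries": {<name>: {...}}}` shape is a hypothetical schema, not the file's documented format.

```python
import json
from pathlib import Path

def load_library(config_path, name):
    """Read knowledge-libs.json and return one library's config entry.
    Raises KeyError with the configured names when the library is unknown."""
    config = json.loads(Path(config_path).read_text(encoding="utf-8"))
    try:
        return config["libraries"][name]
    except KeyError:
        known = ", ".join(sorted(config.get("libraries", {})))
        raise KeyError(f"unknown library {name!r}; configured: {known}")
```

Failing early with the list of configured names makes typos in library names cheap to diagnose.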

Slash / user-invocable usage

When invoked as /literag ..., parse the remaining argument string as a subcommand.

Supported forms:

  • /literag search <library> <query>
  • /literag inspect <library> <path> [--start N --end N]
  • /literag index <library> [--limit-files N] [--embedding-batch-size N]
  • /literag index-all [--limit-files N] [--embedding-batch-size N]
  • /literag status <library>
  • /literag meta <library>
  • /literag benchmark <library> --query ...

If the user gives a natural-language request instead of a strict subcommand, translate it to the nearest supported operation instead of being pedantic.
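The strict-subcommand path can be sketched as a small dispatcher; the shell-style tokenization shown here is one reasonable interpretation for illustration, not the skill's actual parsing logic.

```python
import shlex

SUBCOMMANDS = {"search", "inspect", "index", "index-all",
               "status", "meta", "benchmark"}

def parse_invocation(arg_string):
    """Split a /literag argument string into (subcommand, args).
    Raises ValueError for an empty or unknown subcommand, which is
    the point where natural-language fallback would take over."""
    tokens = shlex.split(arg_string)
    if not tokens or tokens[0] not in SUBCOMMANDS:
        raise ValueError(f"unknown subcommand: {tokens[:1]}")
    return tokens[0], tokens[1:]
```

Using shell-style splitting means quoted queries such as `search blender "bevel modifier"` arrive as a single argument.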

Supported commands

  • index_library.py — index one library
  • index_all.py — index all configured libraries
  • search_library.py — grouped hybrid/fts/vector retrieval
  • inspect_result.py — expand a hit by file path + chunk range
  • status_library.py — show index health / compatibility / counts
  • meta_library.py — dump raw sqlite meta records
  • benchmark_library.py — benchmark hybrid/fts/vector latency + hit shape across fixed query sets
  • bin/literag — packaged CLI entrypoint for search / inspect / index / status / meta / benchmark
  • scripts/literag-query.py — query/search/inspect wrapper
  • scripts/literag-index.py — index wrapper for one library or all libraries
  • scripts/literag-status.py — status wrapper
  • scripts/literag-meta.py — meta wrapper
  • scripts/literag-benchmark.py — benchmark wrapper
  • scripts/lq — tiny shell alias for literag-query.py

Operating workflow

  1. Read <workspace>/.literag/knowledge-libs.json
  2. Resolve the target library
  3. Run search_library.py for grouped retrieval
  4. If needed, run inspect_result.py on the top hit or chosen range
  5. For quick operator use, prefer scripts/literag-query.py or scripts/lq
  6. Use scripts/literag-index.py when you need a short indexing entrypoint
  7. Use scripts/literag-status.py before debugging weird retrieval or after config changes
  8. Use scripts/literag-meta.py when you need the raw stored metadata
  9. Use scripts/literag-benchmark.py or skills/literag/scripts/benchmark_library.py when you need repeatable retrieval latency / hit-shape comparisons
  10. Keep LiteRAG separate from builtin memory unless the user explicitly wants a durable summary copied into workspace memory

Current intent

Use LiteRAG for:

  • Blender manual + Blender Python reference
  • Future blog/article/site knowledge bases
  • Any large external docs where hybrid retrieval is needed without polluting builtin memory

