zettabrain-rag

Chat with your private documents using a fully local RAG pipeline. No cloud, no API keys — runs on your own machine with Ollama + ChromaDB.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install from the registry:

npx skills add zettabrain/zettabrain-rag

ZettaBrain RAG Skill

Chat with your own documents using a local AI. Document data stays on your machine when you use local storage and a local Ollama endpoint (the default). Remote storage (S3, NFS, SMB) and a remote OLLAMA_HOST are optional and will move data off-device — see the Privacy section.

Supports PDF, DOCX, TXT, Markdown. Works on Linux, macOS (including EC2 Mac Apple Silicon), and Windows.

Install

Recommended — pipx (the package install needs no elevated privileges and is fully inspectable; only the setup step below uses sudo)

pipx install zettabrain-rag
sudo zettabrain-setup

One-line installer (review source before running)

The installer scripts are open source; review them at the GitHub links in the comments below before piping them to a shell.

# Linux — review first: https://github.com/zettabrain/zettabrain-rag/blob/main/install.sh
curl -fsSL https://zettabrain.app/install.sh | sudo bash

# macOS — review first: https://github.com/zettabrain/zettabrain-rag/blob/main/install.sh
curl -fsSL https://zettabrain.app/install.sh | bash

# Windows — review first: https://github.com/zettabrain/zettabrain-rag/blob/main/install.ps1
irm https://zettabrain.app/install.ps1 | iex

The Linux installer requires sudo to install Ollama system-wide and register a systemd service. The macOS installer does not require sudo for the package install step.

Setup

Run the interactive setup wizard once after install:

sudo zettabrain-setup

This will:

  1. Configure your document storage (local, NFS, SMB, or S3)
  2. Install and start Ollama locally
  3. Pull the recommended AI model for your hardware
  4. Generate a self-signed TLS certificate (stays on-device)
  5. Register ZettaBrain as a background service (see Service Management to stop or remove it)
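If you ever need to rotate the certificate from step 4 by hand, something along these lines should work. This is a sketch, not the wizard's actual command: the key size, validity period, and output file names here are assumptions.

```shell
# Generate a self-signed TLS certificate for the local web GUI.
# Key size (4096), validity (365 days), and file names are assumptions,
# not the setup wizard's exact values.
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout zettabrain.key -out zettabrain.crt \
  -subj "/CN=localhost"
```

You would then point the server at the new key and certificate wherever the wizard originally installed them.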

Commands

| Command | Description |
|---|---|
| zettabrain-chat | Interactive CLI chat with your documents |
| zettabrain-server | Start the web GUI server (HTTPS on port 7860) |
| zettabrain-ingest | Index documents into the vector store |
| zettabrain-ingest --rebuild | Wipe and re-index all documents |
| zettabrain-status | Show Ollama, vector store, and storage status |
| zettabrain-storage add | Add an additional storage source |
| zettabrain-setup | Re-run the setup wizard |

Usage Examples

Chat via CLI:

zettabrain-chat
# > What does our Q3 report say about cloud costs?

Start the web GUI (https://localhost:7860):

zettabrain-server

Ingest a specific folder:

ZETTABRAIN_DOCS=/path/to/docs zettabrain-ingest

Vector Store — Location, Retention & Deletion

The vector index (document embeddings) is stored only on your local machine:

| Item | Location |
|---|---|
| Vector database | /opt/zettabrain/src/zettabrain_vectorstore/ |
| Ingestion log (MD5 hashes) | /opt/zettabrain/src/ingested_files.json |
| Configuration | /opt/zettabrain/src/zettabrain.env |
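The ingestion log's MD5 hashes are presumably how re-runs skip files that haven't changed. A minimal sketch of that idea — the function names and log layout here are hypothetical, not ZettaBrain's actual code:

```python
import hashlib

def file_md5(path: str) -> str:
    """Return the MD5 hex digest of a file's contents, read in chunks."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(8192), b""):
            h.update(block)
    return h.hexdigest()

def needs_ingest(path: str, log: dict) -> bool:
    """True if the file is new or its contents changed since the last run.

    `log` maps file paths to the MD5 recorded at last ingest."""
    return log.get(path) != file_md5(path)
```

On each run, files whose hash matches the log entry would be skipped, and the log updated for everything newly indexed.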

The stored embeddings are derived from your documents and kept locally in ChromaDB; they are never transmitted to any remote service. Note, however, that with a remote OLLAMA_HOST the document text from which embeddings are computed does leave the machine (see Privacy).

Delete the vector index:

# Via the server's HTTP API (-k accepts the self-signed certificate)
zettabrain-server &
curl -k -X DELETE https://localhost:7860/api/vectorstore

# Or directly
rm -rf /opt/zettabrain/src/zettabrain_vectorstore
rm -f  /opt/zettabrain/src/ingested_files.json

Rebuild from scratch:

zettabrain-ingest --rebuild

Exclude files or folders by not including them in ZETTABRAIN_DOCS — only files under that path are indexed.
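One way to curate exactly what gets indexed without moving files is to point ZETTABRAIN_DOCS at a folder of symlinks. This is a workflow suggestion, not a documented feature — it assumes the ingester follows symlinks, which is worth verifying first:

```shell
# Build a curated folder containing only symlinks to the files
# you want searchable (paths here are placeholders).
mkdir -p /tmp/zettabrain-curated
ln -sf /path/to/docs/q3-report.pdf /tmp/zettabrain-curated/q3-report.pdf
```

Then run `ZETTABRAIN_DOCS=/tmp/zettabrain-curated zettabrain-ingest` to index only the curated set.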

Service Management

ZettaBrain registers a background service so the web GUI auto-starts on boot. Here is how to control or fully remove it:

Linux (systemd)

# Stop the service
sudo systemctl stop zettabrain

# Disable auto-start on boot
sudo systemctl disable zettabrain

# Check status
sudo systemctl status zettabrain

# View logs
journalctl -u zettabrain -f

# Remove service completely
sudo systemctl stop zettabrain
sudo systemctl disable zettabrain
sudo rm /etc/systemd/system/zettabrain.service
sudo systemctl daemon-reload
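For reference, the unit file at /etc/systemd/system/zettabrain.service is probably shaped along these lines. This is a hypothetical reconstruction — the file actually installed on your system is authoritative, and the ExecStart path here is an assumption:

```ini
# Hypothetical reconstruction of the installed unit, for orientation only.
[Unit]
Description=ZettaBrain RAG web GUI
After=network-online.target

[Service]
# ExecStart path is an assumption; check the real unit file.
ExecStart=/usr/local/bin/zettabrain-server
EnvironmentFile=/opt/zettabrain/src/zettabrain.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

If you edit the real unit, run `sudo systemctl daemon-reload` afterwards so systemd picks up the change.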

macOS (launchd)

# Stop the service
sudo launchctl unload /Library/LaunchDaemons/io.zettabrain.server.plist

# Remove auto-start on boot
sudo rm /Library/LaunchDaemons/io.zettabrain.server.plist

# View logs
tail -f /opt/zettabrain/logs/server.log

Uninstall completely

# Remove the package
pipx uninstall zettabrain-rag

# Stop and remove service (Linux)
sudo systemctl stop zettabrain && sudo systemctl disable zettabrain
sudo rm -f /etc/systemd/system/zettabrain.service && sudo systemctl daemon-reload

# Stop and remove service (macOS)
sudo launchctl unload /Library/LaunchDaemons/io.zettabrain.server.plist
sudo rm -f /Library/LaunchDaemons/io.zettabrain.server.plist

# Remove all data, config, and vector index
sudo rm -rf /opt/zettabrain

Privacy

Privacy depends on your configuration:

| Configuration | Data stays local? |
|---|---|
| Local storage + OLLAMA_HOST=http://localhost:11434 (default) | ✅ Yes, fully on-device |
| NFS or SMB network storage | ⚠️ Documents fetched over your LAN |
| S3 / object storage | ⚠️ Documents streamed from cloud storage |
| Remote OLLAMA_HOST | ⚠️ Queries and retrieved document chunks sent to the remote Ollama host |

The default setup is fully local: the wizard defaults to local storage and a localhost Ollama endpoint. Remote options are opt-in and clearly labelled during setup.

Document embeddings (vector index) are always stored locally regardless of storage configuration.

Configuration

Settings file: /opt/zettabrain/src/zettabrain.env

| Variable | Default | Description |
|---|---|---|
| ZETTABRAIN_DOCS | set during setup | Path to documents folder |
| ZETTABRAIN_LLM_MODEL | set during setup | Ollama model name |
| ZETTABRAIN_EMBED_MODEL | nomic-embed-text | Embedding model |
| OLLAMA_HOST | http://localhost:11434 | Ollama API endpoint (keep local for full privacy) |
| ZETTABRAIN_CHUNK_SIZE | 1000 | Document chunk size |
| ZETTABRAIN_CHUNK_OVERLAP | 200 | Chunk overlap |
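With these defaults, each document is split into roughly 1000-character pieces where consecutive pieces share 200 characters, so text that straddles a boundary still appears whole in at least one chunk. A minimal sliding-window sketch of the idea — ZettaBrain's actual splitter may differ, e.g. it may respect word or sentence boundaries:

```python
def chunk_text(text: str, size: int = 1000, overlap: int = 200) -> list[str]:
    """Split text into chunks of at most `size` characters where
    consecutive chunks share `overlap` characters (sliding window)."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    step = size - overlap
    # Stop once the remaining tail is already covered by the previous chunk.
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]
```

A larger overlap improves recall for answers that span chunk boundaries at the cost of a bigger index; a larger chunk size gives the model more context per retrieved piece but dilutes retrieval precision.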

Supported Platforms

| Platform | Notes |
|---|---|
| Ubuntu 22.04 / 24.04 | Full GPU support (NVIDIA drivers auto-installed) |
| Amazon Linux 2 / 2023 | Full support |
| RHEL / Rocky / AlmaLinux 8/9 | Full support |
| macOS 12+ (Apple Silicon) | Metal GPU via Ollama (mac2.metal, mac2-m2.metal) |
| macOS 12+ (Intel) | CPU inference (mac1.metal) |
| Windows 10/11 | Via PowerShell installer |


