openviking-setup

Set up OpenViking context database for OpenClaw agents. OpenViking is an open-source context database designed specifically for AI agents with filesystem-based memory management, tiered context loading (L0/L1/L2), and self-evolving memory. Use when asked to set up OpenViking, configure context database for agents, implement persistent memory, or when memory management optimization is needed. Triggers on "install openviking", "setup openviking", "context database", "tiered memory", "L0 L1 L2 context".

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install this skill with:

npx skills add engsathiago/openviking-setup

OpenViking Setup for OpenClaw

OpenViking brings filesystem-based memory management to AI agents with tiered context loading and self-evolving memory. This skill guides you through installation and configuration.

What OpenViking Provides

  • Filesystem paradigm: Unified context management (memories, resources, skills)
  • Tiered loading (L0/L1/L2): Load only what's needed, save tokens
  • Self-evolving memory: Gets smarter with use
  • OpenClaw plugin: Native integration available

Prerequisites

  • Python 3.10+
  • Go 1.22+ (for AGFS components)
  • GCC 9+ or Clang 11+ (for core extensions)
  • VLM model access (for image/content understanding)
  • Embedding model access (for vectorization)

Quick Start

Step 1: Install OpenViking

# Python package (--force-reinstall is unnecessary for a fresh install)
pip install --upgrade openviking

# CLI tool
curl -fsSL https://raw.githubusercontent.com/volcengine/OpenViking/main/crates/ov_cli/install.sh | bash

Step 2: Create Configuration

Create ~/.openviking/ov.conf:

{
  "storage": {
    "workspace": "/home/your-name/openviking_workspace"
  },
  "log": {
    "level": "INFO",
    "output": "stdout"
  },
  "embedding": {
    "dense": {
      "api_base": "https://api.openai.com/v1",
      "api_key": "your-openai-api-key",
      "provider": "openai",
      "dimension": 1536,
      "model": "text-embedding-3-small"
    },
    "max_concurrent": 10
  },
  "vlm": {
    "api_base": "https://api.openai.com/v1",
    "api_key": "your-openai-api-key",
    "provider": "openai",
    "model": "gpt-4o",
    "max_concurrent": 100
  }
}
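Before moving on, it can help to sanity-check the config file. The sketch below validates only the three top-level sections shown in the example above; the real loader may check more, and `validate_ov_conf` is a hypothetical helper, not part of OpenViking:

```python
import json

REQUIRED_KEYS = {"storage", "embedding", "vlm"}  # top-level sections from the example above

def validate_ov_conf(raw: str) -> dict:
    """Parse ov.conf as JSON and check that the expected sections exist."""
    conf = json.loads(raw)
    missing = REQUIRED_KEYS - conf.keys()
    if missing:
        raise ValueError(f"ov.conf missing sections: {sorted(missing)}")
    return conf

sample = '{"storage": {"workspace": "/tmp/ov"}, "embedding": {}, "vlm": {}}'
conf = validate_ov_conf(sample)
print(conf["storage"]["workspace"])
```

In practice you would read `~/.openviking/ov.conf` from disk instead of an inline string.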

Step 3: Configure Provider

OpenViking supports multiple VLM providers:

Provider     Model Example           Notes
openai       gpt-4o                  Official OpenAI API
volcengine   doubao-seed-2-0-pro     Volcengine Doubao
litellm      claude-3-5-sonnet       Unified access (Anthropic, DeepSeek, Gemini, etc.)

For LiteLLM (recommended for flexibility):

{
  "vlm": {
    "provider": "litellm",
    "model": "claude-3-5-sonnet-20241022",
    "api_key": "your-anthropic-key"
  }
}

For Ollama (local models):

{
  "vlm": {
    "provider": "litellm",
    "model": "ollama/llama3.1",
    "api_base": "http://localhost:11434"
  }
}

OpenClaw Integration

Plugin Installation

OpenViking has a native OpenClaw plugin for seamless integration:

# Install OpenClaw plugin
pip install openviking-openclaw

# Or from source
git clone https://github.com/volcengine/OpenViking
cd OpenViking/plugins/openclaw
pip install -e .

Configuration for OpenClaw

Add to your OpenClaw config:

# ~/.openclaw/config.yaml
memory:
  provider: openviking
  config:
    workspace: ~/.openviking/workspace
    tiers:
      l0:
        max_tokens: 4000
        auto_flush: true
      l1:
        max_tokens: 16000
        compression: true
      l2:
        max_tokens: 100000
        archive: true

Memory Tiers Explained

Tier   Purpose                  Token Budget   Behavior
L0     Active working memory    4K tokens      Always loaded, fast access
L1     Frequently accessed      16K tokens     Compressed, on-demand
L2     Archive/cold storage     100K+ tokens   Semantic search only

How Tiers Work

  1. New context goes to L0
  2. L0 fills → oldest items compressed to L1
  3. L1 fills → oldest items archived to L2
  4. Retrieval searches all tiers, returns relevant context
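The promotion flow in steps 1–3 can be sketched as a cascade of token budgets. This is an illustrative toy model of the behavior described above, not OpenViking's actual implementation:

```python
from collections import deque

BUDGETS = {"l0": 4_000, "l1": 16_000, "l2": 100_000}  # budgets from the tier table

class TierCascade:
    """Toy model: new items enter L0; overflow cascades oldest-first to L1, then L2."""

    def __init__(self):
        self.tiers = {name: deque() for name in BUDGETS}

    def _tokens(self, tier: str) -> int:
        return sum(tokens for _, tokens in self.tiers[tier])

    def add(self, content: str, tokens: int) -> None:
        self.tiers["l0"].append((content, tokens))
        self._cascade("l0", "l1")
        self._cascade("l1", "l2")

    def _cascade(self, src: str, dst: str) -> None:
        while self._tokens(src) > BUDGETS[src]:
            self.tiers[dst].append(self.tiers[src].popleft())  # demote oldest item

cascade = TierCascade()
for i in range(5):
    cascade.add(f"note-{i}", 1_500)  # 7,500 tokens total exceeds the 4,000-token L0 budget
print(len(cascade.tiers["l0"]), len(cascade.tiers["l1"]))
```

In the real system L1 entries are also compressed on demotion; this sketch only models the budget-driven movement.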

Directory Structure

~/.openviking/
├── ov.conf                 # Configuration
└── workspace/
    ├── memories/
    │   ├── sessions/       # L0: Active session memory
    │   ├── compressed/     # L1: Compressed memories
    │   └── archive/        # L2: Long-term storage
    ├── resources/          # Files, documents, assets
    └── skills/             # Skill-specific context
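If you prefer to create the workspace layout up front, a short script can mirror the tree above. The paths follow the structure shown; OpenViking may well create these itself on first run, so treat this as optional:

```python
import tempfile
from pathlib import Path

def init_workspace(root: Path) -> None:
    """Create the directory skeleton shown above under the given root."""
    for rel in (
        "workspace/memories/sessions",     # L0: active session memory
        "workspace/memories/compressed",   # L1: compressed memories
        "workspace/memories/archive",      # L2: long-term storage
        "workspace/resources",
        "workspace/skills",
    ):
        (root / rel).mkdir(parents=True, exist_ok=True)

root = Path(tempfile.mkdtemp())  # use Path.home() / ".openviking" in practice
init_workspace(root)
print(sorted(p.name for p in (root / "workspace" / "memories").iterdir()))
```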

Usage Patterns

Adding Memory

from openviking import MemoryStore

store = MemoryStore()

# Add to L0
store.add_memory(
    content="User prefers Portuguese language responses",
    metadata={"tier": "l0", "category": "preference"}
)

# Add resource
store.add_resource(
    path="project_spec.md",
    content=open("project_spec.md").read()
)

Retrieving Context

# Semantic search across all tiers
results = store.search(
    query="user preferences",
    tiers=["l0", "l1", "l2"],
    limit=10
)

# Directory-based retrieval (more precise)
results = store.retrieve(
    path="memories/sessions/2026-03-16/",
    recursive=True
)

Compaction

# Trigger manual compaction
store.compact()

# View compaction status
status = store.status()
print(f"L0: {status.l0_tokens}/{status.l0_max}")
print(f"L1: {status.l1_tokens}/{status.l1_max}")

Best Practices

Memory Hygiene

  1. Categorize entries: Use metadata tags for better retrieval
  2. Flush L0 regularly: Let compaction run, don't hoard
  3. Use directory structure: Organize by project/topic
  4. Review L2 periodically: Archive stale memories

Token Efficiency

  1. Let OpenViking manage tiers automatically
  2. Use semantic search for L2 (don't load entire archive)
  3. Compress verbose content before adding to L1
  4. Keep L0 under 50% capacity for best performance
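Point 4 can be enforced with a small watchdog around `store.status()`. The field names mirror the compaction example above; the 50% threshold is this guide's suggestion, not an OpenViking default:

```python
L0_TARGET = 0.5  # keep L0 under 50% capacity

def should_compact(l0_tokens: int, l0_max: int, target: float = L0_TARGET) -> bool:
    """Return True when L0 utilization exceeds the target ratio."""
    return l0_tokens / l0_max > target

# e.g. call store.compact() whenever this returns True
print(should_compact(3_000, 4_000))  # 75% utilization
print(should_compact(1_500, 4_000))  # 37.5% utilization
```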

OpenClaw Workflow

  1. Session starts → OpenViking loads L0
  2. Conversation proceeds → context auto-promoted to L1/L2
  3. Long gaps → L2 provides relevant historical context
  4. Sessions compound → agent gets smarter over time

Troubleshooting

Common Issues

"No module named 'openviking'"

  • Ensure Python 3.10+ is active
  • Try pip install --user openviking

"Embedding model not found"

  • Check ov.conf has correct provider and model
  • Verify API key is valid

"L0 overflow"

  • Reduce l0.max_tokens in config
  • Manually call store.compact()

"Slow retrieval from L2"

  • Consider pre-loading frequently accessed resources to L1
  • Use directory-based retrieval for better precision

What Gets Better

After setup, your agent gains:

  1. Persistent memory across sessions
  2. Smarter retrieval with semantic + directory search
  3. Token efficiency with tiered loading
  4. Self-improvement as context accumulates
  5. Observable context with retrieval trajectories

The more your agent works, the more context it retains—without token bloat.
