# Skill: Fast Unified Memory

## Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.


Install the "Fast Unified Memory" skill with:

```bash
npx skills add broedkrummen/fast-unified-memory
```


A high-performance unified memory system that integrates OpenClaw memory with semantic memory storage using Ollama's nomic-embed-text model for ultra-fast embeddings.

## Overview

This skill provides a unified memory layer that combines:

  • OpenClaw Memory: standard file-based memory storage
  • Semantic Memory: vector-based memory using Ollama embeddings

## Features

  • Ultra-fast: ~130ms for a combined search (embedding ~40ms + search ~90ms)
  • 🔒 Private: all processing happens locally via Ollama
  • 💰 Free: no API costs; uses your local Ollama instance
  • 🧠 Semantic: uses nomic-embed-text for intelligent similarity matching

## Requirements

  • Ollama installed and running
  • nomic-embed-text model pulled: `ollama pull nomic-embed-text`

## Installation

```bash
# Install Ollama first
curl -fsSL https://ollama.ai/install.sh | sh

# Pull the embedding model
ollama pull nomic-embed-text

# Start Ollama
ollama serve
```
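Once Ollama is running, embeddings can be requested over its local HTTP API. Below is a minimal sketch of such a call (the `/api/embeddings` endpoint and request shape are standard Ollama; the function names are illustrative and not taken from the skill's source, and Node 18+ is assumed for the global `fetch`):

```javascript
// Base URL of the local Ollama instance (the skill's documented default).
const OLLAMA_URL = "http://localhost:11434";

// Build the request body for Ollama's /api/embeddings endpoint.
// Kept as a pure helper so it is easy to inspect and test.
function embeddingRequest(text, model = "nomic-embed-text") {
  return { model, prompt: text };
}

// Fetch an embedding vector for the given text; resolves to an array of floats.
async function embed(text) {
  const res = await fetch(`${OLLAMA_URL}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(embeddingRequest(text)),
  });
  if (!res.ok) throw new Error(`Ollama error: ${res.status}`);
  const { embedding } = await res.json();
  return embedding;
}
```

With nomic-embed-text pulled, `await embed("hello")` returns a dense vector suitable for cosine-similarity comparison.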

## Usage

### Commands

```bash
# Search both memory systems
node fast-unified-memory.js search "your query"

# Add a memory
node fast-unified-memory.js add "User prefers concise responses"

# List all memories
node fast-unified-memory.js list

# Show system stats
node fast-unified-memory.js stats
```

## Architecture

```
┌─────────────────────────────────────────────┐
│             FAST UNIFIED MEMORY             │
│                                             │
│  ┌─────────────┐    ┌─────────────┐         │
│  │   OpenClaw  │    │   Semantic  │         │
│  │   Memory    │    │   Memory    │         │
│  │   (files)   │    │  (vectors)  │         │
│  └─────────────┘    └─────────────┘         │
│         ↓                  ↓                │
│  [Keyword Match]  [Cosine Similarity]       │
│                                             │
│          Unified Results (ranked)           │
└─────────────────────────────────────────────┘
```
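The ranking step implied by the diagram can be sketched as follows: cosine similarity scores the vector hits, and results from both backends are merged into one list sorted by score. This is a minimal illustration, not the skill's actual code; the function names and result shape (`{ text, score }`) are assumptions:

```javascript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Merge keyword-match hits and vector-search hits into one ranked list,
// highest score first.
function unifiedResults(keywordHits, vectorHits) {
  return [...keywordHits, ...vectorHits].sort((x, y) => y.score - x.score);
}
```

For example, a keyword hit scored 0.2 and a vector hit scored 0.9 would come back with the vector hit ranked first.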

## Performance

| Metric               | Value  |
|----------------------|--------|
| Embedding generation | ~40ms  |
| Vector search        | ~50ms  |
| File search          | ~40ms  |
| Total search         | ~130ms |

## Configuration

The skill uses these defaults:

  • Ollama URL: `http://localhost:11434`
  • Embedding model: `nomic-embed-text`
  • Memory storage: `~/.mem0/fast-store.json`
  • OpenClaw memory: `~/.openclaw/workspace/memory/`

## Files

  • `fast-unified-memory.js` - Main CLI tool
  • `SKILL.md` - This documentation

## Troubleshooting

**Ollama not running:**

```bash
ollama serve
```

**Model not found:**

```bash
ollama pull nomic-embed-text
```

**Port conflict:** The skill assumes Ollama is listening on port 11434. Update the `OLLAMA_URL` constant if you use a different port.

## License

MIT

## Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

## Related Skills

Related by shared tags or category signals.

  • **Memory Tree** (General) — 🌳 Automatic weekly report generation, permanent memory tagging, and keyword search. Usable with a single spoken sentence.
  • **River Memory** (General) — Store and semantically search text memories locally using Ollama with automatic management and optimization.
  • **Mac AI Optimizer** (General) — Optimize macOS for AI workloads (OpenClaw, Docker, Ollama). Turn an 8GB Mac into a lean AI server node with near-16GB performance by disabling background ser...
  • **一步完成进化** ("Evolve in One Step", Automation) — Use when you need to stand up or standardize a fresh OpenClaw setup as the Fire Dragon Fruit Architecture: one strong main, one isolated rescue, layered file...