luci-memory

Search personal video memory — media content (videos, images, keyframes, transcripts) and portrait data (traits, events, relationships, speeches). Use when the user asks about their videos, what happened, what was said, who they know, or their personality.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install skill "luci-memory" with this command: npx skills add gimlettt/luci-memory

Setup

Requires a MEMORIES_AI_KEY. On first use, if no key is found, the script exits with an error asking for one.

When the user provides their key, save it to {baseDir}/.env:

MEMORIES_AI_KEY=sk-their-key-here

After that, everything just works — the key is loaded automatically from .env on every run.
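A minimal sketch of the loading logic run.sh presumably performs on each run. This is illustrative only: the function name load_key and the directory handling are assumptions, not the actual implementation.

```shell
# load_key: hypothetical sketch of sourcing {baseDir}/.env to pick up the key.
load_key() {
  dir="${1:-.}"
  if [ -f "$dir/.env" ]; then
    set -a          # auto-export every variable the file defines
    . "$dir/.env"   # defines MEMORIES_AI_KEY=sk-...
    set +a
  fi
  [ -n "$MEMORIES_AI_KEY" ]   # nonzero exit if the key is still missing
}
```

With this shape, a missing key surfaces as a nonzero exit status, which matches the "error and ask for one" behavior described above.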

Timezone

All timestamps in Luci-memory are stored and returned in UTC. Skill output labels them with " UTC" so this is unambiguous. The user's local timezone is in USER.md (e.g. Asia/Shanghai). You are responsible for converting in both directions:

  1. Reading results. When presenting captured_time to the user, convert from UTC to the user's local timezone. Never show raw UTC labels to the user.

  2. Writing filters. --after and --before are interpreted as UTC. If the user says relative dates like "yesterday" or "this morning", convert their local-time intent to a UTC range before passing the dates.

Example (user in Asia/Shanghai, UTC+8, asks "what did I do yesterday" on 2026-04-08):

  • Local intent: 2026-04-07 00:00 → 2026-04-08 00:00 (Asia/Shanghai)
  • UTC range to pass: --after 2026-04-06T16:00:00 --before 2026-04-07T16:00:00

If USER.md has no timezone and the user uses relative dates, ask them first.
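The conversion in the example above can be scripted with GNU date, which accepts a TZ="..." prefix inside the date string. The timezone value here is the one assumed in the example; in practice it comes from USER.md.

```shell
# Convert local-midnight boundaries ("yesterday" in Asia/Shanghai, UTC+8)
# into UTC values for --after/--before. Requires GNU date.
TZ_LOCAL="Asia/Shanghai"
AFTER=$(date -u -d "TZ=\"$TZ_LOCAL\" 2026-04-07 00:00" +%Y-%m-%dT%H:%M:%S)
BEFORE=$(date -u -d "TZ=\"$TZ_LOCAL\" 2026-04-08 00:00" +%Y-%m-%dT%H:%M:%S)
echo "--after $AFTER --before $BEFORE"
# → --after 2026-04-06T16:00:00 --before 2026-04-07T16:00:00
```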

Unified search across personal media and portrait data from the Luci-memory API.

The user's videos go through two processing pipelines that produce different data:

  • Media content (personal): video summaries, audio transcripts, visual transcripts, keyframes, images
  • People & knowledge (portrait): traits, events with participants, relationships, speeches attributed to speakers

When to use

  • User asks to find or search videos, images, or photos
  • User asks what was said or shown in a video
  • User asks to list recent videos or images
  • User asks about media at a specific location or time
  • User asks about traits, personality, hobbies, interests
  • User asks what events happened, or events involving specific people
  • User asks about relationships between people
  • User asks about what someone said
  • User mentions "luci memory" or wants to use their video memory

Choosing the right type

  • About content (what happened, what was said/shown, find media) → use media types (search_video, query_audio, etc.)
  • About people (who, traits, relationships, named individuals) → use portrait types (traits, events, speeches, etc.)
  • Ambiguous questions like "What happened with Alice last week?" → use both: portrait types to identify the person and events, media types to get detailed video content and transcripts.
  • Person name fallback: Portrait data only exists for people who have appeared in at least 5 videos AND been named by the user in the app. If a portrait query by person name returns no results, fall back to media types — search video summaries, audio transcripts, or visual transcripts for mentions of that name instead.

Relevance guidelines

  • There is no rerank process — retrieved results may contain items irrelevant to the user's actual intent.
  • Always verify relevance: after receiving results, check each item against the user's original query. Only present results that are relevant. Discard anything that doesn't match.
  • Refine and retry: if results seem off or too broad, retry with a more specific query, narrower date range, or additional filters. Do not just dump low-quality results to the user.
  • Ask the user: if the query is too ambiguous or vague to produce good results, ask the user for more specific conditions before searching. It is better to clarify than to return noise. Do this at most once.

No hallucination — ground every claim in retrieved data

  • Never fabricate what the user did, said, or experienced. Every detail in your answer must come from actual search results.
  • Multi-step retrieval: for questions like "what did I do and say at XXX", do NOT answer from a single broad search. Follow this pattern:
    1. Locate: search broadly (search_video, search_events) to find relevant video_ids or event_ids.
    2. Retrieve: once you have IDs, prefer query_audio / query_visual with --video-ids to get complete transcripts. You can also use search_audio / search_visual scoped to those video IDs to find specific moments — use both flexibly as needed.
  • Do not stuff keywords into search queries. Each semantic search should be a short, coherent natural-language query rather than a stack of loosely related keywords. Running several different queries in succession is fine and encouraged.
  • If data is missing, say so. Do not fill gaps with plausible-sounding guesses. "I couldn't find transcript data for that video" is always better than making something up.
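The ID plumbing between the locate and retrieve steps can be sketched as below. The JSON shape (a results array with a video_id field) is an assumption about run.sh output, not documented here, and the result string is mocked for illustration.

```shell
# Step 1 (mocked): a search_video call returned this JSON.
results='{"results":[{"video_id":"VI123","summary":"dinner"},{"video_id":"VI456","summary":"toast"}]}'

# Collect the IDs into the comma-separated form --video-ids expects.
video_ids=$(printf '%s' "$results" \
  | grep -o '"video_id":"[^"]*"' \
  | cut -d'"' -f4 \
  | paste -sd, -)
echo "$video_ids"
# → VI123,VI456
# Step 2 would then be:
#   bash {baseDir}/run.sh --type query_audio --video-ids "$video_ids"
```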

How to invoke

Note: --after / --before are UTC. Convert from the user's local timezone first (see Timezone section above).

Returning Images/Keyframes to User

When search results include signed URLs (keyframes, images), follow this pipeline to send them in chat:

  1. Download the signed URL to the workspace:
     curl -sL -o /path/to/workspace/image.jpg "<signed_url>"

  2. Send via the OpenClaw message CLI:
     openclaw message send --channel <channel> --target <chat_id> --media /path/to/workspace/image.jpg --message "caption"

  3. Clean up the file after sending:
     rm /path/to/workspace/image.jpg

⚠️ Signed URLs expire after ~1 hour. Download promptly.
⚠️ Do NOT use /tmp or paths outside the workspace — some tools block external paths.
⚠️ The image tool only analyzes images — it cannot send them to the user. Use openclaw message send --media instead.

# ============ Media content (personal) ============

# --- Video ---
bash {baseDir}/run.sh --query "cooking in kitchen" --type search_video
bash {baseDir}/run.sh --query "what did I do" --type search_video --location "Heze"
bash {baseDir}/run.sh --query "meeting" --type search_video --after 2025-12-01 --before 2026-01-01
bash {baseDir}/run.sh --type query_video
bash {baseDir}/run.sh --type query_video --location "Suzhou" --after 2025-12-01

# --- Image ---
bash {baseDir}/run.sh --query "sunset" --type search_image
bash {baseDir}/run.sh --query "food" --type search_image --location "Beijing"
bash {baseDir}/run.sh --type query_image

# --- Audio Transcripts (what was said) ---
bash {baseDir}/run.sh --query "talking about work" --type search_audio
bash {baseDir}/run.sh --query "budget" --type search_audio --video-ids VI123,VI456
bash {baseDir}/run.sh --type query_audio --video-ids VI123,VI456

# --- Visual Transcripts (what was shown) ---
bash {baseDir}/run.sh --query "walking in park" --type search_visual
bash {baseDir}/run.sh --type query_visual --video-ids VI123,VI456

# --- Keyframes ---
bash {baseDir}/run.sh --query "person waving" --type search_keyframe
bash {baseDir}/run.sh --type query_keyframe --video-ids VI123,VI456

# ============ People & knowledge (portrait) ============

# --- Traits ---
bash {baseDir}/run.sh --type traits
bash {baseDir}/run.sh --type traits --person "Alice"
bash {baseDir}/run.sh --query "outdoor activities" --type search_traits

# --- Events ---
bash {baseDir}/run.sh --type events
bash {baseDir}/run.sh --type events --person "Alice"
bash {baseDir}/run.sh --type events --person "Alice,Bob"
bash {baseDir}/run.sh --type events --after 2025-12-01 --before 2026-01-01
bash {baseDir}/run.sh --query "cooking in kitchen" --type search_events
bash {baseDir}/run.sh --query "meeting" --type search_events --person "Bob" --after 2025-12-01

# --- Relationships ---
bash {baseDir}/run.sh --type relationships
bash {baseDir}/run.sh --type relationships --person "Alice"

# --- Speeches ---
bash {baseDir}/run.sh --type speeches
bash {baseDir}/run.sh --type speeches --person "Alice"
bash {baseDir}/run.sh --type speeches --event-ids EVT123,EVT456
bash {baseDir}/run.sh --type speeches --person "Alice" --event-ids EVT123

Parameters

  Flag         Short  Description
  --query      -q     Search term (required for search_* types)
  --type       -t     Operation type (default: search_video)
  --top-k      -k     Max results (default: 10)
  --location   -l     Filter by location name, geocoded via Google Maps (e.g. "Suzhou")
  --after             Only results after this date (YYYY-MM-DD or YYYY-MM-DDTHH:MM:SS)
  --before            Only results before this date
  --video-ids         Comma-separated video IDs (media types)
  --person     -p     Filter by person name(s), comma-separated (portrait types). Use "user" for self.
  --event-ids         Comma-separated event IDs (portrait types)

Signed URLs

Image and keyframe results include a signed_url field: a temporary direct link to view or download the file from Google Cloud Storage. No authentication is needed, but links expire after about 1 hour.
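Since the links are short-lived, a quick liveness check before downloading avoids handing the user a dead file. A hedged sketch; "<signed_url>" is a placeholder, so the check here intentionally falls through to the expiry message:

```shell
# Probe the signed URL with a HEAD request before downloading it.
url="<signed_url>"
if curl -sfI "$url" >/dev/null 2>&1; then
  curl -sL -o image.jpg "$url"   # still valid: download into the workspace
else
  echo "signed URL invalid or expired; re-run the search for a fresh one" >&2
fi
```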

Types reference

Media search types (require --query)

  Type             What it searches               Supports
  search_video     Video summaries by meaning     --location, --after/before
  search_image     Image descriptions by meaning  --location, --after/before
  search_audio     Audio transcripts by meaning   --video-ids, --after/before
  search_visual    Visual transcripts by meaning  --video-ids, --after/before
  search_keyframe  Keyframe images by meaning     --video-ids, --after/before

Media query types (list/filter)

  Type            What it returns                Requires     Supports
  query_video     Recent videos                               --location, --after/before
  query_image     Recent images                               --location, --after/before
  query_audio     Audio transcripts for videos   --video-ids  --after/before
  query_visual    Visual transcripts for videos  --video-ids  --after/before
  query_keyframe  Keyframes for videos           --video-ids  --after/before

Portrait query types (list/filter)

  Type           What it returns                         Supports
  traits         Personality traits, hobbies, interests  --person
  events         Events with participants                --person, --after/before, --event-ids
  relationships  How user relates to people              --person
  speeches       What people said                        --person, --event-ids

Portrait search types (semantic, require --query)

  Type           What it searches   Supports
  search_events  Events by meaning  --person, --after/before
  search_traits  Traits by meaning

