agent-memory-layer

Scalable memory system for AI agents with short-term, long-term, and episodic memory. Use when building agent memory persistence, conversation context management, knowledge retrieval, or episodic recall. Covers Redis-backed short-term memory, vector-based long-term memory, and timeline-ordered episodic memory with decay and consolidation.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

To install, copy this command and send it to your AI assistant:

npx skills add evezart/agent-memory-layer

Agent Memory Layer

Three-tier memory system for AI agents: short-term, long-term, and episodic.

Quick Start

from memory_layer import AgentMemory

mem = AgentMemory(agent_id="my-agent")

# Write to each tier
mem.short_term.add("User prefers dark mode", priority=0.8)
mem.long_term.store("Project uses React + TypeScript", tags=["tech", "project"])
mem.episodic.record("Debugged auth bug", outcome="success", duration_min=15)

# Recall
context = mem.short_term.recall(limit=10)                 # recent working context
relevant = mem.long_term.search("frontend framework")     # semantic search
similar = mem.episodic.find_similar("debugging session")  # past experiences

Architecture

┌─────────────────────────────────────────┐
│            Agent Memory                 │
├───────────┬───────────┬─────────────────┤
│ Short-Term│ Long-Term │   Episodic      │
│ (Redis)   │ (Vectors) │  (Timeline)     │
│ TTL: 1hr  │ Permanent │ Decay: 30d      │
│ Hot cache │ Semantic  │ Consolidated    │
└───────────┴───────────┴─────────────────┘

Memory Tiers

Short-Term (Working Memory)

  • Recent context, active conversation, current task state
  • TTL-based expiry (default 1 hour)
  • Priority-weighted retention
  • See references/short-term.md
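The TTL-plus-priority behavior above can be sketched in plain Python. This is an in-memory stand-in for the Redis-backed tier; the class and method names are illustrative, not the skill's actual API:

```python
import heapq
import time

class ShortTermMemory:
    """In-memory sketch of TTL expiry plus priority-weighted recall.
    The real tier is Redis-backed; names here are illustrative."""

    def __init__(self, ttl_seconds=3600):  # default 1-hour TTL
        self.ttl = ttl_seconds
        self.items = []  # (timestamp, priority, text)

    def add(self, text, priority=0.5):
        self.items.append((time.time(), priority, text))

    def recall(self, limit=10):
        now = time.time()
        # Drop expired entries, then keep the highest-priority survivors.
        live = [(p, t, s) for (t, p, s) in self.items if now - t < self.ttl]
        return [s for _, _, s in heapq.nlargest(limit, live)]
```

In the real system, Redis `EXPIRE` handles the TTL and a sorted set can hold the priority scores; the semantics are the same.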

Long-Term (Knowledge)

  • Persistent facts, preferences, learned patterns
  • Vector similarity search for retrieval
  • Tags and metadata for filtering
  • See references/long-term.md
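A minimal sketch of tag-filtered similarity search, using a bag-of-words counter as a stand-in for a real embedding model (all names are illustrative, not the skill's API):

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: bag-of-words term counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class LongTermMemory:
    """Sketch of vector search with tag filtering."""

    def __init__(self):
        self.entries = []  # (vector, text, tags)

    def store(self, text, tags=()):
        self.entries.append((embed(text), text, set(tags)))

    def search(self, query, limit=3, tag=None):
        qv = embed(query)
        hits = [(cosine(qv, v), text) for v, text, tags in self.entries
                if tag is None or tag in tags]
        hits.sort(key=lambda h: -h[0])
        return [text for score, text in hits[:limit] if score > 0]
```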

Episodic (Experience)

  • Timeline-ordered events with outcomes
  • Decay function reduces old episode weight
  • Consolidation moves recurring patterns to long-term
  • See references/episodic.md
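The decay can be sketched as exponential weighting by age. The half-life below is an assumption chosen to fit the 30-day window; the skill's exact curve may differ:

```python
import time

DECAY_DAYS = 30  # matches the 30-day decay window above

def episode_weight(recorded_at, now=None, half_life_days=DECAY_DAYS / 2):
    """Exponential decay: an episode's weight halves every half_life_days.
    Illustrative formula, not necessarily the skill's exact curve."""
    if now is None:
        now = time.time()
    age_days = (now - recorded_at) / 86400
    return 0.5 ** (age_days / half_life_days)
```

With these defaults a fresh episode has weight 1.0, a 15-day-old episode 0.5, and a 30-day-old episode 0.25, so old experiences fade without being deleted outright.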

Consolidation

Episodic memories that recur are automatically promoted to long-term:

  • If the same outcome occurs 3+ times → store as learned pattern
  • Failed approaches get negative weight in long-term
  • See scripts/consolidate.py
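The promotion rule above can be sketched as a recurrence count over (action, outcome) pairs. This is a simplified stand-in for scripts/consolidate.py, with illustrative field names:

```python
from collections import Counter

def consolidate(episodes, long_term, threshold=3):
    """Promote recurring (action, outcome) pairs to long-term memory.
    Failed approaches are stored with negative weight. Sketch only;
    see scripts/consolidate.py for the real implementation."""
    counts = Counter((e["action"], e["outcome"]) for e in episodes)
    for (action, outcome), n in counts.items():
        if n >= threshold:
            weight = 1.0 if outcome == "success" else -1.0
            long_term.append({"pattern": action, "outcome": outcome,
                              "weight": weight, "occurrences": n})
    return long_term
```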

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

V19 Trust Manifesto

Agent Community cognitive-governance protocol, public trusted declaration v1.5.0. V8.6 Agent OS's three ultimate protocols (ITE intent-transaction engine with three-stage translation / ASM environment-state monitoring with EventBus auto-response / Dual-Track Consensus with ConflictSet-driven evolution), plus alignment with six academic frameworks (MIA/AIGA/GCL/SCF...

NEXO Brain

Cognitive memory system for AI agents — Atkinson-Shiffrin memory model, semantic RAG, trust scoring, and metacognitive error prevention. Gives your agent per...

Growth Engineer

Growth Engineer for mobile apps and agent runtimes including OpenClaw and Hermes. Correlate analytics, crashes, billing, feedback, store signals, and repo co...

Agent News

Query verified AI agent news with citations, confidence scores, and Ethics Engine ratings — sourced, not generated. Use instead of generic web search for any...