# getmem.ai Memory Skill

Persistent memory for your AI agent via getmem.ai.
## Setup

Set your API key in the environment:

```bash
export GETMEM_API_KEY=gm_live_YOUR_KEY_HERE
```
Get your key at https://platform.getmem.ai — $20 free credit on signup.
## Usage

```python
import os

import getmem_ai as getmem

mem = getmem.init(os.environ["GETMEM_API_KEY"])

# Before each LLM call — get relevant memory context
result = mem.get(user_id, query=user_message)
context = result["context"]  # inject into system prompt

# After each turn — save both user and assistant messages
mem.ingest(user_id, messages=[
    {"role": "user", "content": user_message},
    {"role": "assistant", "content": reply},
])
```
## How it works

- `mem.get()` fetches only the relevant memories for the current query (semantic search)
- Context is injected into your system prompt — typically 200-800 tokens
- `mem.ingest()` saves the full conversation exchange asynchronously
- Memory persists indefinitely — no TTL, no purge
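The get/ingest flow above can be illustrated with a self-contained mock (this is not the real getmem.ai client; it uses simple word overlap where the service uses semantic search):

```python
# Minimal in-memory mock of the get/ingest pattern (illustration only;
# the real getmem.ai service does semantic search, not word overlap).
class MockMemory:
    def __init__(self):
        self.store = {}  # user_id -> list of saved messages

    def ingest(self, user_id, messages):
        # Append the turn's messages to this user's memory
        self.store.setdefault(user_id, []).extend(messages)

    def get(self, user_id, query):
        # Return only messages that share at least one word with the query
        words = set(query.lower().split())
        relevant = [
            m for m in self.store.get(user_id, [])
            if words & set(m["content"].lower().split())
        ]
        return {"context": "\n".join(m["content"] for m in relevant)}

mem = MockMemory()
mem.ingest("u1", messages=[
    {"role": "user", "content": "I prefer dark roast coffee"},
    {"role": "assistant", "content": "Noted, dark roast it is."},
])
result = mem.get("u1", query="what coffee do I like?")
print(result["context"])  # → I prefer dark roast coffee
```

The point of the sketch is the shape of the loop: ingest after every turn, retrieve a small query-relevant slice before the next LLM call.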
## Token savings

Standard approach: sending the full conversation history on every turn costs 10,000-40,000 tokens. With getmem, only the relevant context is sent: 200-800 tokens. That saves up to 95% on context tokens.
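As a quick sanity check on those figures (a sketch using the quoted ranges, not measured data):

```python
def context_savings(full_history_tokens, memory_context_tokens):
    """Fraction of context tokens saved by sending retrieved memory
    context instead of the full conversation history."""
    return 1 - memory_context_tokens / full_history_tokens

# Smallest quoted history with largest quoted context
print(f"{context_savings(10_000, 800):.0%}")   # 92%
# Largest quoted history with smallest quoted context
print(f"{context_savings(40_000, 200):.1%}")   # 99.5%
```

The quoted ranges bracket roughly 92-99.5% savings, consistent with the "up to 95%" figure above for typical cases.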
## Links
- Website: https://getmem.ai
- Platform: https://platform.getmem.ai
- PyPI: https://pypi.org/project/getmem-ai/
- npm: https://npmjs.com/package/getmem
- OpenClaw plugin: clawhub:getmem-openclaw