Threadline — Persistent Memory and Context Layer for AI Agents

Your AI agents start from zero, every session. Users repeat themselves: their stack, their preferences, their ongoing projects. Threadline fixes this in two lines: inject() before your LLM call, update() after. Your agent now remembers every user, forever. You get structured extraction across 7 scopes, an OAuth-style grant system so agents only see permitted context, user-owned data with a deletion dashboard, and relevance-scored injection (200 tokens of signal, not 2,000 tokens of noise). Works with OpenAI, Anthropic, Vercel AI SDK, and LangChain. Retrieval in under 50 ms. GDPR-ready by design. Free tier: 2,500 calls/month.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill:

Install skill "Threadline — Persistent Memory and Context Layer for AI Agents" with this command: npx skills add vidursharma202-del/threadline

Threadline — Persistent Memory and Context Layer for AI Agents

Threadline fixes agent amnesia. Two lines of code give your AI agent persistent memory across every user session — with full developer control over what's stored, who sees it, and how it's used.

When to use this skill

Use Threadline whenever the user:

  • Is building an AI agent, chatbot, or assistant
  • Mentions agents forgetting context between sessions
  • Wants to personalise agent responses per user
  • Is using OpenAI, Anthropic, Vercel AI SDK, or LangChain
  • Mentions "context rot", "session reset", "agent memory", or "persistent context"

What Threadline does

Threadline gives your agent a persistent context layer — preferences, tasks, communication style, domain expertise, and key relationships — that travels across every conversation. Agents stop starting from zero.

Relevance-scored injection means your agent gets 200 tokens of signal, not 2,000 tokens of noise. No prompt bloat at scale.

Integration (2 lines)

import { ThreadlineClient } from 'threadline-sdk';

const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

// Before your LLM call — inject user context into system prompt
const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

// After your LLM response — extract and store new facts
await tl.update({ userId, userMessage, agentResponse });

OpenAI example

import OpenAI from 'openai';
import { ThreadlineClient } from 'threadline-sdk';

const openai = new OpenAI();
const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  messages: [
    { role: "system", content: systemPrompt },
    { role: "user", content: userMessage }
  ]
});

await tl.update({ userId, userMessage, agentResponse: response.choices[0].message.content });

Anthropic example

import Anthropic from '@anthropic-ai/sdk';
import { ThreadlineClient } from 'threadline-sdk';

const anthropic = new Anthropic();
const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const response = await anthropic.messages.create({
  model: "claude-opus-4-6",
  max_tokens: 1024,
  system: systemPrompt,
  messages: [{ role: "user", content: userMessage }]
});

await tl.update({ userId, userMessage, agentResponse: response.content[0].text });

Vercel AI SDK example

import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';
import { ThreadlineClient } from 'threadline-sdk';

const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const result = await streamText({
  model: openai('gpt-4o'),
  system: systemPrompt,
  messages,
  onFinish: async ({ text }) => {
    await tl.update({ userId, userMessage, agentResponse: text });
  }
});

LangChain example

import { ChatOpenAI } from '@langchain/openai';
import { SystemMessage, HumanMessage } from '@langchain/core/messages';
import { ThreadlineClient } from 'threadline-sdk';

const tl = new ThreadlineClient({ apiKey: process.env.THREADLINE_API_KEY });
const llm = new ChatOpenAI({ model: "gpt-4o" });

const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");

const response = await llm.invoke([
  new SystemMessage(systemPrompt),
  new HumanMessage(userMessage)
]);

await tl.update({ userId, userMessage, agentResponse: response.content });

7 context scopes

Threadline extracts and stores context across 7 scopes:

| Scope | What it captures |
| --- | --- |
| communication_style | Tone, verbosity, format preferences |
| ongoing_tasks | Active projects, deadlines, blockers |
| key_relationships | Team members, clients, collaborators |
| domain_expertise | Tech stack, industry knowledge, skills |
| preferences | Tools, workflows, working style |
| emotional_state | Stress signals, motivation, sentiment |
| general | Everything else worth remembering |
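For TypeScript projects, the 7 scope names above can be captured as a literal union so scope strings are checked at compile time. This is an illustrative sketch; check threadline-sdk for an official exported type:

```typescript
// The 7 Threadline scopes as a const array plus a derived union type.
// Illustrative only; the SDK may ship its own Scope type.
const SCOPES = [
  "communication_style",
  "ongoing_tasks",
  "key_relationships",
  "domain_expertise",
  "preferences",
  "emotional_state",
  "general",
] as const;

// "communication_style" | "ongoing_tasks" | ... | "general"
type Scope = (typeof SCOPES)[number];
```

A `Scope[]` parameter then rejects typos like `"prefrences"` at compile time instead of at the API boundary.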

Grant system

Agents only see the scopes they're explicitly granted. A coding assistant sees domain_expertise and ongoing_tasks. A writing assistant sees communication_style and preferences. No agent sees everything by default.

await tl.grant({
  agentId: "coding-assistant",
  userId: userId,
  scopes: ["domain_expertise", "ongoing_tasks"]
});

Rules

  • Always call inject() before the LLM call, never after
  • Always call update() after receiving the agent response
  • Use a stable, consistent userId — this is how context is scoped per user
  • Do not log or expose the enriched system prompt — it contains user context
  • Context is user-owned — users can view and delete via threadline.to/dashboard
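The rules above can be enforced in one place by wrapping the inject → LLM → update flow in a helper. A minimal sketch, assuming nothing beyond the inject()/update() signatures shown earlier; withMemory and callLLM are illustrative names, not part of the SDK:

```typescript
// Enforces the inject-before / update-after ordering from the rules above.
// tl is typed structurally, so any client exposing inject/update fits.
interface MemoryClient {
  inject(userId: string, basePrompt: string): Promise<string>;
  update(args: { userId: string; userMessage: string; agentResponse: string }): Promise<void>;
}

async function withMemory(
  tl: MemoryClient,
  userId: string,
  userMessage: string,
  callLLM: (systemPrompt: string, userMessage: string) => Promise<string>
): Promise<string> {
  // 1. Inject stored context before the LLM call.
  const systemPrompt = await tl.inject(userId, "You are a helpful assistant.");
  // 2. Run the model with the enriched prompt (never log systemPrompt).
  const agentResponse = await callLLM(systemPrompt, userMessage);
  // 3. Extract and store new facts after the response.
  await tl.update({ userId, userMessage, agentResponse });
  return agentResponse;
}
```

Routing every turn through one helper also guarantees the same stable userId is used for both calls.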

REST API (any language)

# Inject
POST https://api.threadline.to/api/inject
Authorization: Bearer YOUR_API_KEY
{ "userId": "user_123", "basePrompt": "You are a helpful assistant." }

# Update
POST https://api.threadline.to/api/update
Authorization: Bearer YOUR_API_KEY
{ "userId": "user_123", "userMessage": "...", "agentResponse": "..." }
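From any language, a call is just an authenticated POST. A TypeScript sketch of the /api/inject request above (requires Node 18+ for built-in fetch; the response field name is an assumption, not documented here):

```typescript
// Build and send the inject request from the REST spec above.
// Splitting "build" from "send" keeps the payload testable offline.
function buildInjectRequest(apiKey: string, userId: string, basePrompt: string) {
  return {
    url: "https://api.threadline.to/api/inject",
    init: {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ userId, basePrompt }),
    },
  };
}

async function injectViaRest(apiKey: string, userId: string, basePrompt: string): Promise<string> {
  const { url, init } = buildInjectRequest(apiKey, userId, basePrompt);
  const res = await fetch(url, init);
  if (!res.ok) throw new Error(`inject failed: ${res.status}`);
  // Assumed response shape: enriched prompt under a systemPrompt field.
  const data = (await res.json()) as { systemPrompt: string };
  return data.systemPrompt;
}
```

The /api/update call follows the same pattern with the userMessage/agentResponse body shown above.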

Troubleshooting

| Issue | Fix |
| --- | --- |
| inject() returns base prompt unchanged | Check the API key is set correctly |
| Context not persisting | Confirm update() is called after every response |
| Slow injection | Responses are Redis-cached: first call ~200 ms, subsequent calls under 50 ms |
| Wrong user context | Ensure userId is stable and unique per user |
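The first troubleshooting row can be turned into a startup check. A sketch with an illustrative helper name; note that a brand-new user with no stored context may also get the base prompt back, so treat this as a hint rather than proof of a bad key:

```typescript
// Warn at startup if injection looks like a no-op, which per the
// troubleshooting table usually means a missing or invalid API key.
async function verifyInjection(
  tl: { inject(userId: string, basePrompt: string): Promise<string> },
  userId: string
): Promise<boolean> {
  const base = "You are a helpful assistant.";
  const enriched = await tl.inject(userId, base);
  const ok = enriched !== base;
  if (!ok) {
    console.warn("Threadline inject() returned the base prompt unchanged; check THREADLINE_API_KEY.");
  }
  return ok;
}
```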

