Glove Framework — Development Guide

You are an expert on the Glove framework. Use this knowledge when writing, debugging, or reviewing Glove code.

What Glove Is

Glove is an open-source TypeScript framework for building AI-powered applications. Users describe what they want in conversation, and an AI decides which capabilities (tools) to invoke. Developers define tools and renderers; Glove handles the agent loop.

Repository: https://github.com/porkytheblack/glove
Docs site: https://glove.dterminal.net
License: MIT (dterminal)

Package Overview

| Package | Purpose | Install |
|---|---|---|
| glove-core | Runtime engine: agent loop, tool execution, display manager, model adapters, stores | pnpm add glove-core |
| glove-react | React hooks (useGlove), GloveClient, GloveProvider, defineTool, <Render>, MemoryStore, ToolConfig with colocated renderers | pnpm add glove-react |
| glove-next | One-line Next.js API route handler (createChatHandler) for streaming SSE | pnpm add glove-next |

Most projects need just glove-react + glove-next. glove-core is included as a dependency of glove-react. For server-side or non-React agents, use glove-core directly — see Server-Side Agents below.

Architecture at a Glance

```
User message → Agent Loop → Model decides tool calls → Execute tools → Feed results back → Loop until done
                     ↓
     Display Stack (pushAndWait / pushAndForget)
                     ↓
          React renders UI slots
```
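The loop above can be sketched in plain TypeScript. This is a minimal sketch of the control flow only, not Glove's actual implementation; callModel and runTool are hypothetical stand-ins for the model adapter and tool registry:

```typescript
type ToolCall = { name: string; input: unknown };
type ModelTurn = { text?: string; toolCalls: ToolCall[] };

// Minimal agent-loop shape: keep calling the model, executing any tool
// calls it requests and feeding results back, until it answers in text.
async function agentLoop(
  callModel: (history: unknown[]) => Promise<ModelTurn>,
  runTool: (call: ToolCall) => Promise<unknown>,
): Promise<string> {
  const history: unknown[] = [];
  for (;;) {
    const turn = await callModel(history);
    if (turn.toolCalls.length === 0) return turn.text ?? "";
    for (const call of turn.toolCalls) {
      history.push({ tool: call.name, result: await runTool(call) });
    }
  }
}
```

The real loop also streams text, manages the display stack, and tracks tokens; this only shows the "decide → execute → feed back" cycle.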

Core Concepts

  • Agent — AI coordinator that replaces router/navigation logic. Reads tools, decides which to call.

  • Tool — A capability: name, description, inputSchema (Zod), do function, optional render and renderResult.
  • Display Stack — Stack of UI slots tools push onto. pushAndWait blocks tool; pushAndForget doesn't.

  • Display Strategy — Controls slot visibility lifecycle: "stay" , "hide-on-complete" , "hide-on-new" .

  • renderData — Client-only data returned from do() that is NOT sent to the AI model. Used by renderResult for history rendering.

  • Adapter — Pluggable interfaces for Model, Store, DisplayManager, and Subscriber. Swap providers without changing app code.

  • Context Compaction — Auto-summarizes long conversations to stay within context window limits. The store preserves full message history (so frontends can display the entire chat), while Context.getMessages() splits at the last compaction summary so the model only sees post-compaction context. Summary messages are marked with is_compaction: true .
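The compaction split can be pictured like this — a sketch assuming a minimal Message shape, not the actual Context implementation. Everything before the last is_compaction summary is hidden from the model, while the store keeps the full array:

```typescript
type Message = { role: "user" | "assistant" | "system"; text: string; is_compaction?: boolean };

// The model sees only messages from the last compaction summary onward.
function messagesForModel(history: Message[]): Message[] {
  for (let i = history.length - 1; i >= 0; i--) {
    if (history[i].is_compaction) return history.slice(i);
  }
  return history; // never compacted — send everything
}
```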

Quick Start (Next.js)

  1. Install

```bash
pnpm add glove-core glove-react glove-next zod
```

  2. Server route

```typescript
// app/api/chat/route.ts
import { createChatHandler } from "glove-next";

export const POST = createChatHandler({
  provider: "anthropic", // or "openai", "openrouter", "gemini", etc.
  model: "claude-sonnet-4-20250514",
});
```

Set ANTHROPIC_API_KEY (or OPENAI_API_KEY , etc.) in .env.local .

  3. Define tools with defineTool

```tsx
// lib/glove.tsx
import { GloveClient, defineTool } from "glove-react";
import type { ToolConfig } from "glove-react";
import { z } from "zod";

const inputSchema = z.object({
  question: z.string().describe("The question to display"),
  options: z.array(z.object({
    label: z.string().describe("Display text"),
    value: z.string().describe("Value returned when selected"),
  })),
});

const askPreferenceTool = defineTool({
  name: "ask_preference",
  description: "Present options for the user to choose from.",
  inputSchema,
  displayPropsSchema: inputSchema,     // Zod schema for display props
  resolveSchema: z.string(),           // Zod schema for resolve value
  displayStrategy: "hide-on-complete", // Hide slot after user responds
  async do(input, display) {
    const selected = await display.pushAndWait(input); // typed!
    return {
      status: "success" as const,
      data: `User selected: ${selected}`,                  // sent to AI
      renderData: { question: input.question, selected },  // client-only
    };
  },
  render({ props, resolve }) { // typed props, typed resolve
    return (
      <div>
        <p>{props.question}</p>
        {props.options.map(opt => (
          <button key={opt.value} onClick={() => resolve(opt.value)}>
            {opt.label}
          </button>
        ))}
      </div>
    );
  },
  renderResult({ data }) { // renders from history
    const { question, selected } = data as { question: string; selected: string };
    return <div><p>{question}</p><span>Selected: {selected}</span></div>;
  },
});

// Tools without display stay as raw ToolConfig
const getDateTool: ToolConfig = {
  name: "get_date",
  description: "Get today's date",
  inputSchema: z.object({}),
  async do() {
    return { status: "success", data: new Date().toLocaleDateString() };
  },
};

export const gloveClient = new GloveClient({
  endpoint: "/api/chat",
  systemPrompt: "You are a helpful assistant.",
  tools: [askPreferenceTool, getDateTool],
});
```

  4. Provider + Render

```tsx
// app/providers.tsx
"use client";
import { GloveProvider } from "glove-react";
import { gloveClient } from "@/lib/glove";

export function Providers({ children }: { children: React.ReactNode }) {
  return <GloveProvider client={gloveClient}>{children}</GloveProvider>;
}
```

```tsx
// app/page.tsx — using <Render> component
"use client";
import { useGlove, Render } from "glove-react";

export default function Chat() {
  const glove = useGlove();

  return (
    <Render
      glove={glove}
      strategy="interleaved"
      renderMessage={({ entry }) => (
        <div><strong>{entry.kind === "user" ? "You" : "AI"}:</strong> {entry.text}</div>
      )}
      renderStreaming={({ text }) => <div style={{ opacity: 0.7 }}>{text}</div>}
    />
  );
}
```

Or use useGlove() directly for full manual control:

```tsx
// app/page.tsx — manual rendering
"use client";
import { useState } from "react";
import { useGlove } from "glove-react";

export default function Chat() {
  const { timeline, streamingText, busy, slots, sendMessage, renderSlot, renderToolResult } = useGlove();
  const [input, setInput] = useState("");

  return (
    <div>
      {timeline.map((entry, i) => (
        <div key={i}>
          {entry.kind === "user" && <p><strong>You:</strong> {entry.text}</p>}
          {entry.kind === "agent_text" && <p><strong>AI:</strong> {entry.text}</p>}
          {entry.kind === "tool" && (
            <>
              <p>Tool: {entry.name} — {entry.status}</p>
              {entry.renderData !== undefined && renderToolResult(entry)}
            </>
          )}
        </div>
      ))}
      {streamingText && <p style={{ opacity: 0.7 }}>{streamingText}</p>}
      {slots.map(renderSlot)}
      <form onSubmit={(e) => { e.preventDefault(); sendMessage(input.trim()); setInput(""); }}>
        <input value={input} onChange={(e) => setInput(e.target.value)} disabled={busy} />
        <button type="submit" disabled={busy}>Send</button>
      </form>
    </div>
  );
}
```

Server-Side Agents

For CLI tools, backend services, WebSocket servers, or any non-browser environment, use glove-core directly. No React, Next.js, or browser required.

Minimal Setup

```typescript
import { Glove, Displaymanager, createAdapter } from "glove-core";
import z from "zod";

// In-memory store (see MemoryStore below) or SqliteStore for persistence
const store = new MemoryStore("my-session");

const agent = new Glove({
  store,
  model: createAdapter({ provider: "anthropic", stream: true }),
  displayManager: new Displaymanager(), // required but can be empty
  systemPrompt: "You are a helpful assistant.",
  compaction_config: {
    compaction_instructions: "Summarize the conversation.",
  },
})
  .fold({
    name: "search",
    description: "Search the database.",
    inputSchema: z.object({ query: z.string() }),
    async do(input) {
      const results = await db.search(input.query); // db is your own data layer
      return { status: "success", data: results };
    },
  })
  .build();

const result = await agent.processRequest("Find recent orders");
console.log(result.messages[0]?.text);
```

Minimal MemoryStore

```typescript
import type { StoreAdapter, Message } from "glove-core";

class MemoryStore implements StoreAdapter {
  identifier: string;
  private messages: Message[] = [];
  private tokenCount = 0;
  private turnCount = 0;

  constructor(id: string) { this.identifier = id; }

  async getMessages() { return this.messages; }
  async appendMessages(msgs: Message[]) { this.messages.push(...msgs); }
  async getTokenCount() { return this.tokenCount; }
  async addTokens(count: number) { this.tokenCount += count; }
  async getTurnCount() { return this.turnCount; }
  async incrementTurn() { this.turnCount++; }
  async resetCounters() { this.tokenCount = 0; this.turnCount = 0; }
}
```

For persistent storage: new SqliteStore({ dbPath: "./agent.db", sessionId: "abc" }) .

Key Differences from React

| React (glove-react) | Server-side (glove-core) |
|---|---|
| defineTool with render/renderResult | .fold() with just do — no renderers needed |
| useGlove() hook manages state | Call agent.processRequest() directly |
| GloveClient + GloveProvider | new Glove({...}).build() |
| createEndpointModel (SSE client) | createAdapter() or direct adapter (e.g. new AnthropicAdapter()) |
| MemoryStore from glove-react | Implement StoreAdapter yourself or use SqliteStore |

Tools Without Display

Most server-side tools ignore the display manager — just return a result:

```typescript
gloveBuilder.fold({
  name: "get_weather",
  description: "Get weather for a city.",
  inputSchema: z.object({ city: z.string() }),
  async do(input) {
    const res = await fetch(`https://wttr.in/${input.city}?format=j1`);
    return { status: "success", data: await res.json() };
  },
});
```

Returning a plain string also works — auto-wrapped to { status: "success", data: yourString } .
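The auto-wrap behavior amounts to this — an illustrative sketch, not Glove's internal code:

```typescript
type ToolResultData = {
  status: "success" | "error";
  data: unknown;
  message?: string;
  renderData?: unknown;
};

// A plain string return is promoted to a success result;
// structured results pass through untouched.
function normalizeToolResult(result: string | ToolResultData): ToolResultData {
  return typeof result === "string" ? { status: "success", data: result } : result;
}
```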

Interactive Tools (pushAndWait)

When a tool calls display.pushAndWait() , the agent loop blocks until dm.resolve(slotId, value) is called. Wire this to your UI layer (WebSocket, terminal, Slack, etc.):

```typescript
// Tool side
async do(input, display) {
  const confirmed = await display.pushAndWait({
    renderer: "confirm",
    input: { message: `Delete ${input.file}?` },
  });
  if (!confirmed) return { status: "error", data: null, message: "Cancelled" };
  // proceed...
}
```

```typescript
// Server side — resolve when user responds
dm.resolve(slotId, true);
```
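Mechanically, pushAndWait parks a promise resolver keyed by slot id, and dm.resolve(slotId, value) settles it. A self-contained sketch of that mechanism (assumed shape, not the real Displaymanager):

```typescript
// Each pushAndWait parks a resolver; resolve(slotId, value) settles it,
// unblocking the tool's `do` function.
class MiniDisplayManager {
  private pending = new Map<string, (value: unknown) => void>();
  private nextId = 0;

  pushAndWait(input: unknown): { slotId: string; value: Promise<unknown> } {
    const slotId = String(this.nextId++);
    const value = new Promise<unknown>((res) => this.pending.set(slotId, res));
    return { slotId, value };
  }

  resolve(slotId: string, value: unknown): void {
    this.pending.get(slotId)?.(value);
    this.pending.delete(slotId);
  }
}
```

Whatever transport you wire (WebSocket, terminal, Slack) only needs to get the user's answer back to that resolve call.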

Subscribers (Logging, Forwarding)

```typescript
import type { SubscriberAdapter } from "glove-core";

const logger: SubscriberAdapter = {
  async record(event_type, data) {
    if (event_type === "text_delta") process.stdout.write((data as any).text);
    if (event_type === "tool_use") console.log(`\n[tool] ${(data as any).name}`);
  },
};

gloveBuilder.addSubscriber(logger);
```

Common Patterns

  • CLI script: Build agent, call processRequest() , print result

  • Multi-turn REPL: Loop with readline, each processRequest() accumulates in the store

  • WebSocket server: Per-connection session with isolated store/dm/subscriber, forward events via record()

  • Background worker: Build agent per job, process from a queue, no display needed

  • Hot-swap model: Call agent.setModel(newAdapter) at runtime

Optional Store Features

  • Tasks (getTasks , addTasks , updateTask ): Auto-registers glove_update_tasks tool

  • Permissions (getPermission , setPermission ): Tools with requiresPermission: true check consent

If your store doesn't implement these, they're silently disabled.
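The silent-disable behavior implies simple capability detection on the store. A sketch of the assumed mechanism, for illustration only:

```typescript
type MaybeTaskStore = {
  getTasks?: () => Promise<unknown[]>;
  addTasks?: (tasks: unknown[]) => Promise<void>;
  updateTask?: (id: string, patch: unknown) => Promise<void>;
};

// Only register the task tool when the store implements the optional methods.
function supportsTasks(store: MaybeTaskStore): boolean {
  return typeof store.getTasks === "function" &&
    typeof store.addTasks === "function" &&
    typeof store.updateTask === "function";
}
```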

Display Stack Patterns

pushAndForget — Show results (non-blocking)

```tsx
async do(input, display) {
  const data = await fetchData(input);
  await display.pushAndForget({ input: data }); // Shows UI, tool continues
  return { status: "success", data: "Displayed results", renderData: data };
},
render({ data }) {
  return <Card>{data.title}</Card>;
},
renderResult({ data }) {
  return <Card>{(data as any).title}</Card>; // Same card from history
},
```

pushAndWait — Collect user input (blocking)

```tsx
async do(input, display) {
  const confirmed = await display.pushAndWait({ input }); // Pauses until user responds
  return {
    status: "success",
    data: confirmed ? "Confirmed" : "Cancelled",
    renderData: { confirmed },
  };
},
render({ data, resolve }) {
  return (
    <div>
      <p>{data.message}</p>
      <button onClick={() => resolve(true)}>Yes</button>
      <button onClick={() => resolve(false)}>No</button>
    </div>
  );
},
renderResult({ data }) {
  const { confirmed } = data as { confirmed: boolean };
  return <div>{confirmed ? "Confirmed" : "Cancelled"}</div>;
},
```

Display Strategies

| Strategy | Behavior | Use for |
|---|---|---|
| "stay" (default) | Slot always visible | Info cards, results |
| "hide-on-complete" | Hidden when slot is resolved | Forms, confirmations, pickers |
| "hide-on-new" | Hidden when newer slot from same tool appears | Cart summaries, status panels |

SlotRenderProps

| Prop | Type | Description |
|---|---|---|
| data | T | Input passed to pushAndWait/pushAndForget |
| resolve | (value: unknown) => void | Resolves the slot. For pushAndWait, the value returns to do. For pushAndForget, use resolve() or removeSlot(id) to dismiss. |
| reject | (reason?: string) => void | Rejects the slot. For pushAndWait, this causes the promise to reject. Use for cancellation flows. |

Tool Definition

defineTool (recommended for tools with UI)

```typescript
import { defineTool } from "glove-react";

const tool = defineTool({
  name: string,
  description: string,
  inputSchema: z.ZodType,         // Zod schema for tool input
  displayPropsSchema?: z.ZodType, // Zod schema for display props (recommended for tools with UI)
  resolveSchema?: z.ZodType,      // Zod schema for resolve value (omit for pushAndForget-only)
  displayStrategy?: SlotDisplayStrategy,
  requiresPermission?: boolean,
  unAbortable?: boolean,          // Tool runs to completion even if abort signal fires (e.g. voice barge-in)
  do(input, display): Promise<ToolResultData>, // display is TypedDisplay<D, R>
  render?({ props, resolve, reject }): ReactNode,
  renderResult?({ data, output, status }): ReactNode,
});
```

Key points:

  • do() should return { status, data, renderData } — data goes to model, renderData stays client-only

  • render() gets typed props (matching displayPropsSchema) and typed resolve (matching resolveSchema)

  • renderResult() receives renderData for showing read-only views from history

  • displayPropsSchema is optional but recommended — tools without display should use raw ToolConfig

ToolConfig (for tools without UI or manual control)

```typescript
interface ToolConfig<I = any> {
  name: string;
  description: string;
  inputSchema: z.ZodType<I>;
  do: (input: I, display: ToolDisplay) => Promise<ToolResultData>;
  render?: (props: SlotRenderProps) => ReactNode;
  renderResult?: (props: ToolResultRenderProps) => ReactNode;
  displayStrategy?: SlotDisplayStrategy;
  requiresPermission?: boolean;
  unAbortable?: boolean;
}
```

ToolResultData

```typescript
interface ToolResultData {
  status: "success" | "error";
  data: unknown;        // Sent to the AI model
  message?: string;     // Error message (for status: "error")
  renderData?: unknown; // Client-only — NOT sent to model, used by renderResult
}
```

Important: Model adapters explicitly strip renderData before sending to the AI. This makes it safe to store sensitive client-only data (e.g., email addresses, UI state) in renderData .
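The stripping step is essentially this (an illustrative sketch of the behavior, not the adapters' actual code):

```typescript
type ToolResultData = {
  status: "success" | "error";
  data: unknown;
  message?: string;
  renderData?: unknown;
};

// Drop the client-only field before the result is formatted for the model.
function resultForModel(result: ToolResultData): Omit<ToolResultData, "renderData"> {
  const { renderData, ...rest } = result;
  return rest;
}
```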

<Render> Component

Headless render component that replaces manual timeline rendering:

```tsx
import { Render } from "glove-react";

<Render
  glove={gloveHandle}    // return value of useGlove()
  strategy="interleaved" // "interleaved" | "slots-before" | "slots-after" | "slots-only"
  renderMessage={({ entry, index, isLast }) => ...}
  renderToolStatus={({ entry, index, hasSlot }) => ...}
  renderStreaming={({ text }) => ...}
  renderInput={({ send, busy, abort }) => ...}
  renderSlotContainer={({ slots, renderSlot }) => ...}
  as="div"               // wrapper element
  className="chat"
/>
```

Features:

  • Automatic slot visibility based on displayStrategy

  • Automatic renderResult rendering for completed tools with renderData

  • Interleaving: slots appear inline next to their tool call

  • Sensible defaults for all render props

GloveHandle Interface

The interface consumed by <Render> , returned by useGlove() :

```typescript
interface GloveHandle {
  timeline: TimelineEntry[];
  streamingText: string;
  busy: boolean;
  slots: EnhancedSlot[];
  sendMessage: (text: string, images?: { data: string; media_type: string }[]) => void;
  abort: () => void;
  renderSlot: (slot: EnhancedSlot) => ReactNode;
  renderToolResult: (entry: ToolEntry) => ReactNode;
  resolveSlot: (slotId: string, value: unknown) => void;
  rejectSlot: (slotId: string, reason?: string) => void;
}
```

useGlove Hook Return

| Property | Type | Description |
|---|---|---|
| timeline | TimelineEntry[] | Messages + tool calls |
| streamingText | string | Current streaming buffer |
| busy | boolean | Agent is processing |
| isCompacting | boolean | Context compaction in progress (driven by compaction_start/compaction_end events) |
| slots | EnhancedSlot[] | Active display stack with metadata |
| tasks | Task[] | Agent task list |
| stats | GloveStats | { turns, tokens_in, tokens_out } |
| sendMessage(text, images?) | void | Send user message |
| abort() | void | Cancel current request |
| renderSlot(slot) | ReactNode | Render a display slot |
| renderToolResult(entry) | ReactNode | Render a tool result from history |
| resolveSlot(id, value) | void | Resolve a pushAndWait slot |
| rejectSlot(id, reason?) | void | Reject a pushAndWait slot |

TimelineEntry

```typescript
type TimelineEntry =
  | { kind: "user"; text: string; images?: string[] }
  | { kind: "agent_text"; text: string }
  | { kind: "tool"; id: string; name: string; input: unknown;
      status: "running" | "success" | "error"; output?: string; renderData?: unknown };

type ToolEntry = Extract<TimelineEntry, { kind: "tool" }>;
```

Supported Providers

| Provider | Env Variable | Default Model | SDK Format |
|---|---|---|---|
| openai | OPENAI_API_KEY | gpt-4.1 | openai |
| anthropic | ANTHROPIC_API_KEY | claude-sonnet-4-20250514 | anthropic |
| openrouter | OPENROUTER_API_KEY | anthropic/claude-sonnet-4 | openai |
| gemini | GEMINI_API_KEY | gemini-2.5-flash | openai |
| minimax | MINIMAX_API_KEY | MiniMax-M2.5 | openai |
| kimi | MOONSHOT_API_KEY | kimi-k2.5 | openai |
| glm | ZHIPUAI_API_KEY | glm-4-plus | openai |

Pre-built Tool Registry

Available at https://glove.dterminal.net/tools — copy-paste into your project:

  • confirm_action — Yes/No confirmation dialog

  • collect_form — Multi-field form

  • ask_preference — Single-select preference picker

  • text_input — Free-text input

  • show_info_card — Info/success/warning card (pushAndForget)

  • suggest_options — Multiple-choice suggestions

  • approve_plan — Step-by-step plan approval

Voice Integration (glove-voice )

Package Overview

| Package | Purpose | Install |
|---|---|---|
| glove-voice | Voice pipeline: GloveVoice, adapters (STT/TTS/VAD), AudioCapture, AudioPlayer | pnpm add glove-voice |
| glove-react/voice | React hooks: useGloveVoice, useGlovePTT, VoicePTTButton | Included in glove-react |
| glove-next | Token handlers: createVoiceTokenHandler (already in glove-next, no separate import) | Included in glove-next |

Architecture

Mic → VAD → STTAdapter → glove.processRequest() → TTSAdapter → Speaker

GloveVoice wraps a Glove instance with a full-duplex voice pipeline. Glove remains the intelligence layer — all tools, display stack, and context management work normally. STT and TTS are swappable adapters. Text tokens stream through a SentenceBuffer into TTS in real-time.
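A SentenceBuffer of the kind described can be sketched as follows — a simple punctuation-boundary heuristic for illustration; the real buffer's splitting rules may differ:

```typescript
// Accumulates streamed text deltas and emits complete sentences so TTS
// can start speaking before the model finishes its turn.
class SentenceBuffer {
  private buf = "";

  // Feed one delta; returns any complete sentences now available.
  push(delta: string): string[] {
    this.buf += delta;
    const out: string[] = [];
    const re = /[^.!?]*[.!?]+\s*/g;
    let consumed = 0;
    let m: RegExpExecArray | null;
    while ((m = re.exec(this.buf)) !== null) {
      out.push(m[0].trim());
      consumed = re.lastIndex;
    }
    this.buf = this.buf.slice(consumed);
    return out;
  }

  // End of turn: emit whatever trailing text never got a terminator.
  flush(): string | null {
    const rest = this.buf.trim();
    this.buf = "";
    return rest.length ? rest : null;
  }
}
```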

Quick Start (Next.js + ElevenLabs)

Step 1: Token routes — server-side handlers that exchange your API key for short-lived tokens

```typescript
// app/api/voice/stt-token/route.ts
import { createVoiceTokenHandler } from "glove-next";
export const GET = createVoiceTokenHandler({ provider: "elevenlabs", type: "stt" });
```

```typescript
// app/api/voice/tts-token/route.ts
import { createVoiceTokenHandler } from "glove-next";
export const GET = createVoiceTokenHandler({ provider: "elevenlabs", type: "tts" });
```

Set ELEVENLABS_API_KEY in .env.local .

Step 2: Client voice config

```typescript
// app/lib/voice.ts
import { createElevenLabsAdapters } from "glove-voice";

async function fetchToken(path: string): Promise<string> {
  const res = await fetch(path);
  const data = await res.json();
  return data.token;
}

export const { stt, createTTS } = createElevenLabsAdapters({
  getSTTToken: () => fetchToken("/api/voice/stt-token"),
  getTTSToken: () => fetchToken("/api/voice/tts-token"),
  voiceId: "JBFqnCBsd6RMkjVDRZzb",
});
```

Step 3: SileroVAD — dynamic import for SSR safety

```typescript
export async function createSileroVAD() {
  const { SileroVADAdapter } = await import("glove-voice/silero-vad");
  const vad = new SileroVADAdapter({
    positiveSpeechThreshold: 0.5,
    negativeSpeechThreshold: 0.35,
    wasm: { type: "cdn" },
  });
  await vad.init();
  return vad;
}
```

Step 4: React hook

```typescript
const { runnable } = useGlove({ tools, sessionId });
const voice = useGloveVoice({ runnable, voice: { stt, createTTS, vad } });

// voice.mode, voice.isActive, voice.isMuted, voice.error, voice.transcript
// voice.start(), voice.stop(), voice.interrupt(), voice.commitTurn()
// voice.mute(), voice.unmute() — gate mic audio to STT/VAD
// voice.narrate("text") — speak text via TTS without model (returns Promise)
```

startMuted Config Option

In manual (push-to-talk) mode, the pipeline now starts muted by default — no need to call mute() immediately after start() . This eliminates the race condition where audio leaks in the gap.

```typescript
// Manual mode auto-mutes — just works
await voice.start(); // already muted in manual mode

// Explicit override
const voice = useGloveVoice({
  runnable,
  voice: { stt, createTTS, turnMode: "manual", startMuted: false }, // opt out
});
```

enabled State on useGloveVoice

The hook now exposes voice.enabled — it tracks user intent (true after start(), false after stop() or pipeline death), replacing the manual useState + sync useEffect pattern:

```typescript
// Before — consumer tracks + syncs
const [voiceEnabled, setVoiceEnabled] = useState(false);
useEffect(() => {
  if (voiceEnabled && !voice.isActive) setVoiceEnabled(false);
}, [voiceEnabled, voice.isActive]);

// After — hook tracks it
voice.enabled // auto-resets on pipeline death
```

useGlovePTT Hook (Push-to-Talk)

High-level hook that encapsulates the entire PTT lifecycle. Reduces ~80 lines of boilerplate to ~5 lines:

```tsx
import { useGlovePTT } from "glove-react/voice";

const glove = useGlove({ endpoint: "/api/chat", tools });
const ptt = useGlovePTT({
  runnable: glove.runnable,
  voice: { stt, createTTS }, // turnMode forced to "manual"
  hotkey: "Space",           // default, auto-guards INPUT/TEXTAREA
  holdThreshold: 300,        // click-vs-hold discrimination (ms)
  minRecordingMs: 350,       // min audio before committing
});

// ptt.enabled     — is the pipeline active
// ptt.recording   — is the user currently holding
// ptt.processing  — is STT finalizing
// ptt.mode        — voice mode (idle/listening/thinking/speaking)
// ptt.transcript  — partial transcript while recording
// ptt.error       — last error
// ptt.toggle()    — enable/disable the pipeline
// ptt.interrupt() — barge-in
// ptt.bind        — { onPointerDown, onPointerUp, onPointerLeave }

<button {...ptt.bind}><MicIcon /></button>
```

<VoicePTTButton> Component

Headless (unstyled) component with render prop for the mic button:

```tsx
import { VoicePTTButton } from "glove-react/voice";

<VoicePTTButton ptt={ptt}>
  {({ enabled, recording, mode }) => (
    <button className={recording ? "active" : ""}>
      <MicIcon />
      {enabled && <StatusDot />}
    </button>
  )}
</VoicePTTButton>
```

Includes click-vs-hold discrimination, pointer leave safety, and aria attributes.
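The click-vs-hold discrimination reduces to timing the press against holdThreshold. An illustrative sketch, assuming the 300 ms default mentioned above:

```typescript
// A press shorter than the threshold is a click (toggle);
// a longer one is a hold (push-to-talk recording).
function classifyPress(downMs: number, upMs: number, holdThresholdMs = 300): "click" | "hold" {
  return upMs - downMs >= holdThresholdMs ? "hold" : "click";
}
```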

<Render> Voice Support

<Render> accepts an optional voice prop to auto-render transcript and voice status:

```tsx
<Render
  glove={glove}
  voice={ptt}            // or useGloveVoice() return
  renderTranscript={...} // optional custom renderer
  renderVoiceStatus={...} // optional custom renderer
  renderInput={() => null}
/>
```

Turn Modes

| Mode | Behavior | Use for |
|---|---|---|
| "vad" (default) | Auto speech detection + barge-in | Hands-free, voice-first apps |
| "manual" | Push-to-talk, explicit commitTurn() | Noisy environments, precise control |

Narration + Mic Control

  • voice.narrate(text) — Speak arbitrary text through TTS without the model. Resolves when audio finishes. Auto-mutes mic during narration. Abortable via interrupt() . Safe to call from pushAndWait tool handlers.

  • voice.mute() / voice.unmute() — Gate mic audio forwarding to STT/VAD. audio_chunk events still fire when muted (for visualization).

  • audio_chunk event — Raw Int16Array PCM from the mic, emitted even when muted. Use for waveform/level visualization.

  • Compaction silence — Voice automatically ignores text_delta during context compaction so the summary is never narrated.
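Since audio_chunk delivers raw Int16Array PCM even while muted, a mic-level meter takes only a few lines. An illustrative sketch:

```typescript
// Normalized RMS level (0..1) of one PCM chunk, suitable for driving a
// waveform or mic-level indicator.
function rmsLevel(chunk: Int16Array): number {
  if (chunk.length === 0) return 0;
  let sumSquares = 0;
  for (let i = 0; i < chunk.length; i++) {
    const s = chunk[i] / 32768; // scale 16-bit sample to [-1, 1)
    sumSquares += s * s;
  }
  return Math.sqrt(sumSquares / chunk.length);
}
```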

Voice-First Tool Design

  • Use pushAndForget instead of pushAndWait — blocking tools that wait for clicks are unusable in voice mode

  • Return descriptive text in data — the LLM reads it to formulate spoken responses

  • Add a voice-specific system prompt — instruct the agent to narrate results concisely

  • Use narrate() for slot narration — read display content aloud from within tool handlers

Supported Voice Providers

| Provider | Token Handler Config | Env Variable |
|---|---|---|
| ElevenLabs | { provider: "elevenlabs", type: "stt" \| "tts" } | ELEVENLABS_API_KEY |
| Deepgram | { provider: "deepgram" } | DEEPGRAM_API_KEY |
| Cartesia | { provider: "cartesia" } | CARTESIA_API_KEY |

Supporting Files

For detailed API reference, see api-reference.md. For example patterns from real implementations, see examples.md.

Common Gotchas

  • model_response_complete vs model_response: Streaming adapters emit model_response_complete , not model_response . Subscribers must handle both.

  • Closure capture in React hooks: When re-keying sessions, use mutable let currentKey = key to avoid stale closures.

  • React useEffect timing: State updates don't take effect in the same render cycle — guard with early returns.

  • Browser-safe imports: glove-core barrel exports include native deps (better-sqlite3). For browser code, import from subpaths: glove-core/core , glove-core/glove , glove-core/display-manager , glove-core/tools/task-tool .

  • Displaymanager casing: The concrete class is Displaymanager (lowercase 'm'), not DisplayManager . Import it as: import { Displaymanager } from "glove-core/display-manager" .

  • createAdapter stream default: stream defaults to true , not false . Pass stream: false explicitly if you want synchronous responses.

  • Tool return values: The do function should return ToolResultData with { status, data, renderData? } . data goes to the AI; renderData stays client-only.

  • Zod .describe(): Always add .describe() to schema fields — the AI reads these descriptions to understand what to provide.

  • displayPropsSchema is optional but recommended: defineTool 's displayPropsSchema is optional, but recommended for tools with display UI — tools without display should use raw ToolConfig instead.

  • renderData is stripped by model adapters: Model adapters explicitly exclude renderData when formatting tool results for the AI, so it's safe for client-only data.

  • SileroVAD must use dynamic import: Never import glove-voice/silero-vad at module level in Next.js/SSR. Use await import("glove-voice/silero-vad") to avoid pulling WASM into the server bundle.

  • Next.js transpilePackages: Add "glove-voice" to transpilePackages in next.config.ts so Next.js processes the ES module.

  • createTTS must be a factory: GloveVoice calls it once per turn to get a fresh TTS adapter. Pass () => new ElevenLabsTTSAdapter(...) , not a single instance.

  • Barge-in protection requires unAbortable : A pushAndWait resolver suppresses voice barge-in at the trigger level (GloveVoice skips interrupt() when resolverStore.size > 0 ). But that alone doesn't protect the tool — if interrupt() is called by other means, only unAbortable: true on the tool guarantees it runs to completion despite the abort signal. Use both together for mutation-critical tools like checkout. Use pushAndForget for voice-first tools.

  • Empty committed transcripts: ElevenLabs Scribe may return empty committed transcripts for short utterances. The adapter auto-falls back to the last partial transcript.

  • TTS idle timeout: ElevenLabs TTS WebSocket disconnects after ~20s idle. GloveVoice handles this by closing TTS after each model_response_complete and opening a fresh session on next text_delta.

  • onnxruntime-web build warnings: Critical dependency: require function is used in a way... warnings from onnxruntime-web are expected and harmless.

  • Audio sample rate: All adapters must agree on 16kHz mono PCM (the default). Don't change unless your provider explicitly requires something different.

  • narrate() auto-mutes mic: voice.narrate() automatically mutes the mic during playback to prevent TTS audio from feeding back into STT/VAD. It restores the previous mute state when done.

  • narrate() needs a started pipeline: Calling narrate() before voice.start() throws. The TTS factory and AudioPlayer must be initialized.

  • Voice auto-silences during compaction: When context compaction is triggered, the voice pipeline ignores all text_delta events between compaction_start and compaction_end . The compaction summary is never narrated.

  • isCompacting for React UI feedback: GloveState.isCompacting is true while compaction is in progress. Use it to show a loading indicator or disable input during compaction.

  • <Render> ships a default input: If you have a custom input form, always pass renderInput={() => null} to suppress the built-in one — otherwise you get duplicate inputs.

  • Tools execute outside React: Tool do() functions run outside the component tree. To access React context (e.g. useWallet() ), use a mutable singleton ref synced from a React component (bridge pattern).

  • SileroVAD not needed for manual mode: When using turnMode: "manual" (push-to-talk), skip the SileroVAD import and its WASM overhead. VAD is only needed for turnMode: "vad" .

  • System prompt: document tools explicitly: Even though tools have descriptions and schemas, listing every tool with its parameters in the system prompt dramatically improves tool selection accuracy.
