
Install skill "streaming" with this command: npx skills add assistant-ui/skills/assistant-ui-skills-streaming

assistant-ui Streaming

Always consult assistant-ui.com/llms.txt for the latest API reference.

The assistant-stream package handles streaming from AI backends.

References

  • ./references/data-stream.md -- AI SDK data stream format

  • ./references/assistant-transport.md -- Native assistant-ui format

  • ./references/encoders.md -- Encoders and decoders

When to Use

Using Vercel AI SDK?
├─ Yes → toUIMessageStreamResponse() (no assistant-stream needed)
└─ No  → assistant-stream for custom backends
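For the "Yes" branch, the response helper lives on the AI SDK's streamText result. A minimal route sketch, assuming the ai v5 package and the @ai-sdk/openai provider (the model id here is illustrative):

```typescript
// Sketch of the AI SDK branch: the SDK streams UI messages directly,
// so assistant-stream is not involved. APIs assumed from ai v5.
import { streamText, convertToModelMessages } from "ai";
import { openai } from "@ai-sdk/openai";

export async function POST(req: Request) {
  const { messages } = await req.json();
  const result = streamText({
    model: openai("gpt-4o"), // illustrative model id
    messages: convertToModelMessages(messages),
  });
  return result.toUIMessageStreamResponse();
}
```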

Installation

npm install assistant-stream

Custom Streaming Response

import { createAssistantStreamResponse } from "assistant-stream";

export async function POST(req: Request) {
  return createAssistantStreamResponse(async (stream) => {
    stream.appendText("Hello ");
    stream.appendText("world!");

    // Tool call example
    const tool = stream.addToolCallPart({
      toolCallId: "1",
      toolName: "get_weather",
    });
    tool.argsText.append('{"city":"NYC"}');
    tool.argsText.close();
    tool.setResponse({ result: { temperature: 22 } });

    stream.close();
  });
}

With useLocalRuntime

useLocalRuntime expects ChatModelRunResult chunks. Yield content parts for streaming:

import { useLocalRuntime } from "@assistant-ui/react";

const runtime = useLocalRuntime({
  model: {
    async *run({ messages, abortSignal }) {
      const response = await fetch("/api/chat", {
        method: "POST",
        body: JSON.stringify({ messages }),
        signal: abortSignal,
      });

      const reader = response.body?.getReader();
      const decoder = new TextDecoder();
      let buffer = "";

      while (reader) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += decoder.decode(value, { stream: true });
        const parts = buffer.split("\n");
        buffer = parts.pop() ?? "";

        for (const chunk of parts.filter(Boolean)) {
          yield { content: [{ type: "text", text: chunk }] };
        }
      }
    },
  },
});
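The newline-buffering logic in that loop can be exercised on its own. A minimal sketch (makeLineSplitter is a hypothetical helper, not part of assistant-stream) that shows how partial lines are held back until a newline arrives, regardless of where chunk boundaries fall:

```typescript
// Standalone sketch of the split/pop buffering used above, independent of fetch.
// Feed arbitrary string chunks; complete lines are emitted, partials are buffered.
function makeLineSplitter() {
  let buffer = "";
  return (chunk: string): string[] => {
    buffer += chunk;
    const parts = buffer.split("\n");
    buffer = parts.pop() ?? ""; // keep the trailing partial line
    return parts.filter(Boolean);
  };
}

const push = makeLineSplitter();
console.log(push("hel"));     // [] — no newline yet, held in buffer
console.log(push("lo\nwor")); // ["hello"]
console.log(push("ld\n"));    // ["world"]
```

This is why the runtime code calls `parts.pop()` before yielding: a chunk boundary mid-line must not produce a truncated message part.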

Debugging Streams

import { AssistantStream, DataStreamDecoder } from "assistant-stream";

const stream = AssistantStream.fromResponse(response, new DataStreamDecoder()); for await (const event of stream) { console.log("Event:", JSON.stringify(event, null, 2)); }

Stream Event Types

  • part-start with part.type = "text" | "reasoning" | "tool-call" | "source" | "file"

  • text-delta with streamed text

  • result with tool results

  • step-start, step-finish, message-finish

  • error strings
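When debugging, it can help to dispatch on these event types explicitly. A hedged sketch, where the event field names are assumptions based on the list above rather than the library's exact type definitions:

```typescript
// Hypothetical dispatcher over decoded stream events.
// Field names below are illustrative; check the library's actual types.
type StreamEvent =
  | { type: "part-start"; part: { type: string } }
  | { type: "text-delta"; textDelta: string }
  | { type: "result"; result: unknown }
  | { type: "error"; error: string };

function describe(ev: StreamEvent): string {
  switch (ev.type) {
    case "part-start":
      return `new ${ev.part.type} part`;
    case "text-delta":
      return `text: ${ev.textDelta}`;
    case "result":
      return "tool result";
    case "error":
      return `error: ${ev.error}`;
  }
}

console.log(describe({ type: "text-delta", textDelta: "Hello" })); // text: Hello
```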

Common Gotchas

Stream not updating UI

  • Check Content-Type is text/event-stream

  • Check for CORS errors

Tool calls not rendering

  • addToolCallPart needs both toolCallId and toolName

  • Register tool UI with makeAssistantToolUI

Partial text not showing

  • Use text-delta events for streaming
