vercel-ai-sdk-best-practices

Vercel AI SDK Best Practices Skill

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install the skill:

Install skill "vercel-ai-sdk-best-practices" with this command: npx skills add oimiragieo/agent-studio/oimiragieo-agent-studio-vercel-ai-sdk-best-practices

Vercel AI SDK Best Practices Skill

  • Use streamText for streaming text responses from AI models.

  • Use streamObject for streaming structured JSON responses.

  • Handle completion logic in the onFinish callback and stream errors in the onError callback.

  • Use onChunk for real-time UI updates during streaming.

  • Prefer server-side streaming for better performance and security.

  • Use smoothStream for smoother streaming experiences.

  • Implement proper loading states for AI responses.

  • Use useChat for client-side chat interfaces when needed.

  • Use useCompletion for client-side text completion interfaces.

  • Handle rate limiting and quota management appropriately.

  • Implement proper authentication and authorization for AI endpoints.

  • Use environment variables for API keys and sensitive configuration.

  • Cache AI responses when appropriate to reduce costs.

  • Implement proper logging for debugging and monitoring.
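Several of the bullets above (streaming, onChunk, onFinish, smoothStream, server-side execution, logging for monitoring) come together in a single route handler. The following is a minimal sketch assuming AI SDK 4.x with the @ai-sdk/openai provider; the route path and model id are illustrative, not prescribed by the skill.

```typescript
// app/api/chat/route.ts — server-side streaming keeps the provider key
// out of the browser and starts rendering before the model finishes.
import { streamText, smoothStream } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai('gpt-4o-mini'),           // example model id
    messages,
    maxTokens: 1024,                        // cap output to control cost
    experimental_transform: smoothStream(), // smoother chunk cadence in the UI
    onChunk({ chunk }) {
      // real-time hook: forward deltas to a progress indicator or metrics sink
    },
    onFinish({ text, usage, finishReason }) {
      // completion hook: log usage for monitoring, persist the final text
      console.log({ usage, finishReason });
    },
  });

  return result.toDataStreamResponse();
}
```

`toDataStreamResponse()` returns a standard streaming `Response`, so the handler works unchanged in any runtime that supports web streams.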

Iron Laws

  • ALWAYS use streaming responses with streamText or streamObject for AI outputs rather than blocking calls

  • NEVER expose API keys or model provider secrets in client-side code — use server-only route handlers

  • ALWAYS implement error boundaries and loading states for streaming AI responses in React components

  • NEVER call AI SDK functions directly from Client Components — use Server Actions or API routes

  • ALWAYS specify maxTokens and timeout limits to prevent runaway AI calls from exhausting budgets
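The last three laws can be enforced directly in call options. A hedged sketch, assuming AI SDK 4.x: the Server Action keeps the provider key server-side, and `AbortSignal.timeout` (standard in Node 17.3+) bounds wall-clock time. `generateText` fits here because the action returns one short string; user-facing chat should still stream per the first law.

```typescript
'use server';
// app/actions.ts — "use server" means this code never ships to the client,
// so OPENAI_API_KEY (read from env by the provider) stays server-only.
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function summarize(input: string): Promise<string> {
  const { text } = await generateText({
    model: openai('gpt-4o-mini'),             // example model id
    prompt: `Summarize in two sentences:\n\n${input}`,
    maxTokens: 256,                           // output cap: bounded spend
    abortSignal: AbortSignal.timeout(30_000), // abort after 30 s wall clock
  });
  return text;
}
```

If the signal fires, the call rejects with an abort error, so callers should catch it and surface a retry state rather than hanging.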

Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
| --- | --- | --- |
| Blocking generateText in UI routes | Hangs the request; poor UX for long responses | Use streamText with a streaming response |
| API keys in client-side code | Secret exposure; security vulnerability | Move AI calls to Server Actions or API routes |
| No error boundary for streaming | Uncaught errors break the entire component tree | Wrap streaming components in error boundaries |
| Calling AI SDK in Client Components | Exposes provider keys; breaks SSR | Use Server Actions ("use server") or route handlers |
| No token or timeout limits | Runaway calls exhaust credits and stall users | Always set maxTokens and a request timeout |
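On the client side, these anti-patterns are avoided by keeping the model call behind an API route and rendering its stream with useChat. A sketch assuming @ai-sdk/react (AI SDK 4.x); the inline error message is a stand-in for a full React error boundary, which should still wrap this component higher in the tree.

```typescript
'use client';
// components/chat.tsx — no provider key here: useChat only talks to the
// /api/chat route handler, which does the actual model call server-side.
import { useChat } from '@ai-sdk/react';

export function Chat() {
  const { messages, input, handleInputChange, handleSubmit, status, error } =
    useChat({ api: '/api/chat' });

  // Surface stream failures instead of letting them break the tree.
  if (error) return <p role="alert">Something went wrong. Please retry.</p>;

  return (
    <form onSubmit={handleSubmit}>
      {messages.map((m) => (
        <p key={m.id}>
          {m.role}: {m.content}
        </p>
      ))}
      {/* loading state while tokens are still arriving */}
      {status === 'streaming' && <p>Thinking…</p>}
      <input value={input} onChange={handleInputChange} />
    </form>
  );
}
```

`status` cycles through submitted, streaming, and ready, which maps directly onto the loading-state bullet in the practices list above.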

Memory Protocol (MANDATORY)

Before starting:

cat .claude/context/memory/learnings.md

After completing: Record any new patterns or exceptions discovered.

ASSUME INTERRUPTION: Your context may reset. If it's not in memory, it didn't happen.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

All four related skills are in the Automation category; none has a summary from the upstream source, and each carries a "Needs Review" trust label.

  • filesystem

  • slack-notifications

  • chrome-browser

  • text-to-sql