
Voice AI Engine Development

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "voice-ai-engine-development" with this command: npx skills add dokhacgiakhoa/antigravity-ide/dokhacgiakhoa-antigravity-ide-voice-ai-engine-development


Goal: Build low-latency, conversational Voice AI agents capable of full-duplex communication.

  1. The Voice Pipeline (Latency is King)

The total voice-to-ear loop latency should be under 1000 ms (ideally under 500 ms).

  • Transport: WebRTC (preferred for browsers) or WebSocket (server-to-server).

  • VAD (Voice Activity Detection): Detect when the user starts/stops speaking.
    • Tools: Silero VAD, WebRTC VAD.

  • STT (Speech-to-Text): Transcribe audio to text.
    • Tools: Deepgram (fastest), Whisper (high accuracy but slower), AssemblyAI.

  • LLM (Brain): Process the text and generate a response.
    • Tools: Groq (Llama 3), GPT-4o, Claude 3.5 Sonnet.

  • TTS (Text-to-Speech): Convert the response to audio.
    • Tools: ElevenLabs (quality), Cartesia (speed), OpenAI TTS.
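Before wiring anything up, it helps to write the target down as a per-stage latency budget. A minimal sketch in Python (all numbers are illustrative assumptions for planning, not benchmarks):

```python
# Illustrative per-stage latency budget for one conversational turn.
# Every number here is an assumption, not a measurement.
BUDGET_MS: dict[str, int] = {
    "vad_endpoint": 300,     # silence timeout before declaring end of turn
    "stt_final": 150,        # streaming STT finalization after the endpoint
    "llm_ttft": 250,         # LLM time to first token
    "tts_first_chunk": 150,  # TTS time to first audio chunk
    "network": 100,          # transport overhead (WebRTC/WebSocket)
}

def total_latency_ms(budget: dict[str, int]) -> int:
    """Voice-to-ear latency: user stops speaking -> first AI audio."""
    return sum(budget.values())

if __name__ == "__main__":
    total = total_latency_ms(BUDGET_MS)
    print(f"voice-to-ear: {total} ms")  # 950 ms, inside the 1000 ms target
```

If the sum comes out over budget, the usual levers are a shorter VAD timeout, a faster STT/LLM provider, or streaming TTS playback of the first chunk.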

  2. Architecture Patterns
  • Streaming Pipeline: DO NOT wait for the full transcription or the full generation. Stream everything.
    • User audio stream -> VAD -> STT stream -> LLM stream -> TTS stream -> audio output.

  • Interruption Handling (Barge-in):
    • If VAD detects user speech while the AI is talking, immediately CUT text generation and audio playback, and clear all buffers.
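The barge-in behavior can be sketched with asyncio task cancellation. The chunk contents and timing below are stand-ins; in a real pipeline the same cancel-and-flush applies to the STT, LLM, and TTS streams:

```python
import asyncio

async def speak(chunks: list[str], playback: list[str]) -> None:
    """Stream TTS chunks into the playback buffer; cancellable mid-stream."""
    for chunk in chunks:
        await asyncio.sleep(0)   # yield control so a barge-in can interrupt
        playback.append(chunk)

async def converse_with_barge_in() -> list[str]:
    playback: list[str] = []
    # Hypothetical AI response, split into streamed audio chunks.
    speaking = asyncio.create_task(speak(["Sure,", " I", " can", " help"], playback))
    await asyncio.sleep(0)       # let playback begin
    # VAD fires: the user started talking while the AI is speaking.
    speaking.cancel()            # immediately cut generation and playback
    try:
        await speaking
    except asyncio.CancelledError:
        pass
    playback.clear()             # flush any audio already queued
    return playback

if __name__ == "__main__":
    print(asyncio.run(converse_with_barge_in()))  # []
```

The important part is the order: cancel first, then clear buffers, so no stale chunk slips into the output between the two steps.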

  3. Implementation Stack
  • Backend: Python (FastAPI) or Node.js. Python's ecosystem is stronger for audio processing (numpy/scipy).

  • Frameworks:
    • Pipecat: Open-source framework for building voice agents.
    • LiveKit: WebRTC infrastructure for real-time audio/video.
    • Twilio: For telephony integration.

  4. Optimization Techniques
  • Optimistic VAD: Tune VAD to trigger quickly when speech starts, but use a careful silence timeout (usually 500-800 ms) to detect the end of a turn.

  • Prompt Engineering: Instruct the LLM to be concise and conversational.
    • System prompt: "You are a helpful voice assistant. Keep responses short (1-2 sentences). Do not use markdown or emojis."

  • Audio Formats: Use Opus or PCM (16 kHz/24 kHz/48 kHz) for transmission. Avoid MP3 transcoding latency.
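The silence-timeout logic above amounts to a small state machine over per-frame VAD flags. A minimal sketch (frame size and timeout are illustrative assumptions):

```python
# Minimal end-of-turn detector: speech frames reset the silence timer;
# once trailing silence exceeds the timeout, the turn is considered over.
FRAME_MS = 20              # typical VAD frame duration (assumption)
SILENCE_TIMEOUT_MS = 600   # within the 500-800 ms range discussed above

def end_of_turn(frames: list[bool], timeout_ms: int = SILENCE_TIMEOUT_MS) -> bool:
    """frames: per-frame VAD flags (True = speech detected).

    Returns True once accumulated trailing silence reaches the timeout.
    """
    silence_ms = 0
    for is_speech in frames:
        silence_ms = 0 if is_speech else silence_ms + FRAME_MS
        if silence_ms >= timeout_ms:
            return True
    return False

# 10 speech frames then 30 silent frames = 600 ms of silence -> turn over.
print(end_of_turn([True] * 10 + [False] * 30))                # True
# A 200 ms mid-sentence pause followed by speech does not end the turn.
print(end_of_turn([True] * 10 + [False] * 10 + [True] * 5))   # False
```

Shortening the timeout makes the agent feel snappier but increases the risk of cutting off a user who is merely pausing to think.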

  5. Debugging & Metrics
  • WER (Word Error Rate): For STT accuracy.

  • TTFT (Time to First Token): LLM speed.

  • TTA (Time to Audio): The critical metric. Time from user silence to first AI sound.
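The timing metrics can be captured per turn from a monotonic clock. A sketch, where the event names are assumptions to be wired to your pipeline's actual callbacks:

```python
import time

class TurnMetrics:
    """Collect per-turn timestamps from a monotonic clock.

    Event names ("user_silence", "llm_request", ...) are illustrative;
    call mark() from the corresponding pipeline callbacks.
    """

    def __init__(self) -> None:
        self.t: dict[str, float] = {}

    def mark(self, event: str) -> None:
        self.t[event] = time.monotonic()

    def ttft_ms(self) -> float:
        """LLM time to first token."""
        return (self.t["llm_first_token"] - self.t["llm_request"]) * 1000

    def tta_ms(self) -> float:
        """Time to audio: user silence -> first AI sound (the key metric)."""
        return (self.t["first_audio_out"] - self.t["user_silence"]) * 1000

# Demo with injected timestamps (seconds on a monotonic clock):
m = TurnMetrics()
m.t = {"user_silence": 0.00, "llm_request": 0.10,
       "llm_first_token": 0.35, "first_audio_out": 0.50}
print(f"TTFT: {m.ttft_ms():.0f} ms, TTA: {m.tta_ms():.0f} ms")
```

Logging these per turn (rather than averaging across a session) makes tail-latency spikes, the ones users actually notice, easy to spot.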

Common Pitfalls:

  • Echo cancellation issues (the user hears themselves). Use WebRTC's built-in AEC.

  • STT hallucination (e.g. Whisper transcribing silence as speech).

  • Race conditions during interruptions.
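One common way to defuse the interruption race: tag every in-flight result with a turn id and drop anything stale before it reaches playback. A pattern sketch (class and method names are illustrative):

```python
# After a barge-in, late-arriving STT/LLM/TTS results from the cancelled
# turn can still land in queues. Tagging work with a turn id lets each
# consumer drop stale output instead of playing it over the user.

class TurnGuard:
    def __init__(self) -> None:
        self.current_turn = 0

    def interrupt(self) -> None:
        """Barge-in: advance the turn so all in-flight work becomes stale."""
        self.current_turn += 1

    def is_stale(self, turn: int) -> bool:
        return turn != self.current_turn

guard = TurnGuard()
turn = guard.current_turn   # a TTS chunk is produced for this turn
guard.interrupt()           # the user barges in mid-playback
# The late-arriving chunk is dropped instead of being played.
print(guard.is_stale(turn))  # True
```

In a multithreaded pipeline the counter would need a lock or atomic; in a single asyncio event loop, as sketched here, a plain integer suffices.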

