litellm

Call 100+ LLM providers through LiteLLM's unified API. Use when you need to call a different model than your primary (e.g., use GPT-4 for code review while running on Claude), compare outputs from multiple models, route to cheaper models for simple tasks, or access models your runtime doesn't natively support.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Installation

Install the skill with:

npx skills add ishaan-jaff/litellm

LiteLLM - Multi-Model LLM Calls

Use LiteLLM when you need to call LLMs beyond your primary model.

When to Use

  • Model comparison: Get outputs from multiple models and compare
  • Specialized routing: Use code-optimized models for code, writing models for prose
  • Cost optimization: Route simple queries to cheaper models
  • Fallback access: Reach models your runtime doesn't natively support

Quick Start

import litellm

# Call any model with a unified, OpenAI-compatible API
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}]
)
print(response.choices[0].message.content)
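
For long responses, pass stream=True to receive tokens as they arrive; LiteLLM mirrors the OpenAI streaming shape, yielding chunks with a delta field:

import litellm

# Stream tokens incrementally instead of waiting for the full response
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}],
    stream=True,
)
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)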

Common Patterns

Compare Multiple Models

import litellm

prompt = [{"role": "user", "content": "What's the best approach to X?"}]

models = ["gpt-4o", "claude-sonnet-4-20250514", "gemini/gemini-1.5-pro"]
for model in models:
    resp = litellm.completion(model=model, messages=prompt)
    print(f"{model}: {resp.choices[0].message.content[:200]}...")
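
The loop above calls each model serially. If latency matters, LiteLLM also exposes an async variant, litellm.acompletion, so the calls can run concurrently; a minimal sketch:

import asyncio
import litellm

prompt = [{"role": "user", "content": "What's the best approach to X?"}]
models = ["gpt-4o", "claude-sonnet-4-20250514", "gemini/gemini-1.5-pro"]

async def compare():
    # Fire all requests at once and wait for every result
    tasks = [litellm.acompletion(model=m, messages=prompt) for m in models]
    responses = await asyncio.gather(*tasks)
    for model, resp in zip(models, responses):
        print(f"{model}: {resp.choices[0].message.content[:200]}...")

asyncio.run(compare())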

Route by Task Type

import litellm

def smart_call(task_type: str, prompt: str) -> str:
    model_map = {
        "code": "gpt-4o",           # Strong at code
        "writing": "claude-sonnet-4-20250514",  # Strong at prose
        "simple": "gpt-4o-mini",    # Cheap for simple tasks
        "reasoning": "o1-preview",  # Deep reasoning
    }
    model = model_map.get(task_type, "gpt-4o")
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
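
For example:

# Cheap model for a trivial lookup, stronger model for code review
print(smart_call("simple", "What port does HTTPS use?"))
print(smart_call("code", "Review this function for bugs: def add(a, b): return a - b"))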

Use LiteLLM Proxy (Recommended)

If a LiteLLM proxy is available, point to it for caching, rate limiting, and observability:

import litellm

litellm.api_base = "https://your-litellm-proxy.com"
litellm.api_key = "sk-your-key"

response = litellm.completion(
    model="gpt-4o",  # Proxy routes to configured provider
    messages=[{"role": "user", "content": "Hello"}]
)
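
Whether or not a proxy is in the path, it's worth guarding multi-model calls against provider errors. A minimal client-side failover sketch (the helper name and model order here are arbitrary choices, not LiteLLM defaults):

import litellm

def complete_with_failover(prompt: str, models=("gpt-4o", "claude-sonnet-4-20250514")) -> str:
    last_error = None
    for model in models:
        try:
            resp = litellm.completion(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return resp.choices[0].message.content
        except Exception as e:  # LiteLLM maps provider errors to OpenAI-style exceptions
            last_error = e
    raise RuntimeError(f"All models failed: {last_error}")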

Environment Setup

Ensure litellm is installed and API keys are set:

pip install litellm

# Set provider keys (or configure in proxy)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."
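
A quick way to verify the setup is a one-off smoke test; a cheap model keeps the cost negligible (gpt-4o-mini here is an arbitrary pick):

import os
import litellm

# Fail fast if the expected key is missing before making a network call
assert os.environ.get("OPENAI_API_KEY"), "OPENAI_API_KEY is not set"

resp = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=5,
)
print(resp.choices[0].message.content)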

Model Reference

Common model identifiers:

  • OpenAI: gpt-4o, gpt-4o-mini, o1-preview, o1-mini
  • Anthropic: claude-sonnet-4-20250514, claude-opus-4-20250514
  • Google: gemini/gemini-1.5-pro, gemini/gemini-1.5-flash
  • Mistral: mistral/mistral-large-latest

Full list: https://docs.litellm.ai/docs/providers
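
LiteLLM also ships a static pricing table, litellm.model_cost, and a completion_cost helper, which are handy when deciding which tier to route to. A sketch, assuming the installed version exposes both (the field names reflect the current table and may change between releases):

import litellm

# Inspect per-token pricing for a few of the identifiers above
for model in ["gpt-4o", "gpt-4o-mini", "claude-sonnet-4-20250514"]:
    info = litellm.model_cost.get(model, {})
    print(model, info.get("input_cost_per_token"), info.get("output_cost_per_token"))

# Or compute the actual cost of a completed call
resp = litellm.completion(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello"}],
)
print(f"cost: ${litellm.completion_cost(completion_response=resp):.6f}")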
