litellm

Call 100+ LLM providers through LiteLLM's unified API. Use it when you need to call a model other than your primary one (e.g., GPT-4 for code review while running on Claude), compare outputs from multiple models, route simple tasks to cheaper models, or access models your runtime doesn't natively support.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install the skill: npx skills add ishaan-jaff/litellm

LiteLLM - Multi-Model LLM Calls

Use LiteLLM when you need to call LLMs beyond your primary model.

When to Use

  • Model comparison: Get outputs from multiple models and compare
  • Specialized routing: Use code-optimized models for code, writing models for prose
  • Cost optimization: Route simple queries to cheaper models
  • Fallback access: Access models your runtime doesn't support

Quick Start

import litellm

# Call any model with unified API
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}]
)
print(response.choices[0].message.content)
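
The same call streams if you pass stream=True; chunks follow LiteLLM's OpenAI-compatible delta format. A minimal sketch:

import litellm

# Stream tokens as they arrive instead of waiting for the full response
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Explain this code"}],
    stream=True,
)
for chunk in response:
    delta = chunk.choices[0].delta.content
    if delta:  # delta can be None on role/stop chunks
        print(delta, end="", flush=True)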

Common Patterns

Compare Multiple Models

import litellm

prompt = [{"role": "user", "content": "What's the best approach to X?"}]

models = ["gpt-4o", "claude-sonnet-4-20250514", "gemini/gemini-1.5-pro"]
for model in models:
    resp = litellm.completion(model=model, messages=prompt)
    print(f"{model}: {resp.choices[0].message.content[:200]}...")

Route by Task Type

import litellm

def smart_call(task_type: str, prompt: str) -> str:
    model_map = {
        "code": "gpt-4o",           # Strong at code
        "writing": "claude-sonnet-4-20250514",  # Strong at prose
        "simple": "gpt-4o-mini",    # Cheap for simple tasks
        "reasoning": "o1-preview",  # Deep reasoning
    }
    model = model_map.get(task_type, "gpt-4o")
    resp = litellm.completion(
        model=model,
        messages=[{"role": "user", "content": prompt}]
    )
    return resp.choices[0].message.content
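
For example, routing a quick lookup to the cheap model and a code question to the code model:

# Simple lookups go to gpt-4o-mini, code questions to gpt-4o
print(smart_call("simple", "What year was Python released?"))
print(smart_call("code", "Rewrite this recursive function iteratively: ..."))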

Use LiteLLM Proxy (Recommended)

If a LiteLLM proxy is available, point the client at it to get caching, rate limiting, and observability:

import litellm

litellm.api_base = "https://your-litellm-proxy.com"
litellm.api_key = "sk-your-key"

response = litellm.completion(
    model="gpt-4o",  # Proxy routes to configured provider
    messages=[{"role": "user", "content": "Hello"}]
)
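
completion() also accepts api_base and api_key per call, which avoids mutating global state when only some calls should go through the proxy (the URL and key below are placeholders, as above):

import litellm

# Route just this call through the proxy; other calls keep their defaults
response = litellm.completion(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}],
    api_base="https://your-litellm-proxy.com",
    api_key="sk-your-key",
)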

Environment Setup

Ensure litellm is installed and API keys are set:

pip install litellm

# Set provider keys (or configure in proxy)
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-..."
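
The same keys can be set from Python before the first call, e.g. in a notebook (values are placeholders, as above):

import os

# Equivalent to the shell exports; run before calling litellm.completion()
os.environ["OPENAI_API_KEY"] = "sk-..."
os.environ["ANTHROPIC_API_KEY"] = "sk-..."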

Model Reference

Common model identifiers:

  • OpenAI: gpt-4o, gpt-4o-mini, o1-preview, o1-mini
  • Anthropic: claude-sonnet-4-20250514, claude-opus-4-20250514
  • Google: gemini/gemini-1.5-pro, gemini/gemini-1.5-flash
  • Mistral: mistral/mistral-large-latest

Full list: https://docs.litellm.ai/docs/providers
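
For cost-based routing, LiteLLM also bundles a pricing table as litellm.model_cost; the field names below match current releases, but treat them as an assumption to verify against your installed version:

import litellm

# Per-token pricing metadata shipped with the library
info = litellm.model_cost.get("gpt-4o", {})
print(info.get("input_cost_per_token"), info.get("output_cost_per_token"))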

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

Coding

Content Collector

Personal content collection and knowledge-management system: save, organize, search, and remix. Use when: (1) the user shares a link/text/screenshot and asks to save or bookmark it, (2) the user says "collect this" / "save it" / "write it down" / "save this" / "bookmark" / "clip this", (3) the user asks to search previously saved content by keyword/tag, (4) ...

Coding

Github Stars Tracker

Monitor and get notified about GitHub repository star changes. Track star growth and fork changes for specified repos and spot emerging trends. Suited for developers following project activity.

Coding

RabbitMQ client guide for Tencent Cloud TDMQ

RabbitMQ client code guide. Use when the user needs to write, debug, or review RabbitMQ application code. Covers: writing producers or consumers in any language (Java/Go/Python/PHP/.NET); troubleshooting client-side issues such as connection spikes, message loss, broken pipe errors, slow consumption, and missed messages; reviewing spring-boot-starter-amqp, a...
