openai-api

OpenAI API integration for building AI-powered applications. Use when working with OpenAI's Chat Completions API, Python SDK (openai), TypeScript SDK (openai), tool use/function calling, vision/image inputs, streaming responses, DALL-E image generation, Whisper audio transcription, text-to-speech, embeddings, Assistants API, fine-tuning, or any OpenAI API integration task. Triggers on mentions of OpenAI, GPT-4, GPT-4o, GPT-5, o1, o3, o4, DALL-E, Whisper, Sora, or OpenAI SDK usage.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.


Install skill "openai-api" with this command: npx skills add diskd-ai/openai-api/diskd-ai-openai-api-openai-api

OpenAI API

Build AI applications using OpenAI's APIs with Python or TypeScript SDKs.

Quick Start

Installation

# Python
pip install openai

# TypeScript/Node.js
npm install openai

Client Setup

Python:

from openai import OpenAI

client = OpenAI()  # Uses OPENAI_API_KEY env var
# Or: client = OpenAI(api_key="sk-...")

TypeScript:

import OpenAI from 'openai';

const client = new OpenAI();  // Uses OPENAI_API_KEY env var
// Or: new OpenAI({ apiKey: 'sk-...' })

Chat Completions

Basic chat completion:

Python:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"}
    ]
)
print(response.choices[0].message.content)

TypeScript:

const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: 'Hello!' }
    ]
});
console.log(response.choices[0].message.content);

Streaming

Python:

stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Tell me a story"}],
    stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")

TypeScript:

const stream = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Tell me a story' }],
    stream: true
});
for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content || '');
}

Tool Use / Function Calling

Python:

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"}
            },
            "required": ["location"]
        }
    }
}]

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=messages,
    tools=tools
)

# Check if tool call requested
if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    # Execute function, then send result back
    messages.append(response.choices[0].message)
    messages.append({
        "role": "tool",
        "tool_call_id": tool_call.id,
        "content": '{"temp": 22, "condition": "sunny"}'
    })
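To complete the loop, execute the named function locally and send a second request containing the tool result. The sketch below assumes a stand-in `get_weather` implementation and a hypothetical `run_tool_call` dispatcher; substitute your own functions.

```python
import json

def get_weather(location: str) -> dict:
    # Stand-in implementation; replace with a real weather lookup
    return {"temp": 22, "condition": "sunny", "location": location}

# Dispatch table mapping tool names to local functions
TOOL_FUNCTIONS = {"get_weather": get_weather}

def run_tool_call(tool_call) -> str:
    """Execute the requested tool and return its result as a JSON string."""
    fn = TOOL_FUNCTIONS[tool_call.function.name]
    args = json.loads(tool_call.function.arguments)
    return json.dumps(fn(**args))

# Append the tool result, then ask the model to finish the turn:
# messages.append({"role": "tool", "tool_call_id": tool_call.id,
#                  "content": run_tool_call(tool_call)})
# final = client.chat.completions.create(model="gpt-4o", messages=messages)
# print(final.choices[0].message.content)
```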

TypeScript:

const tools: OpenAI.ChatCompletionTool[] = [{
    type: 'function',
    function: {
        name: 'get_weather',
        description: 'Get current weather for a location',
        parameters: {
            type: 'object',
            properties: {
                location: { type: 'string', description: 'City name' }
            },
            required: ['location']
        }
    }
}];

const response = await client.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: "What's the weather in Paris?" }],
    tools
});

if (response.choices[0].message.tool_calls) {
    const toolCall = response.choices[0].message.tool_calls[0];
    // Execute function, then continue conversation
}

Vision (Image Input)

Python:

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What's in this image?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/image.jpg"}}
        ]
    }]
)

For base64 images: "url": "data:image/jpeg;base64,{base64_string}"
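A small helper (not part of the SDK) can build that data URL from a local file:

```python
import base64

def image_data_url(path: str, mime: str = "image/jpeg") -> str:
    """Encode a local image file as a data URL for the image_url field."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return f"data:{mime};base64,{b64}"

# Usage: {"type": "image_url", "image_url": {"url": image_data_url("photo.jpg")}}
```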

Structured Outputs (JSON Mode)

Python:

from pydantic import BaseModel

class CalendarEvent(BaseModel):
    name: str
    date: str
    participants: list[str]

response = client.beta.chat.completions.parse(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Create a meeting for tomorrow"}],
    response_format=CalendarEvent
)
event = response.choices[0].message.parsed

TypeScript (with Zod):

import { zodResponseFormat } from 'openai/helpers/zod';
import { z } from 'zod';

const CalendarEvent = z.object({
    name: z.string(),
    date: z.string(),
    participants: z.array(z.string())
});

const response = await client.beta.chat.completions.parse({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: 'Create a meeting for tomorrow' }],
    response_format: zodResponseFormat(CalendarEvent, 'event')
});
const event = response.choices[0].message.parsed;

Models

Chat/Completion Models

Model | Best For
gpt-5.2 | Latest flagship, best quality
gpt-5.2-pro | Premium tier for complex tasks
gpt-5 | Previous flagship, excellent quality
gpt-5-mini | Cost-effective GPT-5
gpt-5-nano | Lightweight GPT-5
gpt-4.1 | Strong general purpose
gpt-4.1-mini | Cost-effective GPT-4.1
gpt-4.1-nano | Lightweight GPT-4.1
gpt-4o | Fast, vision support
gpt-4o-mini | Cost-effective, simpler tasks

Reasoning Models

Model | Best For
o4-mini | Latest reasoning, efficient
o3 | Strong reasoning
o3-mini | Reasoning with lower cost
o1 | Complex reasoning, math, code
o1-pro | Premium reasoning tier

Specialized Models

Model | Purpose
gpt-4o-realtime-preview | Real-time voice conversations
gpt-4o-audio-preview | Audio input/output
gpt-4o-search-preview | Web search integration
gpt-image-1 / gpt-image-1.5 | Image generation
sora-2 / sora-2-pro | Video generation
dall-e-3 | Image generation
whisper-1 | Audio transcription
tts-1 / tts-1-hd | Text-to-speech
text-embedding-3-small/large | Text embeddings
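Embeddings are listed above but not demonstrated elsewhere in this guide. A minimal sketch: request vectors with `client.embeddings.create`, then compare them with cosine similarity (the helper below is our own, not part of the SDK; the API call is shown commented since it needs a live key).

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# resp = client.embeddings.create(
#     model="text-embedding-3-small",
#     input=["first text", "second text"],
# )
# vectors = [d.embedding for d in resp.data]
# print(cosine_similarity(vectors[0], vectors[1]))
```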

Feature References

Error Handling

Python:

from openai import APIError, RateLimitError, APIConnectionError

try:
    response = client.chat.completions.create(...)
except RateLimitError:
    # Implement backoff/retry
    pass
except APIConnectionError:
    # Network issue
    pass
except APIError as e:
    print(f"API error: {e.status_code} - {e.message}")
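One way to fill in the "backoff/retry" comment above is a small generic helper like the hypothetical `with_backoff` below; in practice you would pass `openai.RateLimitError` as `retry_on`.

```python
import random
import time

def with_backoff(call, retry_on=Exception, max_retries=5, base_delay=1.0):
    """Retry `call` on `retry_on`, sleeping base_delay * 2**attempt plus jitter."""
    for attempt in range(max_retries):
        try:
            return call()
        except retry_on:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the error
            time.sleep(base_delay * 2 ** attempt + random.random() * base_delay)

# Usage (assumes an existing client and messages list):
# result = with_backoff(
#     lambda: client.chat.completions.create(model="gpt-4o", messages=messages),
#     retry_on=RateLimitError,
# )
```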

TypeScript:

import OpenAI from 'openai';

try {
    const response = await client.chat.completions.create({...});
} catch (error) {
    if (error instanceof OpenAI.RateLimitError) {
        // Implement backoff/retry
    } else if (error instanceof OpenAI.APIConnectionError) {
        // Network issue
    } else if (error instanceof OpenAI.APIError) {
        console.error(`API error: ${error.status} - ${error.message}`);
    }
}

Common Parameters

Parameter | Description
temperature | 0-2; lower = more deterministic, higher = more creative
max_tokens | Maximum response length in tokens
top_p | Nucleus sampling; alternative to temperature
stop | Stop sequences that end generation
n | Number of completions to generate
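The parameters above can be combined in a single request; assembling them as a dict makes them easy to reuse across calls (values here are illustrative, not recommendations):

```python
# Request parameters assembled once so they can be reused across calls
params = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "List three colors"}],
    "temperature": 0.2,   # low temperature for near-deterministic output
    "max_tokens": 100,    # cap response length
    "stop": ["\n\n"],     # end generation at a blank line
    "n": 1,               # single completion
}
# response = client.chat.completions.create(**params)
```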

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals. All entries are repository-sourced, flagged "Needs Review", and provide no upstream summary.

- redmine-cli (Coding)
- code-review (Coding)
- assemblyai-cli (Coding)
- openai-api (Coding)