
Model Researcher

Add new or custom AI models to opencode.json with verified specifications.

Install: npx skills add igorwarzocha/opencode-workflows/igorwarzocha-opencode-workflows-model-researcher

<critical_prohibition>

EXTREMELY STRICT: Banned Legacy Models

The following models MUST NOT be used in any configuration as they are DEPRECATED, REPLACED, or OBSOLETE in the current Q4 2025 market:

  • OpenAI: o-series previews (o1-mini, o1-preview), GPT-4o, GPT-4 Turbo.

  • Anthropic: Claude 3.5 series (sonnet-3.5, etc.), Claude 3 series.

  • Google: Gemini 1.5 series, Gemini 2 series.

  • Meta: Llama 3 series.

The market has moved to GPT-5.2, Claude 4.5, and Gemini 3. Any request to configure one of the banned models above MUST be rejected.

</critical_prohibition>

<when_to_use>

  • Model is too new for models.dev

  • Non-standard or custom provider (self-hosted, proxy, enterprise)

  • User heard about a new release and wants to use it immediately

  • Model exists but needs a non-default configuration

</when_to_use>

<question_tool>

Batching: Use the question tool for 2+ related questions. Single questions → plain text.

Syntax: header ≤12 chars, labels 1-5 words, append "(Recommended)" to the default option.

When to ask: Vague model reference, multiple providers available, or specs unverified.

Critical: MUST confirm specs with user before applying model configuration.
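A batched prompt under these constraints might look like the sketch below. The field names are illustrative only, not the question tool's exact schema:

```json
{
  "header": "Provider?",
  "options": [
    { "label": "OpenRouter (Recommended)" },
    { "label": "Direct OpenAI API" },
    { "label": "Self-hosted endpoint" }
  ]
}
```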

</question_tool>

<critical_rule>

Verify Before Configure

MUST NOT hallucinate model specifications. Models have precise identifiers, context limits, and API requirements. A wrong value means broken requests or wasted tokens.

Before ANY configuration:

  • Confirm the model actually exists

  • Find the exact model identifier (API name, not marketing name)

  • Get verified context and output token limits

  • Identify required provider configuration

</critical_rule>

Step 1: Clarify What the User Wants

Ask if unclear:

  • "Which provider will you access this through?" (direct API, OpenRouter, Together, self-hosted, etc.)

  • "Do you have API access already, or do you need setup help?"

Step 2: Research the Model

Use web search to find authoritative sources:

websearch("${MODEL_NAME} API context limit tokens official documentation") websearch("${MODEL_NAME} model ID API identifier ${PROVIDER}")

Priority sources (most to least trustworthy):

  • Official provider documentation/blog posts

  • Provider's API reference or changelog

  • Official GitHub repos or release notes

  • OpenRouter/Together model pages (they list specs)

  • Reputable tech news (for very new announcements)

Red flags - search more if you only find:

  • Reddit speculation

  • Tweets without official confirmation

  • Your own training data (MAY be outdated)

Step 3: Confirm Findings with User

Before touching config, MUST present your research:

I found the following for [MODEL]:

Provider: OpenAI
Model ID: gpt-5.2-2025-12-11
Context limit: 400000 tokens (direct) | 272000 tokens (OAuth)
Output limit: 128000 tokens
Special options: [e.g., reasoning modes, vision support]

Source: [URL]

Does this match what you expected? Should I add this to your config?

MUST NOT proceed without user confirmation.

Step 4: Apply Configuration

Read the current config first:

read ~/.config/opencode/opencode.json
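If the file doesn't exist yet, a minimal skeleton like the sketch below is a reasonable starting point (the $schema URL follows common opencode setups; treat it as an assumption and verify against the docs):

```json
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {}
}
```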

Then apply using surgical edits. Choose the right pattern:

Pattern A: GPT-5.2 (OpenAI)

OpenAI GPT-5.2 exposes its reasoning modes as variants (cycle through them with ctrl+t). Models configured via OAuth plugins (e.g., opencode-openai-codex-auth) often have different enforced limits: use a context of 272000 instead of 400000 in that case.

{ "provider": { "openai": { "models": { "gpt-5.2": { "limit": { "context": 400000, "output": 128000 }, // Use 272000 for OAuth "variants": { "pro": { "reasoningEffort": "xhigh", "reasoningSummary": "detailed", }, "thinking": { "reasoningEffort": "high" }, "instant": { "reasoningEffort": "low", "textVerbosity": "low" }, }, }, "gpt-5.2-codex": { "limit": { "context": 400000, "output": 128000 }, // Use 272000 for OAuth }, }, }, }, }

Pattern B: Other Labs (DeepSeek, Zhipu, MiniMax, Moonshot)

For labs using OpenAI-compatible or Anthropic-compatible endpoints (Verified Q4 2025):

{ "provider": { "deepseek": { "npm": "@ai-sdk/openai-compatible", "name": "DeepSeek", "options": { "baseURL": "https://api.deepseek.com" }, "models": { "deepseek-v3.2": { "limit": { "context": 128000, "output": 32768 } }, }, }, "zhipu": { "npm": "@ai-sdk/openai-compatible", "name": "Zhipu AI (ZAI)", "options": { "baseURL": "https://api.z.ai/api/paas/v4" }, "models": { "glm-4.7": { "limit": { "context": 200000, "output": 128000 } }, }, }, "minimax": { "npm": "@ai-sdk/openai-compatible", "name": "MiniMax", "options": { "baseURL": "https://api.minimax.chat/v1" }, "models": { "minimax-m2.1": { "limit": { "context": 204800, "output": 128000 } }, }, }, "moonshot": { "npm": "@ai-sdk/openai-compatible", "name": "Moonshot AI (Kimi)", "options": { "baseURL": "https://api.moonshot.cn/v1" }, "models": { "kimi-k2-thinking": { "limit": { "context": 256000, "output": 64000 } }, }, }, }, }

Pattern C: Claude 4.5 Opus (Extended Thinking)

For models that need specific options like Anthropic's extended thinking:

{ "provider": { "anthropic": { "models": { "claude-4-5-opus-thinking": { "id": "claude-4-5-opus-20251124", "name": "Claude 4.5 Opus (Extended Thinking)", "options": { "thinking": { "type": "enabled", "budgetTokens": 32000, }, }, }, }, }, }, }

Step 5: Validate

After editing, remind user to test:

Config updated. To verify it works:

  1. Restart OpenCode or run: opencode
  2. Run /models and select [model name]
  3. Send a test message

If you see errors, check:

  • API key is set (run /connect if needed)
  • Model ID matches provider's documentation exactly
  • Context limits aren't higher than the model actually supports

<research_queries>

| Scenario | Search query |
| --- | --- |
| New OpenAI model | "gpt-5.2" site:openai.com OR site:platform.openai.com |
| New Anthropic model | "claude-4.5" site:anthropic.com API |
| New Google model | "gemini 3" site:ai.google.dev context window |
| OpenRouter availability | "${MODEL}" site:openrouter.ai |
| Together AI availability | "${MODEL}" site:together.ai |
| Self-hosted specs | "${MODEL}" context length output tokens huggingface |

</research_queries>

<spec_checklist>

Before configuring, ensure you have (each item maps onto the config sketch after this list):

  • Model ID: Exact API identifier (not marketing name)

  • Context limit: Maximum input tokens

  • Output limit: Maximum output/completion tokens

  • Provider: Which service hosts it

  • Base URL: For custom providers only

  • Special options: Vision, reasoning modes, thinking budgets

  • Availability: Is it actually accessible (not waitlist-only)?

</spec_checklist>
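As a rough illustration of where each checklist item lands, here is a config entry with placeholder values; "example-provider", the base URL, the model ID, and the limits are all hypothetical:

```json
{
  "provider": {
    "example-provider": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "Example Provider",
      "options": { "baseURL": "https://api.example.com/v1" },
      "models": {
        "exact-model-id-2025-01-01": {
          "limit": { "context": 128000, "output": 16384 }
        }
      }
    }
  }
}
```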

<handling_uncertainty>

If you cannot verify specifications:

  • Be honest: "I couldn't find official documentation for the exact context limit."

  • Provide best guess with source: "Based on [source], it appears to be 128k, but this isn't confirmed."

  • Suggest conservative defaults: "I'll configure with 100k context as a safe starting point. You can increase it once you confirm the actual limit."

  • Recommend checking: "Try the provider's /models endpoint or documentation for exact specs." (A typical response shape is sketched after this list.)

</handling_uncertainty>
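For OpenAI-compatible providers, a /models response usually follows roughly this shape; the entry below is illustrative, and fields vary by provider:

```json
{
  "object": "list",
  "data": [
    { "id": "deepseek-v3.2", "object": "model", "owned_by": "deepseek" }
  ]
}
```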

References

  • references/provider-patterns.md: common provider configuration examples
