grok-api

Call Grok LLM models (grok-3, grok-3-fast, grok-3-mini, etc.) and generate images/videos via an OpenAI-compatible API proxy. Use this skill whenever the user wants to chat with Grok, generate text, create images, or generate videos.

Safety Notice

This listing is imported from skills.sh public index metadata. Review upstream SKILL.md and repository scripts before running.

Install skill "grok-api" with this command: npx skills add ls

Grok API (via grok2api)

OpenAI-compatible Grok API proxy hosted at https://mc.agaii.org/grok. Use this skill whenever the user asks you to call Grok models, generate text with Grok, or interact with the Grok API.

Overview

  • Base URL: https://mc.agaii.org/grok/v1
  • Auth header: Authorization: Bearer <app_key>
  • Protocol: OpenAI-compatible (drop-in replacement for openai SDK)
  • Admin panel: https://mc.agaii.org/grok/admin/login

The API supports chat completions, streaming, and all standard OpenAI-compatible parameters.

Step 0: Confirm Credentials

Before making any call, confirm the app key is available. The key is used as a Bearer token:

# Verify the key works
curl -s https://mc.agaii.org/grok/v1/models \
  -H "Authorization: Bearer <app_key>"

A successful response lists available Grok models.

Step 1: Choose a Model

Use GET /v1/models to list available models, or pick from the well-known ones:

| Model | Description |
|---|---|
| grok-3 | Most capable, best for complex reasoning |
| grok-3-fast | Fast variant of grok-3 |
| grok-3-mini | Lightweight, efficient |
| grok-3-mini-fast | Fastest, lowest latency |
| grok-2-vision-1212 | Vision-capable model |

curl -s https://mc.agaii.org/grok/v1/models \
  -H "Authorization: Bearer <app_key>" | python3 -m json.tool

Step 2: Chat Completions

Non-streaming

curl -s -X POST https://mc.agaii.org/grok/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <app_key>" \
  -d '{
    "model": "grok-3",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Explain quantum entanglement in simple terms."}
    ],
    "temperature": 0.7,
    "max_tokens": 1024
  }'

Response shape:

{
  "id": "chatcmpl-...",
  "object": "chat.completion",
  "created": 1234567890,
  "model": "grok-3",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": "..."
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 0,
    "completion_tokens": 0,
    "total_tokens": 0
  }
}

Streaming

curl -s -X POST https://mc.agaii.org/grok/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <app_key>" \
  -d '{
    "model": "grok-3",
    "messages": [{"role": "user", "content": "Write a haiku about the sea."}],
    "stream": true
  }'

Streaming returns text/event-stream Server-Sent Events. Each chunk is:

data: {"id":"...","object":"chat.completion.chunk","choices":[{"delta":{"content":"..."}}]}

The stream ends with data: [DONE].
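A stream consumer reassembles the message from these chunks. The sketch below parses raw SSE `data:` lines with no network involved; pair it with any HTTP client that yields the response line by line:

```python
import json

def accumulate_sse(lines):
    """Collect assistant text from OpenAI-style chat.completion.chunk SSE lines."""
    parts = []
    for line in lines:
        line = line.strip()
        if not line.startswith("data:"):
            continue  # skip blank keep-alive lines and SSE comments
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0].get("delta", {})
        if delta.get("content"):
            parts.append(delta["content"])
    return "".join(parts)

# Demo on two synthetic chunks:
sample = [
    'data: {"id":"x","object":"chat.completion.chunk","choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"id":"x","object":"chat.completion.chunk","choices":[{"delta":{"content":"lo"}}]}',
    "data: [DONE]",
]
print(accumulate_sse(sample))  # Hello
```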

Step 3: Using the Python Script

A ready-to-use Python script is available at scripts/grok_api.py in this skill directory.

Find it:

# If installed via npx skills add
ls .cursor/skills/GrokAgentSkill/scripts/grok_api.py

Run examples

# Simple chat
python scripts/grok_api.py chat "What is the capital of France?"

# Chat with a specific model
python scripts/grok_api.py chat "Write a poem" --model grok-3-mini

# Streaming chat
python scripts/grok_api.py chat "Tell me a story" --stream

# Multi-turn conversation (JSON file)
python scripts/grok_api.py file scripts/example_messages.json

# List models
python scripts/grok_api.py models

Set the app key via environment variable (recommended):

export GROK_API_KEY=<app_key>
python scripts/grok_api.py chat "Hello"

Or pass inline:

python scripts/grok_api.py chat "Hello" --key <app_key>

Step 4: Using the OpenAI Python SDK

The API is a drop-in replacement. Install: pip install openai

from openai import OpenAI

client = OpenAI(
    base_url="https://mc.agaii.org/grok/v1",
    api_key="<app_key>",
)

response = client.chat.completions.create(
    model="grok-3",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What is 2 + 2?"},
    ],
)
print(response.choices[0].message.content)

Streaming with SDK

stream = client.chat.completions.create(
    model="grok-3",
    messages=[{"role": "user", "content": "Count to 5 slowly."}],
    stream=True,
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)

Async usage

import asyncio
from openai import AsyncOpenAI

client = AsyncOpenAI(
    base_url="https://mc.agaii.org/grok/v1",
    api_key="<app_key>",
)

async def main():
    response = await client.chat.completions.create(
        model="grok-3-fast",
        messages=[{"role": "user", "content": "Hello!"}],
    )
    print(response.choices[0].message.content)

asyncio.run(main())

API Reference

POST /v1/chat/completions

| Parameter | Type | Required | Description |
|---|---|---|---|
| model | string | Yes | Model ID (e.g. grok-3) |
| messages | array | Yes | Array of {role, content} objects |
| stream | boolean | No | Enable SSE streaming (default: false) |
| temperature | float | No | Sampling temperature 0–2 (default: 1.0) |
| max_tokens | integer | No | Max tokens to generate |
| top_p | float | No | Nucleus sampling (default: 1.0) |
| frequency_penalty | float | No | -2.0 to 2.0 |
| presence_penalty | float | No | -2.0 to 2.0 |
| stop | string/array | No | Stop sequences |
| n | integer | No | Number of completions |
| user | string | No | End-user identifier |

Message roles

| Role | Description |
|---|---|
| system | Sets assistant behavior/persona |
| user | Human turn |
| assistant | Previous assistant turn (for multi-turn) |

GET /v1/models

Returns list of available models. No body required.

Error codes

| Code | Meaning |
|---|---|
| 401 | Invalid or missing API key |
| 429 | No available tokens / rate limited |
| 500 | Internal server error |
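These codes map naturally onto a retry policy when calling the proxy programmatically. A minimal sketch (the attempt cap is illustrative, not from the upstream docs):

```python
def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    """Retry transient failures (429 token-pool exhaustion, 5xx) up to a cap.
    Never retry 401: an invalid key needs fixing, not another attempt."""
    if attempt >= max_attempts:
        return False
    if status_code == 401:
        return False
    return status_code == 429 or status_code >= 500

print(should_retry(429, 1))  # True
print(should_retry(401, 1))  # False
```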

Admin API

All admin endpoints require Authorization: Bearer <app_key>.

List tokens

curl https://mc.agaii.org/grok/v1/admin/tokens \
  -H "Authorization: Bearer <app_key>"

Add / update tokens

curl -X POST https://mc.agaii.org/grok/v1/admin/tokens \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <app_key>" \
  -d '{"ssoBasic": [{"token": "<sso_token>", "status": "active", "quota": 10, "created_at": 0, "use_count": 0, "fail_count": 0, "tags": [], "note": ""}]}'

Refresh token quota

curl -X POST https://mc.agaii.org/grok/v1/admin/tokens/refresh \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <app_key>" \
  -d '{"token": "<sso_token>"}'

Common Patterns

System prompt + user message

messages = [
    {"role": "system", "content": "You are an expert Python developer. Be concise."},
    {"role": "user", "content": "How do I read a file line by line in Python?"},
]

Multi-turn conversation

messages = [
    {"role": "user", "content": "My name is Alice."},
    {"role": "assistant", "content": "Nice to meet you, Alice!"},
    {"role": "user", "content": "What is my name?"},
]

Low-latency use case

Use grok-3-mini-fast with max_tokens set to limit response length:

client.chat.completions.create(
    model="grok-3-mini-fast",
    messages=[{"role": "user", "content": "Summarize in one sentence: ..."}],
    max_tokens=100,
)

Structured output (JSON mode)

client.chat.completions.create(
    model="grok-3",
    messages=[
        {"role": "system", "content": "Always respond with valid JSON only."},
        {"role": "user", "content": "List 3 fruits with their colors as JSON."},
    ],
    temperature=0,
)
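Even with a JSON-only system prompt, models occasionally wrap the object in a markdown fence, so parse the reply defensively. A small sketch (the fence-stripping heuristic is this sketch's own assumption, not part of the API):

```python
import json
import re

def parse_json_reply(content: str):
    """Strip an optional ```json fence, then json.loads the remainder."""
    m = re.search(r"```(?:json)?\s*(.*?)\s*```", content, re.DOTALL)
    if m:
        content = m.group(1)
    return json.loads(content)

print(parse_json_reply('```json\n{"fruit": "apple", "color": "red"}\n```'))
```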

Image Generation

Use model grok-imagine-1.0 to generate images from a text prompt.

curl -s -X POST https://mc.agaii.org/grok/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <app_key>" \
  -d '{
    "model": "grok-imagine-1.0",
    "messages": [{"role": "user", "content": "A photorealistic red apple on a wooden table"}]
  }'

The response content will contain an <img> tag with a src URL pointing to the generated image.

Parse the image URL from the response:

import re

# response: the decoded JSON body of the chat completion
content = response["choices"][0]["message"]["content"]
match = re.search(r'src="([^"]+)"', content)
image_url = match.group(1) if match else None
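If the reply contains several <img> tags, re.findall collects every URL at once. Shown on a synthetic reply (the example URLs are placeholders):

```python
import re

def extract_image_urls(content: str):
    """Return every src URL found in <img ...> tags in the reply content."""
    return re.findall(r'<img[^>]*\bsrc="([^"]+)"', content)

sample = ('<img src="https://example.org/a.png"> and '
          '<img alt="b" src="https://example.org/b.png">')
print(extract_image_urls(sample))
# ['https://example.org/a.png', 'https://example.org/b.png']
```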

Image editing

Use model grok-imagine-1.0-edit with an existing image URL in the prompt:

-d '{"model": "grok-imagine-1.0-edit", "messages": [{"role": "user", "content": "Make the apple golden. Image: https://..."}]}'

Video Generation

Use model grok-imagine-1.0-video to generate short videos from a text prompt.

Important: The API streams progress updates as SSE chunks (percent complete), then delivers the final video URL embedded in HTML at 100%. You MUST stream the response and parse the final chunk for the video URL.

Step-by-step workflow

1. Submit the request (streaming required):

curl -s -X POST https://mc.agaii.org/grok/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <app_key>" \
  -d '{
    "model": "grok-imagine-1.0-video",
    "stream": true,
    "messages": [{"role": "user", "content": "<your video prompt>"}]
  }' > /tmp/video_response.txt

2. Extract the video URL from the streamed output:

grep -oP 'src=\\"https://[^"\\]+\.mp4\\"' /tmp/video_response.txt | head -1 | grep -oP 'https://[^"\\]+'

Or in Python:

import json
import os
import re
import subprocess

api_key = os.environ["GROK_API_KEY"]  # app key from the environment
prompt = "A glowing figure rotating in the dark"

# Run curl and capture all SSE lines
output = subprocess.check_output([
    "curl", "-s", "-X", "POST",
    "https://mc.agaii.org/grok/v1/chat/completions",
    "-H", "Content-Type: application/json",
    "-H", f"Authorization: Bearer {api_key}",
    "-d", json.dumps({
        "model": "grok-imagine-1.0-video",
        "stream": True,
        "messages": [{"role": "user", "content": prompt}]
    })
], timeout=300).decode()

# Extract video URL
match = re.search(r'src=\\"(https://[^"\\]+\.mp4)\\"', output)
video_url = match.group(1) if match else None
print("Video URL:", video_url)

3. Download the video:

wget -O output.mp4 "<video_url>"
# or
curl -L -o output.mp4 "<video_url>"

Using the bundled script

GROK_API_KEY=<key> python scripts/grok_api.py video "A glowing figure rotating in the dark"

This will print the video URL when generation completes (typically takes 20–60 seconds).

Progress tracking

During generation the API streams lines like:

正在生成视频中,当前进度1%    (generating video, progress 1%)
正在生成视频中,当前进度25%   (generating video, progress 25%)
...
正在生成视频中,当前进度100%  (generating video, progress 100%)
<video id="video" ...><source src="https://.../generated_video.mp4" ...></video>

The final content after 100% contains the <video> HTML with the mp4 URL and a preview image (poster).

Preview image

The poster/thumbnail URL is also embedded:

poster="https://mc.agaii.org/grok/v1/files/image/.../preview_image.jpg"

Extract with:

match = re.search(r'poster=\\"(https://[^"\\]+)\\"', output)
preview_url = match.group(1) if match else None

Tips for good video prompts

  • Describe subject, motion, camera movement, and lighting explicitly
  • Mention tone (cinematic, futuristic, elegant, etc.)
  • Keep prompts under 300 words
  • Dark backgrounds with glowing elements tend to render well
  • Specify duration feel with pacing words ("slowly pulls back", "quick cut", etc.)
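These tips can be folded into a tiny prompt builder. Purely illustrative: the field names are this sketch's own, not API parameters:

```python
def build_video_prompt(subject, motion, camera="static shot",
                       lighting="soft light", tone="cinematic"):
    """Join subject, motion, camera movement, lighting, and tone into one prompt."""
    return f"{tone.capitalize()} video of {subject}, {motion}, {camera}, {lighting}."

print(build_video_prompt(
    "a glowing figure", "slowly rotating",
    camera="camera slowly pulls back",
    lighting="dark background with neon glow",
))
```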

Troubleshooting

| Symptom | Cause | Fix |
|---|---|---|
| 429 "No available tokens" | Token pool empty or all cooling | Add more SSO tokens via admin API |
| 401 Unauthorized | Wrong app key | Check the Authorization: Bearer <key> header |
| Connection refused | Server not running | Check the service on port 15125 |
| Blank content in response | Model returned empty | Retry, or reduce temperature |
| Token refresh returns false | SSO token expired | Remove the token and add a fresh one |

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • text2speech (Automation): no summary provided by upstream source
  • image-to-3d (Automation): no summary provided by upstream source
  • t2i-studio-premium (Automation): no summary provided by upstream source