oktk

LLM Token Optimizer - Reduce AI API costs by 60-90%. Compresses CLI outputs (git, docker, kubectl) before sending to GPT-4/Claude. AI auto-learning included. By Buba Draugelis πŸ‡±πŸ‡Ή

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy this and send it to your AI assistant to install the skill:

Install skill "oktk" with this command: npx skills add satnamra/oktk

oktk - CLI Output Compressor for LLMs

The Problem

When you run commands through an AI assistant, the full output goes into the LLM context:

$ git status
# Returns 60+ lines, ~800 tokens
# Your AI reads ALL of it, you pay for ALL of it

Every token costs money. Verbose outputs waste your context window.

The Solution

oktk sits between your commands and the LLM, compressing outputs intelligently:

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”     β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚ Command  β”‚ ──► β”‚   oktk   β”‚ ──► β”‚   LLM    β”‚
β”‚ (800 tk) β”‚     β”‚ compress β”‚     β”‚ (80 tk)  β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜     β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                      β”‚
                 90% SAVED

When Does It Work?

Automatically, when supported commands are piped through oktk (for example via the shell aliases or in an OpenClaw session):

Command           What oktk does                                 Savings
----------------  ---------------------------------------------  -------
git status        Shows only: branch, ahead/behind, file counts  90%
git log           One line per commit: hash + message + author   85%
git diff          Summary: X files, +Y/-Z lines, file list       80%
npm test          Just: ✅ passed or ❌ failed + count             98%
ls -la            Groups by type, shows sizes, skips details     83%
curl              Status code + key headers + truncated body     97%
grep              Match count + first N matches                  80%
docker ps         Container list: name, image, status            85%
docker logs       Last N lines + error count                     90%
kubectl get pods  Pod status summary with counts                 85%
kubectl logs      Last N lines + error/warning counts            90%
Any command       AI learns patterns automatically (optional)    ~70%
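
As an illustration of the filters in the table, a "docker logs"-style reducer (last N lines plus an error count) can be sketched in a few lines of awk. This is a hypothetical sketch, not oktk's actual implementation; compress_logs is an illustrative name:

```shell
# Hypothetical sketch of a "docker logs"-style filter:
# keep the last N lines and report how many lines mention errors.
compress_logs() {
  n="${1:-5}"
  awk -v n="$n" '
    { lines[NR] = $0; if (tolower($0) ~ /error/) errs++ }
    END {
      start = NR - n + 1; if (start < 1) start = 1
      for (i = start; i <= NR; i++) print lines[i]
      printf "❗ Errors: %d (of %d lines)\n", errs + 0, NR
    }
  '
}

# Usage: docker logs my-container | compress_logs 20
```

The same shape (scan once, emit a compact summary at END) covers most of the rows above.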

Concrete Example

Before oktk (800 tokens sent to LLM):

On branch main
Your branch is ahead of 'origin/main' by 3 commits.
  (use "git push" to publish your local commits)

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   src/components/Button.jsx
        modified:   src/components/Header.jsx
        modified:   src/utils/format.js
        modified:   src/utils/validate.js
        modified:   package.json
        modified:   package-lock.json

Untracked files:
  (use "git add <file>..." to include in what will be committed)
        src/components/Footer.jsx
        src/components/Sidebar.jsx
        tests/Button.test.js

no changes added to commit (use "git add" and/or "git commit -a")

After oktk (80 tokens sent to LLM):

πŸ“ main
↑ Ahead 3 commits
✏️  Modified: 6
❓ Untracked: 3

Same information. 90% fewer tokens. 90% lower cost for this command.
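
The transformation above can be approximated with a short awk filter over git's machine-readable status format (git status --porcelain=v2 --branch). This is a minimal sketch, not oktk's actual filter; compress_git_status is a hypothetical name:

```shell
# Hypothetical sketch: compress machine-readable git status
# (git status --porcelain=v2 --branch) into a few summary lines.
compress_git_status() {
  awk '
    /^# branch\.head/ { branch = $3 }
    /^# branch\.ab/   { ahead = substr($3, 2); behind = substr($4, 2) }
    /^[12] /          { modified++ }     # changed / renamed entries
    /^\? /            { untracked++ }    # untracked entries
    END {
      printf "📍 %s\n", branch
      if (ahead > 0)  printf "↑ Ahead %d commits\n", ahead
      if (behind > 0) printf "↓ Behind %d commits\n", behind
      printf "✏️  Modified: %d\n", modified + 0
      printf "❓ Untracked: %d\n", untracked + 0
    }
  '
}

# Usage: git status --porcelain=v2 --branch | compress_git_status
```

Using the porcelain format rather than the human-readable output keeps the parsing stable across git versions and locales.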

How It Works Technically

  1. Intercepts command output after execution
  2. Detects command type (git? npm? ls?)
  3. Applies specialized filter for that command
  4. Extracts only essential information
  5. Caches results (same command = instant, no reprocessing)
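
Step 2, the command-type detection, amounts to a dispatch table. A minimal sketch, assuming hypothetical filter names like compress_git_status:

```shell
# Hypothetical dispatcher sketch: map the first words of a command
# to a filter name, falling back to `cat` (raw passthrough).
pick_filter() {
  case "$1 $2" in
    "git status"*) echo "compress_git_status" ;;
    "git log"*)    echo "compress_git_log" ;;
    "docker ps"*)  echo "compress_docker_ps" ;;
    *)             echo "cat" ;;   # unknown command: output passes through unchanged
  esac
}
```

Defaulting to cat is what makes step 4 safe: an unrecognized command is never altered.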

Safety First

oktk never breaks your workflow:

Try specialized filter
    ↓ fails?
Try basic filter  
    ↓ fails?
Return raw output (same as without oktk)

Worst case: you get the normal, unfiltered output.
Best case: 90% token savings.
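
The fallback chain can be sketched as a wrapper that tries each filter in turn and only keeps a filter's output if it exits successfully. The two filter functions here are placeholders for illustration, not oktk's real filters:

```shell
# Placeholder filters for the sketch: the specialized one always fails,
# the basic one just reports a line count.
specialized_filter() { return 1; }
basic_filter() { printf '%d lines\n' "$(grep -c .)"; }

# Try each filter in order; if every filter fails, return the raw output.
safe_filter() {
  raw=$(cat)
  out=$(printf '%s\n' "$raw" | specialized_filter 2>/dev/null) && { printf '%s\n' "$out"; return 0; }
  out=$(printf '%s\n' "$raw" | basic_filter 2>/dev/null) && { printf '%s\n' "$out"; return 0; }
  printf '%s\n' "$raw"
}
```

Capturing each filter's output before printing it means a filter that dies halfway through never leaks a partial summary; the wrapper either emits a complete summary or the untouched raw text.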

Usage

Global Command (Recommended)

After installation, oktk is available globally:

# Pipe any command through oktk
git status | oktk git status
docker ps | oktk docker ps
kubectl get pods | oktk kubectl get pods

# See your total savings
oktk --stats

# Bypass filter (get raw)
oktk --raw git status

Shell Aliases (Auto-Filter)

Source the aliases file for automatic filtering:

# Add to ~/.zshrc or ~/.bashrc
source ~/.openclaw/workspace/skills/oktk/scripts/oktk-aliases.sh

Then use short aliases:

gst        # git status (filtered)
glog       # git log (filtered)
dps        # docker ps (filtered)
kpods      # kubectl get pods (filtered)

# Universal wrapper - filter ANY command
ok git status
ok docker ps -a
ok kubectl describe pod my-pod

OpenClaw Integration

When using OpenClaw's exec tool, pipe outputs through oktk:

# In your prompts, ask OpenClaw to:
git status | oktk git status
docker logs container | oktk docker logs

# Or use the 'ok' wrapper (if aliases sourced):
ok git diff HEAD~5

Note: OpenClaw doesn't have a built-in exec output transformer yet. The recommended approach is:

  1. Source the aliases file in your shell
  2. Use ok <command> wrapper for any command
  3. Or manually pipe: <command> | oktk <command>

Real Savings Example

After 1 week of normal usage:

πŸ“Š Token Savings
━━━━━━━━━━━━━━━━
Commands filtered: 1,247
Tokens saved:      456,789 (78%)

πŸ’° At $0.01/1K tokens = $4.57 saved

Installation

Already included in OpenClaw workspace, or:

clawhub install oktk

Made with ❀️ in Lithuania πŸ‡±πŸ‡Ή

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
