token-optimizer

Reduce LLM API costs by optimizing prompts before sending them to cloud providers. Coordinates local and remote code agents with a primary/fallback pipeline.

Safety Notice

This listing is imported from the skills.sh public index metadata. Review the upstream SKILL.md and repository scripts before running.

Copy the command below and send it to your AI assistant to install the skill.

Install skill "token-optimizer" with this command: npx skills add mjohngreene/tokenoptimizer/mjohngreene-tokenoptimizer-token-optimizer

TokenOptimizer

When to Use

  • You have a coding task and want to send it to an LLM API with less context (fewer tokens, lower cost).
  • You want automatic fallback from a cheap provider to a more capable one when credits run out.
  • You want local LLM preprocessing to score relevance and compress context before it hits a paid API.
  • You need to stay within a token budget while keeping the most important context.
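As a rough illustration of the budget idea in the last bullet, here is a minimal Python sketch that greedily keeps the highest-priority context items fitting a token budget. The function name, the priority scheme, and the ~4-characters-per-token estimate are all assumptions made for this example; the real tool counts tokens with tiktoken.

```python
# Illustrative sketch: keep the highest-priority context items that fit a
# token budget. The 4-chars-per-token estimate is a crude heuristic used
# only so this example has no dependencies.
def fit_to_budget(items, budget_tokens):
    """items: list of (priority, text) pairs; higher priority = more important."""
    est = lambda text: max(1, len(text) // 4)  # crude token estimate
    kept, used = [], 0
    for priority, text in sorted(items, key=lambda p: -p[0]):
        cost = est(text)
        if used + cost <= budget_tokens:
            kept.append(text)
            used += cost
    return kept

context = [
    (3, "fn login(user: &str) -> Result<Token, AuthError> { ... }"),
    (1, "// TODO: tidy up logging"),
    (2, "struct Token { value: String, expires: u64 }"),
]
print(fit_to_budget(context, 25))
```

Under a tight budget the low-priority TODO comment is dropped first while the function signature survives, which is the trade-off the bullet describes.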

Quick Start

```sh
# Optimize a prompt with default strategies
token_optimizer optimize --input "Fix the bug in auth" --context src/auth.rs

# Analyze cache potential for Anthropic
token_optimizer cache-optimize --task "Add feature" --context types.rs --static-indices "0"

# Launch the interactive shell (auto-selects a provider)
token_optimizer interactive

# Show the current config
token_optimizer config show primary
```
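The automatic provider selection behind `token_optimizer interactive` can be pictured as a try-each-in-order loop. A minimal Python sketch of that primary -> fallback -> local pattern follows; the provider functions and the `CreditsExhausted` error are made up for this example and are not token-optimizer's real API.

```python
# Illustrative sketch of a primary -> fallback -> local provider pipeline.
# Provider names and CreditsExhausted are invented for the example.
class CreditsExhausted(Exception):
    pass

def cheap_provider(prompt):
    raise CreditsExhausted("out of credits")

def capable_provider(prompt):
    return f"capable: {prompt}"

def local_provider(prompt):
    return f"local: {prompt}"

def complete(prompt, providers):
    """Try each provider in order, falling through to the next on failure."""
    last_error = None
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as err:
            last_error = err
    raise RuntimeError("all providers failed") from last_error

pipeline = [cheap_provider, capable_provider, local_provider]
print(complete("Fix the bug in auth", pipeline))
```

Here the cheap provider fails with an exhausted-credits error, so the request falls through to the more capable one; only if every remote provider fails does the local model answer.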

Capabilities

| Capability | Description |
| --- | --- |
| StripWhitespace | Remove redundant whitespace, preserving code blocks |
| RemoveComments | Strip `//`, `/* */`, and `#` comments from code |
| TruncateContext | Boundary-aware truncation using tiktoken token counts and priority-based boundary detection (code structure > paragraph > sentence > line > word) |
| Abbreviate | Shorten common programming terms in task text |
| LlmCompress | Compress context via a local Ollama LLM |
| RelevanceFilter | Hybrid keyword + LLM relevance scoring; works without a local LLM via keyword-only mode |
| ExtractSignatures | Keep only function/class/struct signatures |
| Deduplicate | Remove exact, whitespace-normalized, and near-duplicate context items |
| CachePrompting | Anthropic-compatible cache breakpoints for static content |
| Provider Fallback | Automatic primary -> fallback -> local provider pipeline |
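Two of these strategies can be sketched without any LLM at hand. Below is a rough Python illustration of keyword-only RelevanceFilter scoring and the whitespace-normalized part of Deduplicate; the regex, the scoring formula, and the example items are invented for this sketch and are not the tool's actual algorithm.

```python
import re

# Illustrative sketches of keyword-only relevance scoring and
# whitespace-normalized deduplication; details invented for the example.
def keyword_relevance(task, item):
    """Fraction of the task's keywords that also appear in the context item."""
    words = lambda s: set(re.findall(r"[a-z_]+", s.lower()))
    task_words = words(task)
    if not task_words:
        return 0.0
    return len(task_words & words(item)) / len(task_words)

def dedup_normalized(items):
    """Drop items that are identical after collapsing runs of whitespace."""
    seen, kept = set(), []
    for item in items:
        key = " ".join(item.split())
        if key not in seen:
            seen.add(key)
            kept.append(item)
    return kept

items = dedup_normalized([
    "fn check_token(t: &Token) -> bool",
    "fn   check_token(t: &Token)   -> bool",  # duplicate after normalizing
    "const MAX_RETRIES: u32 = 3;",
])
scores = [(keyword_relevance("fix token check", item), item) for item in items]
print(sorted(scores, reverse=True)[0][1])
```

Deduplication removes the whitespace-variant copy of the signature, and the keyword score then ranks the token-related line above the unrelated constant, which is the keyword-only fallback mode the table describes.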

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.
