oraclaw-cmaes

CMA-ES continuous optimization for AI agents. State-of-the-art derivative-free optimizer. 10-100x more sample-efficient than genetic algorithms on continuous problems. Hyperparameter tuning, portfolio optimization, parameter calibration.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Copy the following and send it to your AI assistant to install this skill:

Install skill "oraclaw-cmaes" with this command: npx skills add whatsonyourmind/oraclaw-cmaes

OraClaw CMA-ES — SOTA Continuous Optimizer for Agents

You are an optimization agent that uses CMA-ES (Covariance Matrix Adaptation Evolution Strategy) — the gold standard for derivative-free continuous optimization, widely used in industry for hyperparameter tuning.

When to Use This Skill

Use when the user or agent needs to:

  • Optimize continuous parameters (learning rates, weights, thresholds)
  • Tune hyperparameters for ML models
  • Calibrate model parameters to match observed data
  • Find optimal continuous allocations (portfolio weights, pricing)
  • Any black-box optimization where you can evaluate f(x) but don't have gradients

Why CMA-ES vs. Genetic Algorithm?

  • CMA-ES: 10-100x more sample-efficient on smooth continuous problems. Learns the correlation structure of the search space. SOTA for continuous optimization.
  • GA (oraclaw-evolve): Better for discrete/combinatorial problems, multi-objective Pareto frontiers.
  • Use CMA-ES for continuous. Use GA for discrete.
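CMA-ES gets its sample efficiency from adapting both the step size and the full covariance of its sampling distribution. As a rough, self-contained illustration of the evolution-strategy loop it builds on, here is a minimal (1+1)-ES with 1/5th-success-rule step-size adaptation — a deliberately simplified sketch in plain Python, not the full CMA-ES (which additionally learns a covariance matrix from populations of samples):

```python
import math
import random

def sphere(x):
    """Smooth test objective: f(x) = sum(x_i^2), minimum 0 at the origin."""
    return sum(v * v for v in x)

def one_plus_one_es(f, x0, sigma=0.3, iterations=500, seed=0):
    """Minimal (1+1)-ES: mutate, keep the better point, and adapt the
    step size with the 1/5th success rule. CMA-ES generalizes this loop
    by also adapting a covariance matrix over the search distribution."""
    rng = random.Random(seed)
    x, fx = list(x0), f(x0)
    for _ in range(iterations):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                       # success: accept and grow step
            x, fx = y, fy
            sigma *= math.exp(0.4)
        else:                              # failure: shrink step
            sigma *= math.exp(-0.1)        # balanced at a 1/5 success rate
    return x, fx

best_x, best_f = one_plus_one_es(sphere, [0.5, 0.5, 0.5])
```

The growth/shrink factors are chosen so that sigma is stationary exactly at a 1/5 success rate (0.2 · 0.4 = 0.8 · 0.1), which is what keeps the step size tracking the distance to the optimum.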

Tool: optimize_cmaes

{
  "dimension": 3,
  "initialMean": [0.5, 0.5, 0.5],
  "initialSigma": 0.3,
  "maxIterations": 200,
  "objectiveWeights": [2.0, 1.5, 1.0]
}

Returns: bestSolution, bestFitness, iterations, evaluations, converged, executionTimeMs.
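A minimal sketch of assembling a payload with the documented fields — the transport (agent tool invocation) depends on your runtime, and the helper name below is hypothetical, not part of the skill's API:

```python
def build_cmaes_request(dimension, initial_mean=None, initial_sigma=0.3,
                        max_iterations=200, objective_weights=None):
    """Assemble an optimize_cmaes payload (hypothetical helper).
    Defaults the mean to 0.5 per dimension, per the rules for
    normalized parameters."""
    if initial_mean is None:
        initial_mean = [0.5] * dimension
    if len(initial_mean) != dimension:
        raise ValueError("initialMean must have exactly `dimension` entries")
    req = {
        "dimension": dimension,
        "initialMean": initial_mean,
        "initialSigma": initial_sigma,
        "maxIterations": max_iterations,
    }
    if objective_weights is not None:
        req["objectiveWeights"] = objective_weights
    return req

payload = build_cmaes_request(3, objective_weights=[2.0, 1.5, 1.0])
# The documented result fields are: bestSolution, bestFitness,
# iterations, evaluations, converged, executionTimeMs.
```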

Rules

  1. dimension = number of continuous parameters to optimize
  2. initialMean = starting point (center of search). If unknown, use 0.5 for normalized params.
  3. initialSigma = initial step size (0.1-0.5 typical). Too small = slow convergence, too large = unstable.
  4. CMA-ES MINIMIZES the objective. To maximize, negate the weights.
  5. Convergence typically takes on the order of dimension^2 iterations; dimension 10 usually needs ~100-300.
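Rule 4 in practice: to maximize, minimize the negated objective, then negate the returned fitness. A minimal sketch in plain Python, independent of the tool (the toy grid "minimizer" stands in for any minimizer):

```python
def maximize(f, minimize, x0):
    """Maximize f by minimizing its negation. `minimize` is any
    minimizer with signature minimize(g, x0) -> (x_best, g_best)."""
    x_best, neg_best = minimize(lambda x: -f(x), x0)
    return x_best, -neg_best   # undo the negation on the way out

def grid_min(g, candidates):
    """Toy minimizer: exhaustive search over a list of candidates."""
    best = min(candidates, key=g)
    return best, g(best)

f = lambda x: 1.0 - (x - 0.3) ** 2          # peak of 1.0 at x = 0.3
x, fx = maximize(f, grid_min, [0.0, 0.1, 0.2, 0.3, 0.4])
# x == 0.3, fx == 1.0
```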

Pricing

$0.10 per optimization. USDC on Base via x402. Free tier: 1,000 calls/month.

Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

  • Eternal Haven Lore Pack (Automation): Eternal Haven Chronicles lore + mythic persona pack. Use when the agent needs deep narrative context, character arcs, and metaphysical structure from the 4 Eternal Haven books to speak in a more poetic, mythic, or Champion-aligned voice while staying anchored in real events and consistent rules.
  • Creator Alpha Feed (Automation): Collect and rank daily AI content for creator-focused publishing workflows. Use when users ask for AI topic scouting, KOL tracking (especially X/Twitter), practical tutorial picks, industry updates, or automated Feishu/Obsidian briefing pushes with configurable templates and time windows.
  • Evolution Api Go - Evo Go (Automation): Complete WhatsApp automation via Evolution API Go v3 - instances, messages (text/media/polls/carousels), groups, contacts, chats, communities, newsletters, and real-time webhooks.
  • macOS (Automation): macOS system administration, command-line differences from Linux, and automation best practices.