# AIN — AI Node Plugin for OpenClaw

Bridges the AIN provider registry, intelligent routing engine, and execution layer into the OpenClaw ecosystem.
## What it does

- **Provider bridging** — All AIN-configured providers (LM Studio, Ollama, OpenAI, vLLM, etc.) are automatically exposed to OpenClaw as `ain:<name>` providers
- **LLM tools** — Two agent tools: `ain_run` (prompt execution with routing, structured output, and fallback chains) and `ain_classify` (task type and complexity classification)
- **Routing hook** — A `before_model_resolve` hook uses AIN's intelligent routing engine to automatically select the best model for each task, based on policies and task classification
## Installation

```bash
npm install openclaw-plugin-ain
```

Requires `@felipematos/ain-cli` (installed as a dependency).
## Configuration

In your OpenClaw config:

```json
{
  "plugins": {
    "ain": {
      "enableRouting": true,
      "routingPolicy": "local-first",
      "exposeTools": true
    }
  }
}
```
### Options

| Option | Type | Default | Description |
|---|---|---|---|
| `configPath` | string | `~/.ain/config.yaml` | Path to the AIN config file |
| `enableRouting` | boolean | `true` | Enable intelligent model routing |
| `routingPolicy` | string | — | Named routing policy from AIN's `policies.yaml` |
| `exposeTools` | boolean | `true` | Expose the `ain_run` and `ain_classify` tools to agents |
## Tools

### ain_run

Execute an LLM prompt through AIN's execution engine, with full support for routing, structured output, and fallback chains.

Parameters:

- `prompt` (string, required) — The prompt to execute
- `provider` (string) — Provider name
- `model` (string) — Model ID or alias
- `jsonMode` (boolean) — Request JSON output
- `schema` (object) — JSON Schema for output validation
- `system` (string) — System prompt
- `temperature` (number) — Sampling temperature

Returns: `{ output, provider, model, usage, parsedOutput }`
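For example, an agent might call the tool with a structured-output request like the following (the prompt and schema fields are purely illustrative):

```json
{
  "prompt": "Extract the invoice number and total from this email.",
  "jsonMode": true,
  "schema": {
    "type": "object",
    "properties": {
      "invoiceNumber": { "type": "string" },
      "total": { "type": "number" }
    },
    "required": ["invoiceNumber", "total"]
  }
}
```

When `schema` is provided, the validated result is available in `parsedOutput`.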
### ain_classify

Classify a prompt's task type and estimate its complexity.

Parameters:

- `prompt` (string, required) — The prompt to classify

Returns: `{ taskType, complexity }`

Task types: `classification`, `extraction`, `generation`, `reasoning`, `unknown`

Complexity: `low`, `medium`, `high`
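To illustrate the result shape, here is a minimal sketch that produces the same `{ taskType, complexity }` structure. The keyword heuristics and length thresholds below are invented for illustration; the plugin itself uses AIN's classification engine, not this logic.

```typescript
// Hypothetical sketch of the ain_classify result shape.
// The real plugin delegates to AIN's classifier; these heuristics are illustrative only.
type TaskType = "classification" | "extraction" | "generation" | "reasoning" | "unknown";
type Complexity = "low" | "medium" | "high";

interface ClassifyResult {
  taskType: TaskType;
  complexity: Complexity;
}

function naiveClassify(prompt: string): ClassifyResult {
  const p = prompt.toLowerCase();

  // Pick a task type from simple keyword cues.
  let taskType: TaskType = "unknown";
  if (/\b(classify|categorize|label)\b/.test(p)) taskType = "classification";
  else if (/\b(extract|parse)\b/.test(p)) taskType = "extraction";
  else if (/\b(prove|derive|why|step[- ]by[- ]step)\b/.test(p)) taskType = "reasoning";
  else if (/\b(write|generate|draft|summarize)\b/.test(p)) taskType = "generation";

  // Rough proxy: longer prompts tend to be more complex.
  const complexity: Complexity = p.length > 500 ? "high" : p.length > 100 ? "medium" : "low";
  return { taskType, complexity };
}
```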
## Routing

When `enableRouting` is `true`, the plugin registers a `before_model_resolve` hook that analyzes incoming prompts and selects the optimal model based on:

- Task classification (classification/extraction → fast tier, generation → general tier, reasoning → reasoning tier)
- Routing policies defined in `~/.ain/policies.yaml`
- Model tags and tier configuration
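The task-type-to-tier mapping above can be sketched as a simple lookup. This is only an illustration of the mapping described in the list; the actual hook also consults policies, model tags, and tier configuration.

```typescript
// Illustrative tier selection based on the routing rules listed above.
// The real hook combines this with AIN policies and model tags.
type TaskType = "classification" | "extraction" | "generation" | "reasoning" | "unknown";
type Tier = "fast" | "general" | "reasoning";

function tierFor(taskType: TaskType): Tier {
  switch (taskType) {
    case "classification":
    case "extraction":
      return "fast"; // lightweight tasks route to the fast tier
    case "reasoning":
      return "reasoning"; // heavy reasoning gets the reasoning tier
    case "generation":
    default:
      return "general"; // generation and unknown tasks use the general tier
  }
}
```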
## Requirements

- Node.js >= 18
- AIN configured with at least one provider (`ain config init && ain providers add ...`)
- OpenClaw >= 1.0.0
## License
MIT