agent-lightning

Microsoft Research's agent training framework. Optimizes AI agents with Reinforcement Learning, Automatic Prompt Optimization, and Supervised Fine-tuning. Zero code change required. Works with LangChain, AutoGen, CrewAI, OpenAI Agent SDK.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

Install the "agent-lightning" skill by sending this command to your AI assistant:

npx skills add olmmlo-cmd/agent-lightning

Agent Lightning ⚡

Microsoft Research's agent training framework. Turn your AI agents into optimizable beasts with (almost) zero code changes.

Core Features

  • 🔌 Universal Compatibility: Works with LangChain, OpenAI Agent SDK, AutoGen, CrewAI, Microsoft Agent Framework, or plain Python OpenAI
  • 🎯 Selective Optimization: Optimize one or more agents in a multi-agent system
  • 🧠 Multiple Algorithms: Reinforcement Learning (RL), Automatic Prompt Optimization (APO), Supervised Fine-tuning (SFT)
  • ⚡ Zero Code Change: Add agl.emit_xxx() helpers or use tracer — your agent keeps running as usual

Installation

pip install agentlightning

For the latest nightly build:

pip install --upgrade --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple/ --pre agentlightning

Quick Start

1. Instrument Your Agent

Option A: Add emit helpers (recommended)

import agentlightning as agl

# In your agent's tool calls: pass the same model, messages, and tool
# schemas you already send to your LLM client, plus optional context
response = agl.emit_tool_call(
    model=model,          # your existing model name
    messages=messages,    # your existing chat history
    tools=tools,          # your existing tool schemas
    context={"task": "search"}
)

Option B: Use tracer (zero code change)

from agentlightning import tracer

# Wrap your agent with tracer
with tracer.trace("my-agent", input_data):
    result = your_agent.run(user_query)
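Under the hood, a tracer simply records each run's name, inputs, and outputs so the trainer can learn from them later. As a rough mental model (a hand-rolled stand-in, not the real agentlightning tracer):

```python
from contextlib import contextmanager

# Collected run records; a real tracer would ship these to the trainer.
TRACES = []

@contextmanager
def trace(name, input_data):
    record = {"name": name, "input": input_data}
    TRACES.append(record)
    try:
        yield record
    finally:
        record["done"] = True

# The agent body is unchanged; the wrapper only observes it.
with trace("my-agent", "what is 2+2?") as rec:
    rec["output"] = "4"
```

Because the wrapper only observes, your agent keeps running exactly as before whether or not training is active.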

2. Create Training Config

# config.yaml
agent:
  name: "my-agent"
  type: "openai"  # openai, langchain, autogen, crewai

training:
  algorithm: "grpo"  # grpo, apo, sft, rloo
  episodes: 100
  batch_size: 16
  
environment:
  eval_tasks:
    - "math"
    - "coding"
    - "reasoning"

3. Run Training

agent-lightning train --config config.yaml

Algorithms

| Algorithm | Use Case | Description |
| --- | --- | --- |
| GRPO | General RL | Group Relative Policy Optimization: stable, works well for most agents |
| APO | Prompt tuning | Automatic Prompt Optimization: improves system prompts |
| SFT | Supervised fine-tuning | Supervised fine-tuning with preference data |
| RLOO | Long-horizon tasks | REINFORCE Leave-One-Out for tasks with sparse rewards |
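GRPO's central trick is scoring each sampled rollout against the mean reward of its own group, so no separate value network is needed. A simplified numeric sketch of that baseline (real GRPO folds these advantages into a clipped policy loss):

```python
def group_relative_advantages(rewards: list[float]) -> list[float]:
    """Normalize each rollout's reward against its group's mean and std."""
    mean = sum(rewards) / len(rewards)
    std = (sum((r - mean) ** 2 for r in rewards) / len(rewards)) ** 0.5
    return [(r - mean) / (std + 1e-8) for r in rewards]

# Four rollouts of the same task, one clearly better than the rest:
advs = group_relative_advantages([0.0, 0.0, 0.0, 1.0])
# The successful rollout gets a large positive advantage; the failures
# get equal negative advantages.
```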

Usage Commands

agent-lightning train

Train your agent with the configured algorithm.

agent-lightning eval

Evaluate agent on benchmark tasks.

agent-lightning export

Export trained model/prompts for deployment.

agent-lightning serve

Launch serving endpoint for trained agent.

Example: SQL Agent Training

See full example: Train SQL Agent with RL

from agentlightning import Agent, RLConfig, GRPOTrainer

# 1. Define your agent; execute_sql and query_schema are your own tool functions
sql_agent = Agent(
    name="sql-agent",
    system_prompt="You are a SQL expert...",
    tools=[execute_sql, query_schema]
)

# 2. Configure RL training
config = RLConfig(
    algorithm="grpo",
    episodes=500,
    learning_rate=1e-4
)

# 3. Train
trainer = GRPOTrainer(config=config)
trainer.train(sql_agent, eval_tasks=["sql-generation"])
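The missing piece in the example above is a reward signal: something has to score each generated query. A hedged sketch for a SQL task using execution accuracy, assuming you can run both the gold and the generated query against a test database (function and argument names here are illustrative, not agent-lightning API):

```python
def sql_reward(predicted_rows, gold_rows) -> float:
    """1.0 for an exact result match, partial credit for row overlap."""
    if predicted_rows == gold_rows:
        return 1.0
    if not gold_rows:
        return 0.0
    overlap = len(set(map(tuple, predicted_rows)) & set(map(tuple, gold_rows)))
    return overlap / len(gold_rows)

# Exact match earns full reward:
assert sql_reward([(1, "a")], [(1, "a")]) == 1.0
```

Execution-based rewards like this are stricter than string matching: two differently written queries that return the same rows score identically.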

Integration with Clawdbot

Environment Variables

# Required for training
export OPENAI_API_KEY="sk-..."

# Optional: for remote storage
export AGL_STORAGE="s3://my-bucket/agent-lightning/"
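A small preflight check in your launch script can fail fast when the required key is missing and report which storage target will be used (illustrative; the agent-lightning CLI may do its own validation):

```python
import os

def preflight() -> str:
    """Verify required env vars; return the configured storage target."""
    if not os.environ.get("OPENAI_API_KEY"):
        raise RuntimeError("OPENAI_API_KEY is required for training")
    # Fall back to a local default when no remote storage is configured.
    storage = os.environ.get("AGL_STORAGE", "local (~/.agent-lightning)")
    return f"storage: {storage}"
```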

Python API

from agentlightning import LightningStore, GRPOTrainer

# LightningStore keeps tasks, resources, and traces in sync
store = LightningStore()

# Read traces, learn, and update prompts
trainer = GRPOTrainer(store=store)
trainer.train(agent=my_agent)

Monitoring Training

# Launch dashboard
agent-lightning dashboard --port 8080

# View logs
tail -f ~/.agent-lightning/logs/training.log

Best Practices

  1. Start Small: Begin with 10-50 episodes to verify setup
  2. Define Clear Rewards: Design reward functions that match your goal
  3. Use Evaluation Tasks: Always eval on held-out tasks
  4. Checkpoint Frequently: Save model every N episodes
  5. Monitor Convergence: Watch loss curves in dashboard
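Practices 1 and 4 can be sketched as a short run with periodic checkpoints. Here `train_episode()` and `save()` are stand-ins for whatever your trainer exposes; the pattern is what matters:

```python
class DummyTrainer:
    """Stand-in trainer that just counts episodes and records saves."""
    def __init__(self):
        self.episode = 0
        self.checkpoints = []

    def train_episode(self):
        self.episode += 1

    def save(self, path):
        self.checkpoints.append(path)

def run(trainer, episodes=20, checkpoint_every=5):
    # Start small (see practice 1), checkpoint every N episodes (practice 4).
    for _ in range(episodes):
        trainer.train_episode()
        if trainer.episode % checkpoint_every == 0:
            trainer.save(f"ckpt-{trainer.episode}")

t = DummyTrainer()
run(t)
```

With a real trainer, point `save()` at your `AGL_STORAGE` location so interrupted runs can resume from the latest checkpoint.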

Citation

If you use Agent Lightning in research:

@misc{luo2025agentlightningtrainai,
  title={Agent Lightning: Train ANY AI Agents with Reinforcement Learning},
  author={Xufang Luo and Yuge Zhang and Zhiyuan He and Zilong Wang and Siyun Zhao and Dongsheng Li and Luna K. Qiu and Yuqing Yang},
  year={2025},
  eprint={2508.03680},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}

