
Agent Lightning: Adding reinforcement learning to AI agents without code rewrites



AI agents are reshaping software development, from writing code to carrying out complex instructions. Yet LLM-based agents are prone to errors and often perform poorly on difficult, multi-step tasks. Reinforcement learning (RL), an approach in which AI systems learn to make better decisions by receiving rewards or penalties for their actions, can help agents improve through trial and error. But applying RL typically requires developers to extensively rewrite their code. This discourages adoption, even though the data these agents generate could significantly improve performance through RL training.

To address this, a research team from Microsoft Research Asia – Shanghai has introduced Agent Lightning. This open-source framework makes AI agents trainable through RL by separating how agents execute tasks from model training, allowing developers to add RL capabilities with almost no code modification.

Capturing agent behavior for training

Agent Lightning converts an agent's experience into a format that RL can use by treating the agent's execution as a sequence of states and actions, where each state captures the agent's status and each LLM call is an action that moves the agent to a new state.

This approach works for any workflow, no matter how complex. Whether it involves multiple collaborating agents or dynamic tool use, Agent Lightning breaks it down into a sequence of transitions. Each transition captures the LLM's input, output, and reward (Figure 1). This standardized format means the data can be used for training without any additional steps.

Figure 1: Diagram illustrating Agent Lightning’s unified data interface for a retrieval-augmented generation (RAG) agent. On the left, four states (state₀ to state₃) show the agent’s execution flow, where semantic variables—UserInput, Query, Passages, and Answer—are updated after each component call (LLM or Search). Green blocks represent populated variables; gray blocks indicate empty ones. On the right, the unified data interface converts these transitions into a trajectory format containing prompt, generation, and immediate reward for RL training.
Figure 1. An illustration of Agent Lightning's standardized format using a retrieval-augmented generation (RAG) agent. Left: The full agent workflow, where the agent's state updates after each component step. The green blocks show assigned variables, and the gray blocks indicate variables without content. Right: The collected transitions, based on the standardized format for the RL training process, with each transition corresponding to one LLM step and containing its prompt, result, and immediate reward.
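As a rough illustration of this standardized format, the sketch below models a transition record and a trajectory for a RAG-style run like the one in Figure 1. The field and class names are illustrative stand-ins, not Agent Lightning's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Transition:
    """One LLM call, recorded as a state-action step."""
    prompt: str       # full input to the LLM at this step
    generation: str   # the LLM's output (the "action")
    reward: float     # immediate reward for this step (often 0 until the end)

@dataclass
class Trajectory:
    """An agent run flattened into an RL-ready sequence of transitions."""
    task_id: str
    transitions: list[Transition] = field(default_factory=list)

# A two-step RAG run: one LLM call writes the search query,
# a second LLM call writes the answer from retrieved passages.
traj = Trajectory(task_id="rag-001")
traj.transitions.append(Transition(
    prompt="User question: Who founded the Tang dynasty?",
    generation="SEARCH: Tang dynasty founder",
    reward=0.0,
))
traj.transitions.append(Transition(
    prompt="Passages: [retrieved text] Question: Who founded the Tang dynasty?",
    generation="Emperor Gaozu (Li Yuan) founded the Tang dynasty.",
    reward=1.0,  # final answer judged correct
))

print(len(traj.transitions))  # 2
```

Because every transition carries its own prompt, generation, and reward, the same flat record works whether the run involved one agent or several, with or without tool calls.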

Hierarchical reinforcement learning

Conventional RL training for agents that make multiple LLM requests involves stitching all content together into one long sequence and then determining which parts should be learned and which ignored during training. This approach is difficult to implement and can create excessively long sequences that degrade model performance.

Instead, Agent Lightning's LightningRL algorithm takes a hierarchical approach. After a task completes, a credit assignment module determines how much each LLM request contributed to the outcome and assigns it a corresponding reward. These independent steps, now paired with their own reward scores, can be used with any existing single-step RL algorithm, such as Proximal Policy Optimization (PPO) or Group Relative Policy Optimization (GRPO) (Figure 2).

Figure 2: Comparison of three reinforcement learning approaches for LLM tasks. (a) Single-step GRPO: The model completes the task in one call, and multiple outputs for the same task are compared with associated rewards. (b) Previous multi-step GRPO: The task spans multiple LLM calls, forming trajectories; non-LLM tokens (gray boxes) are ignored during training, and entire multi-step runs are compared. (c) LightningRL: Breaks multi-step runs into individual LLM calls, each including input, context, output, and reward assigned by a credit assignment module. Calls from the same task are grouped for reinforcement.
Figure 2. (a) Single-step GRPO: The LLM completes the task in a single call. Multiple responses for the same task are compared to determine how strongly each should be reinforced. (b) Previous multi-step GRPO: The task involves multiple LLM calls. Multiple multi-step runs of the same task are compared, with non-LLM-generated tokens (gray boxes) ignored during training. (c) LightningRL: The multi-step run is divided into individual LLM calls. Calls from the same task are compared to determine how strongly each should be reinforced. Each call includes its input, context, output, and reward, assigned by the credit assignment module.
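To make the credit-assignment idea concrete, here is a minimal sketch: given the final task reward, produce a per-call reward so each LLM call becomes an independent single-step training example. The uniform split below is the simplest possible scheme, chosen for illustration; it is not LightningRL's actual credit assignment module.

```python
def assign_credit(final_reward: float, num_calls: int) -> list[float]:
    """Spread a task's outcome reward uniformly across its LLM calls,
    so each call can be trained on by a single-step RL algorithm."""
    if num_calls == 0:
        return []
    return [final_reward / num_calls] * num_calls

# A task solved in 3 LLM calls that earned a final reward of 6.0:
step_rewards = assign_credit(6.0, 3)
print(step_rewards)  # [2.0, 2.0, 2.0]
```

Once every call has its own reward, the calls no longer need to be concatenated into one long sequence, which is what keeps the training inputs short.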

This design offers several benefits. It remains fully compatible with widely used single-step RL algorithms, allowing existing training methods to be applied without modification. Organizing data as a sequence of independent transitions lets developers flexibly assemble the LLM input as needed, supporting complex behaviors like agents that use multiple tools or work with other agents. Moreover, by keeping sequences short, the approach scales cleanly and keeps training efficient.

Agent Lightning as middleware

Agent Lightning serves as middleware between RL algorithms and agent environments, providing modular components that enable scalable RL through standardized protocols and well-defined interfaces.

An agent runner manages the agents as they complete tasks. It distributes work and collects and stores the results and progress data. It operates separately from the LLMs, enabling them to run on different resources and scale to support multiple agents running concurrently.

An algorithm trains the models and hosts the LLMs used for inference and training. It orchestrates the overall RL cycle, managing which tasks are assigned, how agents complete them, and how models are updated based on what the agents learn. It typically runs on GPU resources and communicates with the agent runner through shared protocols.

The LightningStore serves as the central repository for all data exchanges within the system. It provides standardized interfaces and a shared format, ensuring that the different components can work together and enabling the algorithm and agent runner to communicate effectively.

Figure 3: Diagram showing the architecture of Agent Lightning (AGL). On the left, the AGL Algorithm block includes an inference engine (e.g., vLLM), an algorithm iteration loop, and an adapter for trainable data and weights update. In the center, the AGL Core contains LightningStore, which manages tasks, resources, spans, and LLM calls. On the right, the AGL Agent Runner & Tracer includes a user-defined agent using OpenAI chat completion and agl.emit(). Arrows indicate flows of prompts, responses, tasks, resources, spans, and datasets between components, with roles for algorithm researchers and agent developers highlighted.
Figure 3. The Agent Lightning framework

All RL cycles follow two steps: (1) Agent Lightning collects agent execution data (called “spans”) and stores them in the data store; (2) it then retrieves the required data and sends it to the algorithm for training. Through this design, the algorithm can delegate tasks asynchronously to the agent runner, which completes them and reports the results back (Figure 4).

Figure 4: Diagram of the training loop in Agent Lightning. The central element is ‘Trainer,’ with arrows forming a cycle between three components: Agent on the left, Algorithm on the right, and Trainer in the middle. The top arrow labeled ‘Tasks’ flows from Algorithm to Agent, while the bottom arrow labeled ‘Spans’ flows from Agent to Algorithm. ‘Prompt Templates’ is noted above the cycle, indicating its role in task generation.
Figure 4. Agent Lightning’s RL cycle
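The two-step cycle can be sketched with plain in-memory queues standing in for the LightningStore. All names here are illustrative stand-ins, not Agent Lightning's actual components.

```python
import queue

tasks: "queue.Queue[str]" = queue.Queue()   # algorithm -> agent runner
spans: "queue.Queue[dict]" = queue.Queue()  # agent runner -> algorithm

def agent_runner_step() -> None:
    """Step 1: run the agent on a task and store the execution data (a span)."""
    task = tasks.get()
    result = f"answer for {task}"  # the agent's actual work happens here
    spans.put({"task": task, "generation": result, "reward": 1.0})

def algorithm_step() -> list[dict]:
    """Step 2: retrieve the collected spans and hand them to training."""
    batch = []
    while not spans.empty():
        batch.append(spans.get())
    return batch  # a real algorithm would now update model weights

# One round of the cycle: the algorithm delegates two tasks, the runner
# completes them, and the algorithm retrieves the resulting spans.
tasks.put("task-1")
tasks.put("task-2")
for _ in range(2):
    agent_runner_step()
batch = algorithm_step()
print(len(batch))  # 2
```

Because the two sides only meet at the store, the runner and the algorithm can run on different machines and at different paces, which is what makes the asynchronous delegation possible.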

One key advantage of this approach is its algorithmic flexibility. The system makes it easy for developers to customize how agents learn, whether they’re defining different rewards, capturing intermediate data, or experimenting with different training approaches.

Another advantage is resource efficiency. Agentic RL systems are complex, integrating agentic systems, LLM inference engines, and training frameworks. By separating these components, Agent Lightning makes this complexity manageable and allows each part to be optimized independently.

A decoupled design allows each component to use the hardware that suits it best. The agent runner can use CPUs while model training uses GPUs. Each component can also scale independently, improving efficiency and making the system easier to maintain. In practice, developers can keep their existing agent frameworks and swap model calls to the Agent Lightning API without changing their agent code (Figure 5).

Figure 5: Side-by-side code comparison showing agent implementation before and after integrating Agent Lightning. The left panel (dark background) displays the original agent code written by the developer, including logic for LLM calls, tool usage, and reward assignment. The right panel (light background) shows the modified version using Agent Lightning, where most of the agent logic remains unchanged but includes additional imports and calls to Agent Lightning components such as agl.PromptTemplate, agl.emit(), and agl.Trainer for training and credit assignment. A stylized lightning icon is centered between the two panels.
Figure 5. On the left, the developer’s original agent code. On the right, the code required for Agent Lightning. The main body of the agent code is unchanged.

Evaluation across three real-world scenarios

Agent Lightning was tested on three distinct tasks, achieving consistent performance improvements across all scenarios (Figure 6):

Text-to-SQL (LangChain implementation): In a system with three agents handling SQL generation, checking, and rewriting, Agent Lightning simultaneously optimized two of them, significantly improving the accuracy of generating executable SQL from natural language queries.

Retrieval-augmented generation (OpenAI Agents SDK implementation): On the multi-hop question-answering dataset MuSiQue, which requires querying a large Wikipedia database, Agent Lightning helped the agent generate more effective search queries and reason better over retrieved content.

Mathematical QA and tool use (AutoGen implementation): For complex math problems, Agent Lightning trained LLMs to more accurately determine when and how to call the tool and integrate the results into their reasoning, increasing accuracy.

Figure 6: Figure with six line charts showing reward curves across three evaluation scenarios (Spider, MuSiQue, Calculator) for train and test splits. Top row: Train Rewards on Spider, MuSiQue, and Calculator—each plot shows a blue line with noisy upward trend over steps, indicating increasing rewards; Spider and Calculator rise faster with more variance, MuSiQue climbs more gradually. Bottom row: Test Rewards on Spider, MuSiQue, and Calculator—each plot shows a blue line that increases and then stabilizes at higher rewards; Calculator reaches near-plateau earliest, Spider shows steady gains with minor fluctuations, MuSiQue improves more slowly. All plots use ‘Steps’ on the x‑axis and ‘Rewards’ on the y‑axis, with a legend labeled ‘ours’ and light gridlines.
Figure 6. Reward curves across the three evaluation scenarios

Enabling continuous agent improvement

By simplifying RL integration, Agent Lightning can make it easier for developers to build, iterate, and deploy high-performance agents. We plan to expand Agent Lightning’s capabilities to include automatic prompt optimization and additional RL algorithms.

The framework is designed to serve as an open platform where any AI agent can improve through real-world practice. By bridging existing agentic systems with reinforcement learning, Agent Lightning aims to help create AI systems that learn from experience and improve over time.


