Loop Control
You can control both the execution flow and the settings at each step of the agent loop. The loop continues until:
- A finish reason other than tool-calls is returned, or
- A tool that is invoked does not have an execute function (see the sketch after this list), or
- A tool call needs approval, or
- A stop condition is met
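For example, defining a tool without an execute function stops the loop and hands the tool call back to you (for client-side handling or approval). The following is a minimal sketch, assuming the tool helper from 'ai' and a zod input schema; the getLocation tool is hypothetical:

```ts
import { ToolLoopAgent, tool } from 'ai';
import { z } from 'zod';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    // Hypothetical tool without an execute function: when the model calls it,
    // the loop stops and the tool call is returned to you for handling
    // outside the loop (e.g. on the client or after approval).
    getLocation: tool({
      description: 'Ask the user to share their current location',
      inputSchema: z.object({ reason: z.string() }),
      // no execute function
    }),
  },
});
```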
The AI SDK provides built-in loop control through two parameters: stopWhen for defining stopping conditions and prepareStep for modifying settings (model, tools, messages, and more) between steps.
Stop Conditions
The stopWhen parameter controls when to stop execution once the last step contains tool results. By default, agents stop after 20 steps using stepCountIs(20).
When you provide stopWhen, the agent continues executing after tool calls until a stopping condition is met. When the condition is an array, execution stops when any of the conditions are met.
Use Built-in Conditions
The AI SDK provides several built-in stopping conditions:
```ts
import { ToolLoopAgent, stepCountIs } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  stopWhen: stepCountIs(20), // Default: stop after 20 steps maximum
});

const result = await agent.generate({
  prompt: 'Analyze this dataset and create a summary report',
});
```

Combine Multiple Conditions
Combine multiple stopping conditions. The loop stops when it meets any condition:
```ts
import { ToolLoopAgent, stepCountIs, hasToolCall } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  stopWhen: [
    stepCountIs(20), // Maximum 20 steps
    hasToolCall('someTool'), // Stop after calling 'someTool'
  ],
});

const result = await agent.generate({
  prompt: 'Research and analyze the topic',
});
```

Create Custom Conditions
Build custom stopping conditions for specific requirements:
```ts
import { ToolLoopAgent, StopCondition, ToolSet } from 'ai';

const tools = {
  // your tools
} satisfies ToolSet;

const hasAnswer: StopCondition<typeof tools> = ({ steps }) => {
  // Stop when the model generates text containing "ANSWER:"
  return steps.some(step => step.text?.includes('ANSWER:'));
};

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools,
  stopWhen: hasAnswer,
});

const result = await agent.generate({
  prompt: 'Find the answer and respond with "ANSWER: [your answer]"',
});
```

Custom conditions receive step information across all steps:

```ts
const budgetExceeded: StopCondition<typeof tools> = ({ steps }) => {
  const totalUsage = steps.reduce(
    (acc, step) => ({
      inputTokens: acc.inputTokens + (step.usage?.inputTokens ?? 0),
      outputTokens: acc.outputTokens + (step.usage?.outputTokens ?? 0),
    }),
    { inputTokens: 0, outputTokens: 0 },
  );

  const costEstimate =
    (totalUsage.inputTokens * 0.01 + totalUsage.outputTokens * 0.03) / 1000;
  return costEstimate > 0.5; // Stop if cost exceeds $0.50
};
```

Prepare Step
The prepareStep callback runs before each step in the loop; if you don't return any changes, the step uses the initial settings. Use it to modify settings, manage context, or implement dynamic behavior based on execution history.
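For example, a prepareStep that only inspects the loop state and returns an empty object leaves every step running with the initial model, tools, and messages. A minimal sketch (the logging is purely illustrative):

```ts
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  prepareStep: async ({ stepNumber, steps }) => {
    // Observe the loop without changing anything
    console.log(`Starting step ${stepNumber} after ${steps.length} completed steps`);

    // Returning no overrides keeps the initial settings for this step
    return {};
  },
});
```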
Dynamic Model Selection
Switch models based on step requirements:
```ts
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o-mini', // Default model
  tools: {
    // your tools
  },
  prepareStep: async ({ stepNumber, messages }) => {
    // Use a stronger model for complex reasoning after initial steps
    if (stepNumber > 2 && messages.length > 10) {
      return {
        model: 'openai/gpt-4o',
      };
    }

    // Continue with default settings
    return {};
  },
});

const result = await agent.generate({
  prompt: '...',
});
```

Context Management
Manage growing conversation history in long-running loops:
```ts
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  prepareStep: async ({ messages }) => {
    // Keep only recent messages to stay within context limits
    if (messages.length > 20) {
      return {
        messages: [
          messages[0], // Keep system message
          ...messages.slice(-10), // Keep last 10 messages
        ],
      };
    }
    return {};
  },
});

const result = await agent.generate({
  prompt: '...',
});
```

Tool Selection
Control which tools are available at each step:
```ts
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    search: searchTool,
    analyze: analyzeTool,
    summarize: summarizeTool,
  },
  prepareStep: async ({ stepNumber, steps }) => {
    // Search phase (steps 0-2)
    if (stepNumber <= 2) {
      return {
        activeTools: ['search'],
        toolChoice: 'required',
      };
    }

    // Analysis phase (steps 3-5)
    if (stepNumber <= 5) {
      return {
        activeTools: ['analyze'],
      };
    }

    // Summary phase (step 6+)
    return {
      activeTools: ['summarize'],
      toolChoice: 'required',
    };
  },
});

const result = await agent.generate({
  prompt: '...',
});
```

You can also force a specific tool to be used:

```ts
prepareStep: async ({ stepNumber }) => {
  if (stepNumber === 0) {
    // Force the search tool to be used first
    return {
      toolChoice: { type: 'tool', toolName: 'search' },
    };
  }

  if (stepNumber === 5) {
    // Force the summarize tool after analysis
    return {
      toolChoice: { type: 'tool', toolName: 'summarize' },
    };
  }

  return {};
};
```

Message Modification
Transform messages before sending them to the model:
```ts
import { ToolLoopAgent } from 'ai';

const agent = new ToolLoopAgent({
  model: 'openai/gpt-4o',
  tools: {
    // your tools
  },
  prepareStep: async ({ messages, stepNumber }) => {
    // Summarize tool results to reduce token usage
    const processedMessages = messages.map(msg => {
      if (msg.role === 'tool' && msg.content.length > 1000) {
        return {
          ...msg,
          content: summarizeToolResult(msg.content),
        };
      }
      return msg;
    });

    return { messages: processedMessages };
  },
});

const result = await agent.generate({
  prompt: '...',
});
```

Access Step Information
Both stopWhen and prepareStep receive detailed information about the current execution:
```ts
prepareStep: async ({
  model, // Current model configuration
  stepNumber, // Current step number (0-indexed)
  steps, // All previous steps with their results
  messages, // Messages to be sent to the model
}) => {
  // Access previous tool calls and results
  const previousToolCalls = steps.flatMap(step => step.toolCalls);
  const previousResults = steps.flatMap(step => step.toolResults);

  // Make decisions based on execution history
  if (previousToolCalls.some(call => call.toolName === 'dataAnalysis')) {
    return {
      toolChoice: { type: 'tool', toolName: 'reportGenerator' },
    };
  }

  return {};
},
```

Manual Loop Control
For scenarios requiring complete control over the agent loop, you can use AI SDK Core functions (generateText and streamText) to implement your own loop management instead of using stopWhen and prepareStep. This approach provides maximum flexibility for complex workflows.
Implementing a Manual Loop
Build your own agent loop when you need full control over execution:
```ts
import { generateText, ModelMessage } from 'ai';

const messages: ModelMessage[] = [{ role: 'user', content: '...' }];

let step = 0;
const maxSteps = 10;

while (step < maxSteps) {
  const result = await generateText({
    model: 'openai/gpt-4o',
    messages,
    tools: {
      // your tools here
    },
  });

  messages.push(...result.response.messages);

  if (result.text) {
    break; // Stop when model generates text
  }

  step++;
}
```

This manual approach gives you complete control over the following (a combined sketch follows the list):
- Message history management
- Step-by-step decision making
- Custom stopping conditions
- Dynamic tool and model selection
- Error handling and recovery
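As a rough illustration of several of these points, the sketch below extends the manual loop with a token-budget stop, step-dependent model selection, and basic error recovery. The budget, model choices, and retry behavior are illustrative assumptions, not SDK defaults:

```ts
import { generateText, ModelMessage } from 'ai';

const messages: ModelMessage[] = [{ role: 'user', content: '...' }];
const maxSteps = 10;
const tokenBudget = 50_000; // arbitrary budget for this sketch

let step = 0;
let tokensUsed = 0;

while (step < maxSteps && tokensUsed < tokenBudget) {
  try {
    const result = await generateText({
      // Dynamic model selection: escalate to a stronger model later in the loop
      model: step < 3 ? 'openai/gpt-4o-mini' : 'openai/gpt-4o',
      messages,
      tools: {
        // your tools here
      },
    });

    // Custom stopping condition: track token usage across steps
    tokensUsed +=
      (result.usage?.inputTokens ?? 0) + (result.usage?.outputTokens ?? 0);

    // Message history management
    messages.push(...result.response.messages);

    if (result.text) {
      break; // Stop when the model generates text
    }
  } catch (error) {
    // Error handling and recovery: log the failure; the unchanged message
    // history is retried on the next iteration
    console.error(`Step ${step} failed:`, error);
  }

  step++;
}
```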