Tool Calling
As covered under Foundations, tools are objects that can be called by the model to perform a specific task. AI SDK Core tools contain several core elements:
- `description`: An optional description of the tool that can influence when the tool is picked.
- `inputSchema`: A Zod schema or a JSON schema that defines the input parameters. The schema is consumed by the LLM, and also used to validate the LLM tool calls.
- `execute`: An optional async function that is called with the inputs from the tool call. It produces a value of type `RESULT` (generic type). It is optional because you might want to forward tool calls to the client or to a queue instead of executing them in the same process.
- `strict`: An optional boolean that enables strict tool calling when supported by the provider.
You can use the tool helper function to
infer the types of the execute parameters.
The tools parameter of generateText and streamText is an object that has the tool names as keys and the tools as values:
```ts
import { z } from 'zod';
import { generateText, tool, stepCountIs } from 'ai';

const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  stopWhen: stepCountIs(5),
  prompt: 'What is the weather in San Francisco?',
});
```

When a model uses a tool, it is called a "tool call" and the output of the tool is called a "tool result".
Tool calling is not restricted to only text generation. You can also use it to render user interfaces (Generative UI).
Strict Mode
When enabled, language model providers that support strict tool calling will only generate tool calls that are valid according to your defined inputSchema.
This increases the reliability of tool calling.
However, not all schemas may be supported in strict mode, and what is supported depends on the specific provider.
By default, strict mode is disabled. You can enable it per-tool by setting strict: true:
```ts
tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    location: z.string(),
  }),
  strict: true, // Enable strict validation for this tool
  execute: async ({ location }) => ({
    // ...
  }),
});
```

Not all providers or models support strict mode. For those that do not, this option is ignored.
Input Examples
You can specify example inputs for your tools to help guide the model on how input data should be structured. When supported by the provider, input examples can help when the JSON schema itself does not fully specify the intended usage, or when there are optional values.
```ts
tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  inputExamples: [
    { input: { location: 'San Francisco' } },
    { input: { location: 'London' } },
  ],
  execute: async ({ location }) => {
    // ...
  },
});
```

Only the Anthropic provider supports tool input examples natively. Other providers ignore this setting.
Tool Execution Approval
By default, tools with an execute function run automatically as the model calls them. You can require approval before execution by setting needsApproval:
```ts
import { tool } from 'ai';
import { z } from 'zod';

const runCommand = tool({
  description: 'Run a shell command',
  inputSchema: z.object({
    command: z.string().describe('The shell command to execute'),
  }),
  needsApproval: true,
  execute: async ({ command }) => {
    // your command execution logic here
  },
});
```

This is useful for tools that perform sensitive operations such as executing commands, processing payments, modifying data, and other potentially dangerous actions.
How It Works
When a tool requires approval, generateText and streamText don't pause execution. Instead, they complete and return tool-approval-request parts in the result content. This means the approval flow requires two calls to the model: the first returns the approval request, and the second (after receiving the approval response) either executes the tool or informs the model that approval was denied.
Here's the complete flow:
- Call `generateText` with a tool that has `needsApproval: true`
- The model generates a tool call
- `generateText` returns with `tool-approval-request` parts in `result.content`
- Your app requests an approval and collects the user's decision
- Add a `tool-approval-response` to the messages array
- Call `generateText` again with the updated messages
- If approved, the tool runs and returns a result. If denied, the model sees the denial and responds accordingly.
Handling Approval Requests
After calling generateText or streamText, check result.content for tool-approval-request parts:
```ts
import { type ModelMessage, generateText } from 'ai';

const messages: ModelMessage[] = [
  { role: 'user', content: 'Remove the most recent file' },
];

const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: { runCommand },
  messages,
});

messages.push(...result.response.messages);

for (const part of result.content) {
  if (part.type === 'tool-approval-request') {
    console.log(part.approvalId); // Unique ID for this approval request
    console.log(part.toolCall); // Contains toolName, input, etc.
  }
}
```

To respond, create a `tool-approval-response` and add it to your messages:
```ts
import { type ToolApprovalResponse } from 'ai';

const approvals: ToolApprovalResponse[] = [];

for (const part of result.content) {
  if (part.type === 'tool-approval-request') {
    const response: ToolApprovalResponse = {
      type: 'tool-approval-response',
      approvalId: part.approvalId,
      approved: true, // or false to deny
      reason: 'User confirmed the command', // Optional context for the model
    };
    approvals.push(response);
  }
}

// add approvals to messages
messages.push({ role: 'tool', content: approvals });
```

Then call `generateText` again with the updated messages. If approved, the tool executes. If denied, the model receives the denial and can respond accordingly.
When a tool execution is denied, consider adding a system instruction like "When a tool execution is not approved, do not retry it" to prevent the model from attempting the same call again.
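Here is a minimal sketch of the second call that includes such a system instruction (reusing the `runCommand` tool and the `messages` array from the snippets above):

```ts
const secondResult = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  system: 'When a tool execution is not approved, do not retry it.',
  tools: { runCommand },
  messages, // now includes the tool-approval-response
});
```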
Dynamic Approval
You can make approval decisions based on tool input by providing an async function:
```ts
const paymentTool = tool({
  description: 'Process a payment',
  inputSchema: z.object({
    amount: z.number(),
    recipient: z.string(),
  }),
  needsApproval: async ({ amount }) => amount > 1000,
  execute: async ({ amount, recipient }) => {
    return await processPayment(amount, recipient);
  },
});
```

In this example, only transactions over $1000 require approval. Smaller transactions execute automatically.
Tool Execution Approval with useChat
When using useChat, the approval flow is handled through UI state. See Chatbot Tool Usage for details on handling approvals in your UI with addToolApprovalResponse.
Multi-Step Calls (using stopWhen)
With the stopWhen setting, you can enable multi-step calls in generateText and streamText. When stopWhen is set and the model generates a tool call, the AI SDK will trigger a new generation passing in the tool result until there are no further tool calls or the stopping condition is met.
The stopWhen conditions are only evaluated when the last step contains tool
results.
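`stopWhen` also accepts an array of conditions, in which case generation stops as soon as any condition matches. Here is a minimal sketch, assuming a hypothetical `finalAnswer` tool is defined in `tools`:

```ts
import { generateText, stepCountIs, hasToolCall } from 'ai';

const result = await generateText({
  // ... model, tools, prompt
  // stop after at most 5 steps, or once the finalAnswer tool has been called:
  stopWhen: [stepCountIs(5), hasToolCall('finalAnswer')],
});
```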
By default, when you use generateText or streamText, it triggers a single generation. This works well for many use cases where you can rely on the model's training data to generate a response. However, when you provide tools, the model now has the choice to either generate a normal text response, or generate a tool call. If the model generates a tool call, its generation is complete and that step is finished.
You may want the model to generate text after the tool has been executed, for example to summarize the tool results in the context of the user's query. In many cases, you may also want the model to use multiple tools in a single response. This is where multi-step calls come in.
You can think of multi-step calls in a similar way to a conversation with a human. When you ask a question, if the person does not have the requisite knowledge in their common knowledge (a model's training data), the person may need to look up information (use a tool) before they can provide you with an answer. In the same way, the model may need to call a tool to get the information it needs to answer your question where each generation (tool call or text generation) is a step.
Example
In the following example, there are two steps:
- Step 1
  - The prompt `'What is the weather in San Francisco?'` is sent to the model.
  - The model generates a tool call.
  - The tool call is executed.
- Step 2
  - The tool result is sent to the model.
  - The model generates a response considering the tool result.
```ts
import { z } from 'zod';
import { generateText, tool, stepCountIs } from 'ai';

const { text, steps } = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  stopWhen: stepCountIs(5), // stop after a maximum of 5 steps if tools were called
  prompt: 'What is the weather in San Francisco?',
});
```

You can use `streamText` in a similar way.

Steps
To access intermediate tool calls and results, you can use the steps property in the result object
or the streamText onFinish callback.
It contains all the text, tool calls, tool results, and more from each step.
Example: Extract tool results from all steps
```ts
import { generateText, stepCountIs } from 'ai';

const { steps } = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  stopWhen: stepCountIs(10),
  // ...
});

// extract all tool calls from the steps:
const allToolCalls = steps.flatMap(step => step.toolCalls);
```

onStepFinish callback
When using generateText or streamText, you can provide an onStepFinish callback that
is triggered when a step is finished,
i.e. all text deltas, tool calls, and tool results for the step are available.
When you have multiple steps, the callback is triggered for each step.
```ts
import { generateText } from 'ai';

const result = await generateText({
  // ...
  onStepFinish({ text, toolCalls, toolResults, finishReason, usage }) {
    // your own logic, e.g. for saving the chat history or recording usage
  },
});
```

prepareStep callback
The prepareStep callback is called before a step is started.
It is called with the following parameters:
- `model`: The model that was passed into `generateText`.
- `stopWhen`: The stopping condition that was passed into `generateText`.
- `stepNumber`: The number of the step that is being executed.
- `steps`: The steps that have been executed so far.
- `messages`: The messages that will be sent to the model for the current step.
- `experimental_context`: The context passed via the `experimental_context` setting (experimental).
You can use it to provide different settings for a step, including modifying the input messages.
```ts
import { generateText } from 'ai';

const result = await generateText({
  // ...
  prepareStep: async ({ model, stepNumber, steps, messages }) => {
    if (stepNumber === 0) {
      return {
        // use a different model for this step:
        model: modelForThisParticularStep,
        // force a tool choice for this step:
        toolChoice: { type: 'tool', toolName: 'tool1' },
        // limit the tools that are available for this step:
        activeTools: ['tool1'],
      };
    }

    // when nothing is returned, the default settings are used
  },
});
```

Message Modification for Longer Agentic Loops
In longer agentic loops, you can use the messages parameter to modify the input messages for each step. This is particularly useful for prompt compression:
```ts
prepareStep: async ({ stepNumber, steps, messages }) => {
  // Compress conversation history for longer loops
  if (messages.length > 20) {
    return {
      messages: messages.slice(-10),
    };
  }

  return {};
},
```

Response Messages
Adding the generated assistant and tool messages to your conversation history is a common task, especially if you are using multi-step tool calls.
Both generateText and streamText have a response.messages property that you can use to
add the assistant and tool messages to your conversation history.
It is also available in the onFinish callback of streamText.
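For `streamText`, here is a minimal sketch using the `onFinish` callback (assuming a pre-existing `messages` history array):

```ts
import { streamText, ModelMessage } from 'ai';

const messages: ModelMessage[] = [
  // ... existing conversation history
];

const result = streamText({
  // ... model, tools, prompt
  messages,
  onFinish({ response }) {
    // append the generated assistant and tool messages once the stream finishes
    messages.push(...response.messages);
  },
});
```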
The response.messages property contains an array of ModelMessage objects that you can add to your conversation history:
```ts
import { generateText, ModelMessage } from 'ai';

const messages: ModelMessage[] = [
  // ...
];

const { response } = await generateText({
  // ...
  messages,
});

// add the response messages to your conversation history:
messages.push(...response.messages); // streamText: ...((await response).messages)
```

Dynamic Tools
AI SDK Core supports dynamic tools for scenarios where tool schemas are not known at compile time. This is useful for:
- MCP (Model Context Protocol) tools without schemas
- User-defined functions at runtime
- Tools loaded from external sources
Using dynamicTool
The dynamicTool helper creates tools with unknown input/output types:
```ts
import { dynamicTool } from 'ai';
import { z } from 'zod';

const customTool = dynamicTool({
  description: 'Execute a custom function',
  inputSchema: z.object({}),
  execute: async input => {
    // input is typed as 'unknown'
    // You need to validate/cast it at runtime
    const { action, parameters } = input as any;

    // Execute your dynamic logic
    return { result: `Executed ${action}` };
  },
});
```

Type-Safe Handling
When using both static and dynamic tools, use the dynamic flag for type narrowing:
```ts
const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: {
    // Static tool with known types
    weather: weatherTool,
    // Dynamic tool
    custom: dynamicTool({
      /* ... */
    }),
  },
  onStepFinish: ({ toolCalls, toolResults }) => {
    // Type-safe iteration
    for (const toolCall of toolCalls) {
      if (toolCall.dynamic) {
        // Dynamic tool: input is 'unknown'
        console.log('Dynamic:', toolCall.toolName, toolCall.input);
        continue;
      }

      // Static tool: full type inference
      switch (toolCall.toolName) {
        case 'weather':
          console.log(toolCall.input.location); // typed as string
          break;
      }
    }
  },
});
```

Preliminary Tool Results
You can return an AsyncIterable over multiple results.
In this case, the last value from the iterable is the final tool result.
This can be used in combination with generator functions, e.g. to stream status information during the tool execution:
```ts
tool({
  description: 'Get the current weather.',
  inputSchema: z.object({
    location: z.string(),
  }),
  async *execute({ location }) {
    yield {
      status: 'loading' as const,
      text: `Getting weather for ${location}`,
      weather: undefined,
    };

    await new Promise(resolve => setTimeout(resolve, 3000));

    const temperature = 72 + Math.floor(Math.random() * 21) - 10;

    yield {
      status: 'success' as const,
      text: `The weather in ${location} is ${temperature}°F`,
      temperature,
    };
  },
});
```

Tool Choice
You can use the toolChoice setting to influence when a tool is selected.
It supports the following settings:
- `auto` (default): the model can choose whether and which tools to call.
- `required`: the model must call a tool. It can choose which tool to call.
- `none`: the model must not call tools.
- `{ type: 'tool', toolName: string (typed) }`: the model must call the specified tool.
```ts
import { z } from 'zod';
import { generateText, tool } from 'ai';

const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        location,
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
    }),
  },
  toolChoice: 'required', // force the model to call a tool
  prompt: 'What is the weather in San Francisco?',
});
```

Tool Execution Options
When tools are called, they receive additional options as a second parameter.
Tool Call ID
The ID of the tool call is forwarded to the tool execution. You can use it e.g. when sending tool-call related information with stream data.
```ts
import {
  streamText,
  tool,
  createUIMessageStream,
  createUIMessageStreamResponse,
} from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  const stream = createUIMessageStream({
    execute: ({ writer }) => {
      const result = streamText({
        // ...
        messages,
        tools: {
          myTool: tool({
            // ...
            execute: async (args, { toolCallId }) => {
              // return e.g. custom status for tool call
              writer.write({
                type: 'data-tool-status',
                id: toolCallId,
                data: {
                  name: 'myTool',
                  status: 'in-progress',
                },
              });
              // ...
            },
          }),
        },
      });

      writer.merge(result.toUIMessageStream());
    },
  });

  return createUIMessageStreamResponse({ stream });
}
```

Messages
The messages that were sent to the language model to initiate the response that contained the tool call are forwarded to the tool execution.
You can access them in the second parameter of the execute function.
In multi-step calls, the messages contain the text, tool calls, and tool results from all previous steps.
```ts
import { generateText, tool } from 'ai';

const result = await generateText({
  // ...
  tools: {
    myTool: tool({
      // ...
      execute: async (args, { messages }) => {
        // use the message history in e.g. calls to other language models
        return { /* ... */ };
      },
    }),
  },
});
```

Abort Signals
The abort signals from generateText and streamText are forwarded to the tool execution.
You can access them in the second parameter of the execute function and e.g. abort long-running computations or forward them to fetch calls inside tools.
```ts
import { z } from 'zod';
import { generateText, tool } from 'ai';

const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  abortSignal: myAbortSignal, // signal that will be forwarded to tools
  tools: {
    weather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({ location: z.string() }),
      execute: async ({ location }, { abortSignal }) => {
        return fetch(
          `https://api.weatherapi.com/v1/current.json?q=${location}`,
          { signal: abortSignal }, // forward the abort signal to fetch
        );
      },
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});
```

Context (experimental)
You can pass in arbitrary context from generateText or streamText via the experimental_context setting.
This context is available in the experimental_context tool execution option.
```ts
const result = await generateText({
  // ...
  tools: {
    someTool: tool({
      // ...
      execute: async (input, { experimental_context: context }) => {
        const typedContext = context as { example: string }; // or use a type validation library
        // ...
      },
    }),
  },
  experimental_context: { example: '123' },
});
```

Tool Input Lifecycle Hooks
The following tool input lifecycle hooks are available:
- `onInputStart`: Called when the model starts generating the input (arguments) for the tool call
- `onInputDelta`: Called for each chunk of text as the input is streamed
- `onInputAvailable`: Called when the complete input is available and validated
onInputStart and onInputDelta are only called in streaming contexts (when using streamText). They are not called when using generateText.
Example
```ts
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: {
    getWeather: tool({
      description: 'Get the weather in a location',
      inputSchema: z.object({
        location: z.string().describe('The location to get the weather for'),
      }),
      execute: async ({ location }) => ({
        temperature: 72 + Math.floor(Math.random() * 21) - 10,
      }),
      onInputStart: () => {
        console.log('Tool call starting');
      },
      onInputDelta: ({ inputTextDelta }) => {
        console.log('Received input chunk:', inputTextDelta);
      },
      onInputAvailable: ({ input }) => {
        console.log('Complete input:', input);
      },
    }),
  },
  prompt: 'What is the weather in San Francisco?',
});
```

Types
Modularizing your code often requires defining types to ensure type safety and reusability. To enable this, the AI SDK provides several helper types for tools, tool calls, and tool results.
You can use them to strongly type your variables, function parameters, and return types
in parts of the code that are not directly related to streamText or generateText.
Each tool call is typed with ToolCall<NAME extends string, ARGS>, depending
on the tool that has been invoked.
Similarly, the tool results are typed with ToolResult<NAME extends string, ARGS, RESULT>.
The tools in streamText and generateText are defined as a ToolSet.
The type inference helpers TypedToolCall<TOOLS extends ToolSet>
and TypedToolResult<TOOLS extends ToolSet> can be used to
extract the tool call and tool result types from the tools.
```ts
import { TypedToolCall, TypedToolResult, generateText, tool } from 'ai';
import { z } from 'zod';

const myToolSet = {
  firstTool: tool({
    description: 'Greets the user',
    inputSchema: z.object({ name: z.string() }),
    execute: async ({ name }) => `Hello, ${name}!`,
  }),
  secondTool: tool({
    description: 'Tells the user their age',
    inputSchema: z.object({ age: z.number() }),
    execute: async ({ age }) => `You are ${age} years old!`,
  }),
};

type MyToolCall = TypedToolCall<typeof myToolSet>;
type MyToolResult = TypedToolResult<typeof myToolSet>;

async function generateSomething(prompt: string): Promise<{
  text: string;
  toolCalls: Array<MyToolCall>; // typed tool calls
  toolResults: Array<MyToolResult>; // typed tool results
}> {
  return generateText({
    model: 'anthropic/claude-sonnet-4.5',
    tools: myToolSet,
    prompt,
  });
}
```

Handling Errors
The AI SDK has three tool-call related errors:
- `NoSuchToolError`: the model tries to call a tool that is not defined in the tools object
- `InvalidToolInputError`: the model calls a tool with inputs that do not match the tool's input schema
- `ToolCallRepairError`: an error that occurred during tool call repair
When tool execution fails (errors thrown by your tool's execute function), the AI SDK adds them as tool-error content parts to enable automated LLM roundtrips in multi-step scenarios.
generateText
generateText throws errors for tool schema validation issues and other failures; you can handle them with a try/catch block. Tool execution errors appear as tool-error parts in the result steps:
```ts
import { generateText, NoSuchToolError, InvalidToolInputError } from 'ai';

try {
  const result = await generateText({
    //...
  });
} catch (error) {
  if (NoSuchToolError.isInstance(error)) {
    // handle the no such tool error
  } else if (InvalidToolInputError.isInstance(error)) {
    // handle the invalid tool inputs error
  } else {
    // handle other errors
  }
}
```

Tool execution errors are available in the result steps:
```ts
const { steps } = await generateText({
  // ...
});

// check for tool errors in the steps
const toolErrors = steps.flatMap(step =>
  step.content.filter(part => part.type === 'tool-error'),
);

toolErrors.forEach(toolError => {
  console.log('Tool error:', toolError.error);
  console.log('Tool name:', toolError.toolName);
  console.log('Tool input:', toolError.input);
});
```

streamText
streamText sends errors as part of the full stream. Tool execution errors appear as tool-error parts, while other errors appear as error parts.
When using toUIMessageStreamResponse, you can pass an onError function to extract the error message from the error part and forward it as part of the stream response:
```ts
import { streamText, NoSuchToolError, InvalidToolInputError } from 'ai';

const result = streamText({
  // ...
});

return result.toUIMessageStreamResponse({
  onError: error => {
    if (NoSuchToolError.isInstance(error)) {
      return 'The model tried to call an unknown tool.';
    } else if (InvalidToolInputError.isInstance(error)) {
      return 'The model called a tool with invalid inputs.';
    } else {
      return 'An unknown error occurred.';
    }
  },
});
```

Tool Call Repair
The tool call repair feature is experimental and may change in the future.
Language models sometimes fail to generate valid tool calls, especially when the input schema is complex or the model is smaller.
If you use multiple steps, those failed tool calls will be sent back to the LLM in the next step to give it an opportunity to fix them. However, you may want to control how invalid tool calls are repaired without requiring additional steps that pollute the message history.
You can use the experimental_repairToolCall option to attempt to repair invalid tool calls with a custom function.
You can use different strategies to repair the tool call:
- Use a model with structured outputs to generate the inputs.
- Send the messages, system prompt, and tool schema to a stronger model to generate the inputs.
- Provide more specific repair instructions based on which tool was called.
Example: Use a model with structured outputs for repair
```ts
import { generateObject, generateText, NoSuchToolError } from 'ai';

const result = await generateText({
  model,
  tools,
  prompt,

  experimental_repairToolCall: async ({
    toolCall,
    tools,
    inputSchema,
    error,
  }) => {
    if (NoSuchToolError.isInstance(error)) {
      return null; // do not attempt to fix invalid tool names
    }

    const tool = tools[toolCall.toolName as keyof typeof tools];

    const { object: repairedArgs } = await generateObject({
      model: 'anthropic/claude-sonnet-4.5',
      schema: tool.inputSchema,
      prompt: [
        `The model tried to call the tool "${toolCall.toolName}"` +
          ` with the following inputs:`,
        JSON.stringify(toolCall.input),
        `The tool accepts the following schema:`,
        JSON.stringify(inputSchema(toolCall)),
        'Please fix the inputs.',
      ].join('\n'),
    });

    return { ...toolCall, input: JSON.stringify(repairedArgs) };
  },
});
```

Example: Use the re-ask strategy for repair
```ts
import { generateText } from 'ai';

const result = await generateText({
  model,
  tools,
  prompt,

  experimental_repairToolCall: async ({
    toolCall,
    tools,
    error,
    messages,
    system,
  }) => {
    const result = await generateText({
      model,
      system,
      messages: [
        ...messages,
        {
          role: 'assistant',
          content: [
            {
              type: 'tool-call',
              toolCallId: toolCall.toolCallId,
              toolName: toolCall.toolName,
              input: toolCall.input,
            },
          ],
        },
        {
          role: 'tool' as const,
          content: [
            {
              type: 'tool-result',
              toolCallId: toolCall.toolCallId,
              toolName: toolCall.toolName,
              output: error.message,
            },
          ],
        },
      ],
      tools,
    });

    const newToolCall = result.toolCalls.find(
      newToolCall => newToolCall.toolName === toolCall.toolName,
    );

    return newToolCall != null
      ? {
          toolCallType: 'function' as const,
          toolCallId: toolCall.toolCallId,
          toolName: toolCall.toolName,
          input: JSON.stringify(newToolCall.input),
        }
      : null;
  },
});
```

Active Tools
Language models can only handle a limited number of tools at a time, depending on the model.
To allow for static typing with a large number of tools while limiting the tools available to the model, the AI SDK provides the activeTools property.
It is an array of tool names that are currently active.
By default, the value is undefined and all tools are active.
```ts
import { generateText } from 'ai';

const { text } = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: myToolSet,
  activeTools: ['firstTool'],
});
```

Multi-modal Tool Results
Multi-modal tool results are experimental and only supported by Anthropic and OpenAI.
In order to send multi-modal tool results, e.g. screenshots, back to the model, they need to be converted into a specific format.
AI SDK Core tools have an optional toModelOutput function
that converts the tool result into a content part.
Here is an example for converting a screenshot into a content part:
```ts
import { anthropic } from '@ai-sdk/anthropic';
import { generateText } from 'ai';
import fs from 'node:fs';

const result = await generateText({
  model: 'anthropic/claude-sonnet-4.5',
  tools: {
    computer: anthropic.tools.computer_20241022({
      // ...
      async execute({ action, coordinate, text }) {
        switch (action) {
          case 'screenshot': {
            return {
              type: 'image',
              data: fs
                .readFileSync('./data/screenshot-editor.png')
                .toString('base64'),
            };
          }
          default: {
            return `executed ${action}`;
          }
        }
      },

      // map to tool result content for LLM consumption:
      toModelOutput(result) {
        return {
          type: 'content',
          value:
            typeof result === 'string'
              ? [{ type: 'text', text: result }]
              : [{ type: 'media', data: result.data, mediaType: 'image/png' }],
        };
      },
    }),
  },
  // ...
});
```

Extracting Tools
Once you start having many tools, you might want to extract them into separate files.
The tool helper function is crucial for this, because it ensures correct type inference.
Here is an example of an extracted tool:
```ts
import { tool } from 'ai';
import { z } from 'zod';

// the `tool` helper function ensures correct type inference:
export const weatherTool = tool({
  description: 'Get the weather in a location',
  inputSchema: z.object({
    location: z.string().describe('The location to get the weather for'),
  }),
  execute: async ({ location }) => ({
    location,
    temperature: 72 + Math.floor(Math.random() * 21) - 10,
  }),
});
```

MCP Tools
The AI SDK supports connecting to Model Context Protocol (MCP) servers to access their tools. MCP enables your AI applications to discover and use tools across various services through a standardized interface.
For detailed information about MCP tools, including initialization, transport options, and usage patterns, see the MCP Tools documentation.
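For quick orientation, here is a minimal sketch of loading MCP tools via the experimental MCP client (the server URL is hypothetical; transport options and lifecycle details are covered in the MCP Tools documentation):

```ts
import { experimental_createMCPClient, generateText } from 'ai';

const mcpClient = await experimental_createMCPClient({
  transport: {
    type: 'sse',
    url: 'https://example.com/mcp', // hypothetical MCP server URL
  },
});

try {
  // discover the tools exposed by the MCP server at runtime
  const tools = await mcpClient.tools();

  const result = await generateText({
    model: 'anthropic/claude-sonnet-4.5',
    tools,
    prompt: 'What is the weather in San Francisco?',
  });

  console.log(result.text);
} finally {
  await mcpClient.close();
}
```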
AI SDK Tools vs MCP Tools
In most cases, you should define your own AI SDK tools for production applications. They provide full control, type safety, and optimal performance. MCP tools are best suited for rapid development iteration and scenarios where users bring their own tools.
| Aspect | AI SDK Tools | MCP Tools |
|---|---|---|
| Type Safety | Full static typing end-to-end | Dynamic discovery at runtime |
| Execution | Same process as your request (low latency) | Separate server (network overhead) |
| Prompt Control | Full control over descriptions and schemas | Controlled by MCP server owner |
| Schema Control | You define and optimize for your model | Controlled by MCP server owner |
| Version Management | Full visibility over updates | Can update independently (version skew risk) |
| Authentication | Same process, no additional auth required | Separate server introduces additional auth complexity |
| Best For | Production applications requiring control and performance | Development iteration, user-provided tools |
Examples
You can see tools in action using various frameworks in the following examples: