Migrate AI SDK 5.x to 6.0 Beta

AI SDK 6 is currently in beta and introduces new capabilities like agents and tool approval. This guide will help you migrate from AI SDK 5.0 to 6.0 Beta. Note that you may want to wait until the stable release for production projects. See the AI SDK 6 Beta announcement for more details on what's new.

  1. Back up your project. If you use a version control system, make sure all changes are committed.
  2. Upgrade to AI SDK 6.0 Beta.
  3. Follow the breaking changes guide below.
  4. Verify your project is working as expected.
  5. Commit your changes.

AI SDK 6.0 Beta Package Versions

You need to update the following packages to the beta versions in your package.json file(s):

  • ai package: 6.0.0-beta (or use the @beta dist-tag)
  • @ai-sdk/provider package: 3.0.0-beta (or use the @beta dist-tag)
  • @ai-sdk/provider-utils package: 4.0.0-beta (or use the @beta dist-tag)
  • @ai-sdk/* packages: 3.0.0-beta (or use the @beta dist-tag for other @ai-sdk packages)

An example upgrade command would be:

pnpm install ai@beta @ai-sdk/react@beta @ai-sdk/openai@beta
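After upgrading, the relevant entries in your package.json might look like the following (package set and exact beta suffixes illustrative; your project may use different providers):

```json
{
  "dependencies": {
    "ai": "6.0.0-beta",
    "@ai-sdk/react": "3.0.0-beta",
    "@ai-sdk/openai": "3.0.0-beta"
  }
}
```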

Codemods

The AI SDK provides codemod transformations to help upgrade your codebase when a feature is deprecated, removed, or otherwise changed.

Codemods are transformations that run on your codebase automatically. They allow you to easily apply many changes without having to manually go through every file.

Codemods are intended as a tool to help you with the upgrade process. They may not cover all of the changes you need to make. You may need to make additional changes manually.

Codemod Table

  • rename-text-embedding-to-embedding: Renames textEmbeddingModel to embeddingModel and textEmbedding to embedding on providers
  • rename-mock-v2-to-v3: Renames V2 mock classes from ai/test to V3 (e.g., MockLanguageModelV2 to MockLanguageModelV3)
  • rename-tool-call-options-to-tool-execution-options: Renames the ToolCallOptions type to ToolExecutionOptions
  • rename-core-message-to-model-message: Renames the CoreMessage type to ModelMessage
  • rename-converttocoremessages-to-converttomodelmessages: Renames the convertToCoreMessages function to convertToModelMessages
  • rename-vertex-provider-metadata-key: Renames google to vertex in providerMetadata and providerOptions for Google Vertex files

AI SDK Core

CoreMessage Removal

The deprecated CoreMessage type and related functions have been removed (PR #10710). Replace convertToCoreMessages with convertToModelMessages.

AI SDK 5
import { convertToCoreMessages, type CoreMessage } from 'ai';
const coreMessages = convertToCoreMessages(messages); // CoreMessage[]
AI SDK 6
import { convertToModelMessages, type ModelMessage } from 'ai';
const modelMessages = convertToModelMessages(messages); // ModelMessage[]

generateObject and streamObject Deprecation

generateObject and streamObject have been deprecated (PR #10754). They will be removed in a future version. Use generateText and streamText with an output setting instead.

AI SDK 5
import { generateObject } from 'ai';
import { z } from 'zod';

const { object } = await generateObject({
  model: "anthropic/claude-sonnet-4.5",
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});
AI SDK 6
import { generateText, Output } from 'ai';
import { z } from 'zod';

const { output } = await generateText({
  model: "anthropic/claude-sonnet-4.5",
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({ name: z.string(), amount: z.string() }),
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

For streaming structured data, replace streamObject with streamText:

AI SDK 5
import { streamObject } from 'ai';
import { z } from 'zod';

const { partialObjectStream } = streamObject({
  model: "anthropic/claude-sonnet-4.5",
  schema: z.object({
    recipe: z.object({
      name: z.string(),
      ingredients: z.array(z.object({ name: z.string(), amount: z.string() })),
      steps: z.array(z.string()),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

for await (const partialObject of partialObjectStream) {
  console.log(partialObject);
}
AI SDK 6
import { streamText, Output } from 'ai';
import { z } from 'zod';

const { partialOutputStream } = streamText({
  model: "anthropic/claude-sonnet-4.5",
  output: Output.object({
    schema: z.object({
      recipe: z.object({
        name: z.string(),
        ingredients: z.array(
          z.object({ name: z.string(), amount: z.string() }),
        ),
        steps: z.array(z.string()),
      }),
    }),
  }),
  prompt: 'Generate a lasagna recipe.',
});

for await (const partialObject of partialOutputStream) {
  console.log(partialObject);
}

Learn more about generating structured data.

cachedInputTokens and reasoningTokens in LanguageModelUsage Deprecation

cachedInputTokens and reasoningTokens in LanguageModelUsage have been deprecated.

You can replace cachedInputTokens with inputTokenDetails.cacheReadTokens and reasoningTokens with outputTokenDetails.reasoningTokens.
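During the migration you may have code that needs to work with results from both SDK versions. A minimal sketch of a fallback accessor (UsageLike below is a hypothetical subset of the usage shape described above, not the SDK's full LanguageModelUsage type):

```typescript
// Hypothetical subset of the usage shape described above.
interface UsageLike {
  cachedInputTokens?: number; // deprecated in AI SDK 6
  reasoningTokens?: number; // deprecated in AI SDK 6
  inputTokenDetails?: { cacheReadTokens?: number };
  outputTokenDetails?: { reasoningTokens?: number };
}

// Prefer the new nested fields, fall back to the deprecated flat ones.
function getCacheReadTokens(usage: UsageLike): number | undefined {
  return usage.inputTokenDetails?.cacheReadTokens ?? usage.cachedInputTokens;
}

function getReasoningTokens(usage: UsageLike): number | undefined {
  return usage.outputTokenDetails?.reasoningTokens ?? usage.reasoningTokens;
}
```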

ToolCallOptions to ToolExecutionOptions Rename

The ToolCallOptions type has been renamed to ToolExecutionOptions; the old ToolCallOptions name is deprecated.

Per-Tool Strict Mode

Strict mode for tools is now controlled by setting strict on each tool (PR #10817). This enables fine-grained control over strict tool calls, which is important since strict mode depends on the specific tool input schema.

AI SDK 5
import { streamText, tool } from 'ai';
import { z } from 'zod';

// Tool strict mode was controlled by strictJsonSchema
const result = streamText({
  model: "anthropic/claude-sonnet-4.5",
  tools: {
    calculator: tool({
      description: 'A simple calculator',
      inputSchema: z.object({
        expression: z.string(),
      }),
      execute: async ({ expression }) => {
        // Demo only: eval is unsafe for untrusted input
        const result = eval(expression);
        return { result };
      },
    }),
  },
  providerOptions: {
    openai: {
      strictJsonSchema: true, // Applied to all tools
    },
  },
});
AI SDK 6
import { streamText, tool } from 'ai';
import { z } from 'zod';

const result = streamText({
  model: "anthropic/claude-sonnet-4.5",
  tools: {
    calculator: tool({
      description: 'A simple calculator',
      inputSchema: z.object({
        expression: z.string(),
      }),
      execute: async ({ expression }) => {
        // Demo only: eval is unsafe for untrusted input
        const result = eval(expression);
        return { result };
      },
      strict: true, // Control strict mode per tool
    }),
  },
});

Flexible Tool Content

AI SDK 6 introduces more flexible tool output and result content support (PR #9605), enabling richer tool interactions and better support for complex tool execution patterns.

ToolCallRepairFunction Signature

The system parameter in the ToolCallRepairFunction type now accepts SystemModelMessage in addition to string (PR #10635). This allows for more flexible system message configuration, including provider-specific options like caching.

AI SDK 5
import type { ToolCallRepairFunction } from 'ai';

// MyTools is your tool set type
const repairToolCall: ToolCallRepairFunction<MyTools> = async ({
  system, // type: string | undefined
  messages,
  toolCall,
  tools,
  inputSchema,
  error,
}) => {
  // ...
};
AI SDK 6
import type { ToolCallRepairFunction, SystemModelMessage } from 'ai';

// MyTools is your tool set type
const repairToolCall: ToolCallRepairFunction<MyTools> = async ({
  system, // type: string | SystemModelMessage | undefined
  messages,
  toolCall,
  tools,
  inputSchema,
  error,
}) => {
  // Handle both string and SystemModelMessage
  const systemText = typeof system === 'string' ? system : system?.content;
  // ...
};

Embedding Model Method Rename

The textEmbeddingModel and textEmbedding methods on providers have been renamed to embeddingModel and embedding respectively. Additionally, generics have been removed from EmbeddingModel, embed, and embedMany (PR #10592).

AI SDK 5
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

// Using the full method name
const model = openai.textEmbeddingModel('text-embedding-3-small');
// Or the shorthand (same model)
const shorthandModel = openai.textEmbedding('text-embedding-3-small');

const { embedding } = await embed({
  model: openai.textEmbedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
});
AI SDK 6
import { openai } from '@ai-sdk/openai';
import { embed } from 'ai';

// Using the full method name
const model = openai.embeddingModel('text-embedding-3-small');
// Or the shorthand (same model)
const shorthandModel = openai.embedding('text-embedding-3-small');

const { embedding } = await embed({
  model: openai.embedding('text-embedding-3-small'),
  value: 'sunny day at the beach',
});

Warning Logger

AI SDK 6 introduces a warning logger that outputs deprecation warnings and best practice recommendations (PR #8343).

To disable warning logging, set the AI_SDK_LOG_WARNINGS environment variable to false:

export AI_SDK_LOG_WARNINGS=false

Warning Type Unification

Separate warning types for each generation function have been consolidated into a single Warning type exported from the ai package (PR #10631).

AI SDK 5
// Separate warning types for each generation function
import type {
  CallWarning,
  ImageModelCallWarning,
  SpeechWarning,
  TranscriptionWarning,
} from 'ai';
AI SDK 6
// Single Warning type for all generation functions
import type { Warning } from 'ai';

Providers

OpenAI

strictJsonSchema Defaults to True

The strictJsonSchema setting for JSON outputs and tool calls is enabled by default (PR #10752). This improves stability and ensures valid JSON output that matches your schema.

However, strict mode is stricter about schema requirements. If you receive schema rejection errors, adjust your schema (for example, use null instead of undefined) or disable strict mode.
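For example, OpenAI's strict mode requires every property of an object schema to be listed as required, so a field you previously marked optional is typically remodeled as nullable (e.g. z.string().nullable() instead of z.string().optional()). The objects below are hand-written JSON Schema sketches for illustration only; the SDK derives the actual schema from your Zod definition:

```typescript
// Hand-written JSON Schema sketches, for illustration only.
// Strict mode rejects schemas whose properties are not all required:
const rejectedByStrictMode = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    nickname: { type: 'string' }, // optional -> missing from `required`
  },
  required: ['name'],
  additionalProperties: false,
};

// Model nullability instead: the model emits null when there is no value.
const acceptedByStrictMode = {
  type: 'object',
  properties: {
    name: { type: 'string' },
    nickname: { type: ['string', 'null'] },
  },
  required: ['name', 'nickname'], // every property listed
  additionalProperties: false,
};
```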

AI SDK 5
import { openai } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

// strictJsonSchema was false by default
const result = await generateObject({
  model: openai('gpt-5.1'),
  schema: z.object({
    name: z.string(),
  }),
  prompt: 'Generate a person',
});
AI SDK 6
import { openai, type OpenAIResponsesProviderOptions } from '@ai-sdk/openai';
import { generateObject } from 'ai';
import { z } from 'zod';

// strictJsonSchema is true by default
const result = await generateObject({
  model: openai('gpt-5.1'),
  schema: z.object({
    name: z.string(),
  }),
  prompt: 'Generate a person',
});

// Disable strict mode if needed
const resultNoStrict = await generateObject({
  model: openai('gpt-5.1'),
  schema: z.object({
    name: z.string(),
  }),
  prompt: 'Generate a person',
  providerOptions: {
    openai: {
      strictJsonSchema: false,
    } satisfies OpenAIResponsesProviderOptions,
  },
});

structuredOutputs Option Removed from Chat Model

The structuredOutputs provider option has been removed from chat models (PR #10752). Use strictJsonSchema instead.

Unrecognized Models Treated as Reasoning Models

The @ai-sdk/openai provider now treats unrecognized model IDs as reasoning models by default (PR #9976). Previously, unrecognized models were treated as non-reasoning models.

This change impacts users who configure @ai-sdk/openai with a custom baseUrl to use non-OpenAI models. Reasoning models exclude certain parameters like temperature, which may cause unexpected behavior if the model does not support reasoning. Consider using @ai-sdk/openai-compatible instead.

Azure

Default Provider Uses Responses API

The @ai-sdk/azure provider now uses the Responses API by default when calling azure() (PR #9868). To use the previous Chat Completions API behavior, use azure.chat() instead.

AI SDK 5
import { azure } from '@ai-sdk/azure';
// Used Chat Completions API
const model = azure('gpt-4o');
AI SDK 6
import { azure } from '@ai-sdk/azure';
// Now uses Responses API by default
const model = azure('gpt-4o');
// Use azure.chat() for Chat Completions API
const chatModel = azure.chat('gpt-4o');
// Use azure.responses() explicitly for Responses API
const responsesModel = azure.responses('gpt-4o');

The Responses and Chat Completions APIs have different behavior and defaults. If you depend on the Chat Completions API, switch your model instance to azure.chat() and audit your configuration.

Anthropic

Structured Outputs Mode

Anthropic has introduced native structured outputs for Claude Sonnet 4.5 and later models. The @ai-sdk/anthropic provider now includes a structuredOutputMode option to control how structured outputs are generated (PR #10502).

The available modes are:

  • 'outputFormat': Use Anthropic's native output_format parameter
  • 'jsonTool': Use a special JSON tool to specify the structured output format
  • 'auto' (default): Use 'outputFormat' when supported by the model, otherwise fall back to 'jsonTool'
AI SDK 6
import { anthropic, type AnthropicProviderOptions } from '@ai-sdk/anthropic';
import { generateObject } from 'ai';
import { z } from 'zod';

const result = await generateObject({
  model: anthropic('claude-sonnet-4-5-20250929'),
  schema: z.object({
    name: z.string(),
    age: z.number(),
  }),
  prompt: 'Generate a person',
  providerOptions: {
    anthropic: {
      // Explicitly set the structured output mode (optional)
      structuredOutputMode: 'outputFormat',
    } satisfies AnthropicProviderOptions,
  },
});

Google Vertex

providerMetadata and providerOptions Key

The @ai-sdk/google-vertex provider now uses vertex as the key for providerMetadata and providerOptions instead of google. The google key is still accepted for providerOptions input, but providerMetadata on results now uses the vertex key.

AI SDK 5
import { vertex } from '@ai-sdk/google-vertex';
import { generateText } from 'ai';

const result = await generateText({
  model: vertex('gemini-2.5-flash'),
  providerOptions: {
    // Used the 'google' key
    google: {
      safetySettings: [
        /* ... */
      ],
    },
  },
  prompt: 'Hello',
});

// Accessed metadata via the 'google' key
console.log(result.providerMetadata?.google?.safetyRatings);
AI SDK 6
import { vertex } from '@ai-sdk/google-vertex';
import { generateText } from 'ai';

const result = await generateText({
  model: vertex('gemini-2.5-flash'),
  providerOptions: {
    // Now uses the 'vertex' key
    vertex: {
      safetySettings: [
        /* ... */
      ],
    },
  },
  prompt: 'Hello',
});

// Access metadata via the 'vertex' key
console.log(result.providerMetadata?.vertex?.safetyRatings);
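If your code must handle results from both SDK versions during the migration, a small fallback accessor can bridge the key change (a sketch; ProviderMetadataLike below is a simplified stand-in for the SDK's metadata type):

```typescript
// Simplified stand-in for the SDK's provider metadata shape.
type ProviderMetadataLike =
  | Record<string, Record<string, unknown>>
  | undefined;

// Prefer the new 'vertex' key; fall back to the legacy 'google' key
// for results produced by AI SDK 5.
function getVertexMetadata(
  metadata: ProviderMetadataLike,
): Record<string, unknown> | undefined {
  return metadata?.vertex ?? metadata?.google;
}
```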

ai/test

Mock Classes

V2 mock classes have been removed from the ai/test module. Use the new V3 mock classes instead for testing.

AI SDK 5
import {
  MockEmbeddingModelV2,
  MockImageModelV2,
  MockLanguageModelV2,
  MockProviderV2,
  MockSpeechModelV2,
  MockTranscriptionModelV2,
} from 'ai/test';
AI SDK 6
import {
  MockEmbeddingModelV3,
  MockImageModelV3,
  MockLanguageModelV3,
  MockProviderV3,
  MockSpeechModelV3,
  MockTranscriptionModelV3,
} from 'ai/test';