Ollama Provider
The AI SDK supports Ollama through two community providers:
- nordwestt/ollama-ai-provider-v2 - Direct HTTP API integration
- ai-sdk-ollama - Built on the official Ollama JavaScript client
Both provide language model support for the AI SDK with different approaches and feature sets.
Choosing Your Provider
The AI SDK ecosystem offers multiple Ollama providers, each optimized for different use cases:
For Simple Text Generation
nordwestt/ollama-ai-provider-v2 provides straightforward access to Ollama models with direct HTTP API calls, making it ideal for basic text generation and getting started quickly.
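As a minimal sketch of that getting-started path (the phi3 model name is just an illustration; any model pulled into your local Ollama instance works):

```ts
import { ollama } from 'ollama-ai-provider-v2';
import { generateText } from 'ai';

// Minimal text generation against a locally running Ollama instance.
const { text } = await generateText({
  model: ollama('phi3'), // any locally pulled model works here
  prompt: 'Explain the difference between TCP and UDP in two sentences.',
});

console.log(text);
```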
For Advanced Features & Tool Reliability
ai-sdk-ollama by jagreehal is recommended when you need:
- Reliable tool calling with guaranteed complete responses (solves common empty response issues)
- Web search capabilities using Ollama's new web search API for current information
- Cross-environment support with automatic detection for Node.js and browsers
- Advanced Ollama features like `mirostat`, `repeat_penalty`, and `num_ctx` for fine-tuned control (see the sketch after this list)
- Enhanced reliability with built-in error handling and retries via the official client
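As a rough illustration of those advanced options, passing native Ollama settings through provider options might look like the sketch below. The nested `options` key is an assumption based on the package's documented pattern, not a confirmed API; check the ai-sdk-ollama README for the exact shape:

```ts
import { ollama } from 'ai-sdk-ollama';
import { generateText } from 'ai';

// Sketch: forward native Ollama sampling options via providerOptions.
// The nested `options` object is an assumption; verify against the
// ai-sdk-ollama README before relying on it.
const { text } = await generateText({
  model: ollama('llama3.2'),
  providerOptions: {
    ollama: {
      options: {
        mirostat: 1,         // enable Mirostat sampling
        repeat_penalty: 1.2, // penalize repeated tokens
        num_ctx: 8192,       // context window size
      },
    },
  },
  prompt: 'Summarize the benefits of local inference.',
});
```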
Key technical advantages:
- Built on the official Ollama JavaScript client library
- Supports both CommonJS and ESM module formats
- Full TypeScript support with type-safe Ollama-specific options
Both providers implement the AI SDK specification and offer excellent TypeScript support. Choose based on your project's complexity and feature requirements.
Setup
Choose and install your preferred Ollama provider:
ollama-ai-provider-v2:

```bash
pnpm add ollama-ai-provider-v2
```

ai-sdk-ollama:

```bash
pnpm add ai-sdk-ollama
```
Provider Instance
You can import the default provider instance ollama from ollama-ai-provider-v2:
```ts
import { ollama } from 'ollama-ai-provider-v2';
```

If you need a customized setup, you can import createOllama from ollama-ai-provider-v2 and create a provider instance with your settings:
```ts
import { createOllama } from 'ollama-ai-provider-v2';

const ollama = createOllama({
  // optional settings, e.g.
  baseURL: 'https://api.ollama.com',
});
```

You can use the following optional settings to customize the Ollama provider instance:
- `baseURL` *string*: Use a different URL prefix for API calls, e.g. to use proxy servers. The default prefix is `http://localhost:11434/api`.
- `headers` *Record<string, string>*: Custom headers to include in the requests.
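For instance, pointing the provider at a remote Ollama host behind a proxy might look like this (the URL and header values are illustrative placeholders):

```ts
import { createOllama } from 'ollama-ai-provider-v2';

// A customized instance pointing at a remote Ollama host.
// The URL and the Authorization value are placeholders.
const ollama = createOllama({
  baseURL: 'https://ollama.example.com/api',
  headers: {
    Authorization: 'Bearer <your-token>',
  },
});
```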
Language Models
You can create models that call the Ollama Chat Completion API using the provider instance.
The first argument is the model id, e.g. phi3. Some models have multi-modal capabilities.
```ts
const model = ollama('phi3');
```

You can find more models on the Ollama Library homepage.
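For multi-modal models, you can pass image content using the AI SDK's standard message parts. This is a hedged sketch: the llava model name and the photo.jpg path are assumptions for illustration, and you need a vision-capable model pulled locally:

```ts
import { ollama } from 'ollama-ai-provider-v2';
import { generateText } from 'ai';
import { readFileSync } from 'node:fs';

// Sketch: image input via the AI SDK's standard message parts.
// Assumes a vision-capable model (llava here) is available locally.
const { text } = await generateText({
  model: ollama('llava'),
  messages: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'Describe this image in one sentence.' },
        { type: 'image', image: readFileSync('photo.jpg') },
      ],
    },
  ],
});
```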
Model Capabilities
This provider can use hybrid reasoning models such as qwen3, allowing reasoning to be toggled on or off between messages.
```ts
import { ollama } from 'ollama-ai-provider-v2';
import { generateText } from 'ai';

const { text } = await generateText({
  model: ollama('qwen3:4b'),
  providerOptions: { ollama: { think: true } },
  prompt: 'Write a vegetarian lasagna recipe for 4 people, but really think about it',
});
```

Embedding Models
You can create models that call the Ollama embeddings API
using the .textEmbeddingModel() factory method.
```ts
import { ollama } from 'ollama-ai-provider-v2';
import { embedMany, cosineSimilarity } from 'ai';

const model = ollama.textEmbeddingModel('nomic-embed-text');

// Embed both values in one call, then compare them.
const { embeddings } = await embedMany({
  model,
  values: ['sunny day at the beach', 'rainy afternoon in the city'],
});

console.log(
  `cosine similarity: ${cosineSimilarity(embeddings[0], embeddings[1])}`,
);
```