Chatbot
An example of how to use AI Elements to build a chatbot.
"use client";import { Branch, BranchMessages, BranchNext, BranchPage, BranchPrevious, BranchSelector,} from "@/components/ai-elements/elements/branch";import { Conversation, ConversationContent, ConversationScrollButton,} from "@/components/ai-elements/elements/conversation";import { Message, MessageAvatar, MessageContent } from "@/components/ai-elements/elements/message";import { PromptInput, PromptInputActionAddAttachments, PromptInputActionMenu, PromptInputActionMenuContent, PromptInputActionMenuTrigger, PromptInputAttachment, PromptInputAttachments, PromptInputBody, PromptInputButton, PromptInputFooter, type PromptInputMessage, PromptInputModelSelect, PromptInputModelSelectContent, PromptInputModelSelectItem, PromptInputModelSelectTrigger, PromptInputModelSelectValue, PromptInputSubmit, PromptInputTextarea, PromptInputTools,} from "@/components/ai-elements/elements/prompt-input";import { Reasoning, ReasoningContent, ReasoningTrigger,} from "@/components/ai-elements/elements/reasoning";import { Response } from "@/components/ai-elements/elements/response";import { Source, Sources, SourcesContent, SourcesTrigger,} from "@/components/ai-elements/elements/sources";import { Suggestion, Suggestions } from "@/components/ai-elements/elements/suggestion";import type { ToolUIPart } from "ai";import { GlobeIcon, MicIcon } from "lucide-react";import { nanoid } from "nanoid";import { useCallback, useState } from "react";import { toast } from "sonner";type MessageType = { key: string; from: "user" | "assistant"; sources?: { href: string; title: string }[]; versions: { id: string; content: string; }[]; reasoning?: { content: string; duration: number; }; tools?: { name: string; description: string; status: ToolUIPart["state"]; parameters: Record<string, unknown>; result: string | undefined; error: string | undefined; }[]; avatar: string; name: string;};const initialMessages: MessageType[] = [ { key: nanoid(), from: "user", versions: [ { id: nanoid(), content: "Can you explain how to use React hooks effectively?", }, ], avatar: "https://github.com/haydenbleasel.png", name: "Hayden Bleasel", }, { key: nanoid(), from: "assistant", sources: [ { href: "https://react.dev/reference/react", title: "React Documentation", }, { href: "https://react.dev/reference/react-dom", title: "React DOM Documentation", }, ], tools: [ { name: "mcp", description: "Searching React documentation", status: "input-available", parameters: { query: "React hooks best practices", source: "react.dev", }, result: `{ "query": "React hooks best practices", "results": [ { "title": "Rules of Hooks", "url": "https://react.dev/warnings/invalid-hook-call-warning", "snippet": "Hooks must be called at the top level of your React function components or custom hooks. Don't call hooks inside loops, conditions, or nested functions." }, { "title": "useState Hook", "url": "https://react.dev/reference/react/useState", "snippet": "useState is a React Hook that lets you add state to your function components. It returns an array with two values: the current state and a function to update it." }, { "title": "useEffect Hook", "url": "https://react.dev/reference/react/useEffect", "snippet": "useEffect lets you synchronize a component with external systems. It runs after render and can be used to perform side effects like data fetching." } ]}`, error: undefined, }, ], versions: [ { id: nanoid(), content: `# React Hooks Best PracticesReact hooks are a powerful feature that let you use state and other React features without writing classes. 
Here are some tips for using them effectively:## Rules of Hooks1. **Only call hooks at the top level** of your component or custom hooks2. **Don't call hooks inside loops, conditions, or nested functions**## Common Hooks- **useState**: For local component state- **useEffect**: For side effects like data fetching- **useContext**: For consuming context- **useReducer**: For complex state logic- **useCallback**: For memoizing functions- **useMemo**: For memoizing values## Example of useState and useEffect\`\`\`jsxfunction ProfilePage({ userId }) { const [user, setUser] = useState(null); useEffect(() => { // This runs after render and when userId changes fetchUser(userId).then(userData => { setUser(userData); }); }, [userId]); return user ? <Profile user={user} /> : <Loading />;}\`\`\`Would you like me to explain any specific hook in more detail?`, }, ], avatar: "https://github.com/openai.png", name: "OpenAI", }, { key: nanoid(), from: "user", versions: [ { id: nanoid(), content: "Yes, could you explain useCallback and useMemo in more detail? When should I use one over the other?", }, { id: nanoid(), content: "I'm particularly interested in understanding the performance implications of useCallback and useMemo. Could you break down when each is most appropriate?", }, { id: nanoid(), content: "Thanks for the overview! Could you dive deeper into the specific use cases where useCallback and useMemo make the biggest difference in React applications?", }, ], avatar: "https://github.com/haydenbleasel.png", name: "Hayden Bleasel", }, { key: nanoid(), from: "assistant", reasoning: { content: `The user is asking for a detailed explanation of useCallback and useMemo. I should provide a clear and concise explanation of each hook's purpose and how they differ. The useCallback hook is used to memoize functions to prevent unnecessary re-renders of child components that receive functions as props.The useMemo hook is used to memoize values to avoid expensive recalculations on every render.Both hooks help with performance optimization, but they serve different purposes.`, duration: 10, }, versions: [ { id: nanoid(), content: `## useCallback vs useMemoBoth hooks help with performance optimization, but they serve different purposes:### useCallback\`useCallback\` memoizes **functions** to prevent unnecessary re-renders of child components that receive functions as props.\`\`\`jsx// Without useCallback - a new function is created on every renderconst handleClick = () => { console.log(count);};// With useCallback - the function is only recreated when dependencies changeconst handleClick = useCallback(() => { console.log(count);}, [count]);\`\`\`### useMemo\`useMemo\` memoizes **values** to avoid expensive recalculations on every render.\`\`\`jsx// Without useMemo - expensive calculation runs on every renderconst sortedList = expensiveSort(items);// With useMemo - calculation only runs when items changeconst sortedList = useMemo(() => expensiveSort(items), [items]);\`\`\`### When to use which?- Use **useCallback** when: - Passing callbacks to optimized child components that rely on reference equality - Working with event handlers that you pass to child components- Use **useMemo** when: - You have computationally expensive calculations - You want to avoid recreating objects that are used as dependencies for other hooks### Performance NoteDon't overuse these hooks! They come with their own overhead. 
Only use them when you have identified a genuine performance issue.`, }, ], avatar: "https://github.com/openai.png", name: "OpenAI", },];const models = [ { id: "gpt-4", name: "GPT-4" }, { id: "gpt-3.5-turbo", name: "GPT-3.5 Turbo" }, { id: "claude-2", name: "Claude 2" }, { id: "claude-instant", name: "Claude Instant" }, { id: "palm-2", name: "PaLM 2" }, { id: "llama-2-70b", name: "Llama 2 70B" }, { id: "llama-2-13b", name: "Llama 2 13B" }, { id: "cohere-command", name: "Command" }, { id: "mistral-7b", name: "Mistral 7B" },];const suggestions = [ "What are the latest trends in AI?", "How does machine learning work?", "Explain quantum computing", "Best practices for React development", "Tell me about TypeScript benefits", "How to optimize database queries?", "What is the difference between SQL and NoSQL?", "Explain cloud computing basics",];const mockResponses = [ "That's a great question! Let me help you understand this concept better. The key thing to remember is that proper implementation requires careful consideration of the underlying principles and best practices in the field.", "I'd be happy to explain this topic in detail. From my understanding, there are several important factors to consider when approaching this problem. Let me break it down step by step for you.", "This is an interesting topic that comes up frequently. The solution typically involves understanding the core concepts and applying them in the right context. Here's what I recommend...", "Great choice of topic! This is something that many developers encounter. The approach I'd suggest is to start with the fundamentals and then build up to more complex scenarios.", "That's definitely worth exploring. From what I can see, the best way to handle this is to consider both the theoretical aspects and practical implementation details.",];const Example = () => { const [model, setModel] = useState<string>(models[0].id); const [text, setText] = useState<string>(""); const [useWebSearch, setUseWebSearch] = useState<boolean>(false); const [useMicrophone, setUseMicrophone] = useState<boolean>(false); const [status, setStatus] = useState< "submitted" | "streaming" | "ready" | "error" >("ready"); const [messages, setMessages] = useState<MessageType[]>(initialMessages); const [streamingMessageId, setStreamingMessageId] = useState<string | null>( null ); const streamResponse = useCallback( async (messageId: string, content: string) => { setStatus("streaming"); setStreamingMessageId(messageId); const words = content.split(" "); let currentContent = ""; for (let i = 0; i < words.length; i++) { currentContent += (i > 0 ? " " : "") + words[i]; setMessages((prev) => prev.map((msg) => { if (msg.versions.some((v) => v.id === messageId)) { return { ...msg, versions: msg.versions.map((v) => v.id === messageId ? 
{ ...v, content: currentContent } : v ), }; } return msg; }) ); await new Promise((resolve) => setTimeout(resolve, Math.random() * 100 + 50) ); } setStatus("ready"); setStreamingMessageId(null); }, [] ); const addUserMessage = useCallback( (content: string) => { const userMessage: MessageType = { key: `user-${Date.now()}`, from: "user", versions: [ { id: `user-${Date.now()}`, content, }, ], avatar: "https://github.com/haydenbleasel.png", name: "User", }; setMessages((prev) => [...prev, userMessage]); setTimeout(() => { const assistantMessageId = `assistant-${Date.now()}`; const randomResponse = mockResponses[Math.floor(Math.random() * mockResponses.length)]; const assistantMessage: MessageType = { key: `assistant-${Date.now()}`, from: "assistant", versions: [ { id: assistantMessageId, content: "", }, ], avatar: "https://github.com/openai.png", name: "Assistant", }; setMessages((prev) => [...prev, assistantMessage]); streamResponse(assistantMessageId, randomResponse); }, 500); }, [streamResponse] ); const handleSubmit = (message: PromptInputMessage) => { const hasText = Boolean(message.text); const hasAttachments = Boolean(message.files?.length); if (!(hasText || hasAttachments)) { return; } setStatus("submitted"); if (message.files?.length) { toast.success("Files attached", { description: `${message.files.length} file(s) attached to message`, }); } addUserMessage(message.text || "Sent with attachments"); setText(""); }; const handleSuggestionClick = (suggestion: string) => { setStatus("submitted"); addUserMessage(suggestion); }; return ( <div className="relative flex size-full flex-col divide-y overflow-hidden"> <Conversation> <ConversationContent> {messages.map(({ versions, ...message }) => ( <Branch defaultBranch={0} key={message.key}> <BranchMessages> {versions.map((version) => ( <Message from={message.from} key={`${message.key}-${version.id}`} > <div> {message.sources?.length && ( <Sources> <SourcesTrigger count={message.sources.length} /> <SourcesContent> {message.sources.map((source) => ( <Source href={source.href} key={source.href} title={source.title} /> ))} </SourcesContent> </Sources> )} {message.reasoning && ( <Reasoning duration={message.reasoning.duration}> <ReasoningTrigger /> <ReasoningContent> {message.reasoning.content} </ReasoningContent> </Reasoning> )} <MessageContent> <Response>{version.content}</Response> </MessageContent> </div> <MessageAvatar name={message.name} src={message.avatar} /> </Message> ))} </BranchMessages> {versions.length > 1 && ( <BranchSelector from={message.from}> <BranchPrevious /> <BranchPage /> <BranchNext /> </BranchSelector> )} </Branch> ))} </ConversationContent> <ConversationScrollButton /> </Conversation> <div className="grid shrink-0 gap-4 pt-4"> <Suggestions className="px-4"> {suggestions.map((suggestion) => ( <Suggestion key={suggestion} onClick={() => handleSuggestionClick(suggestion)} suggestion={suggestion} /> ))} </Suggestions> <div className="w-full px-4 pb-4"> <PromptInput globalDrop multiple onSubmit={handleSubmit}> <PromptInputBody> <PromptInputAttachments> {(attachment) => <PromptInputAttachment data={attachment} />} </PromptInputAttachments> <PromptInputTextarea onChange={(event) => setText(event.target.value)} value={text} /> </PromptInputBody> <PromptInputFooter> <PromptInputTools> <PromptInputActionMenu> <PromptInputActionMenuTrigger /> <PromptInputActionMenuContent> <PromptInputActionAddAttachments /> </PromptInputActionMenuContent> </PromptInputActionMenu> <PromptInputButton onClick={() => setUseMicrophone(!useMicrophone)} 
variant={useMicrophone ? "default" : "ghost"} > <MicIcon size={16} /> <span className="sr-only">Microphone</span> </PromptInputButton> <PromptInputButton onClick={() => setUseWebSearch(!useWebSearch)} variant={useWebSearch ? "default" : "ghost"} > <GlobeIcon size={16} /> <span>Search</span> </PromptInputButton> <PromptInputModelSelect onValueChange={setModel} value={model}> <PromptInputModelSelectTrigger> <PromptInputModelSelectValue /> </PromptInputModelSelectTrigger> <PromptInputModelSelectContent> {models.map((model) => ( <PromptInputModelSelectItem key={model.id} value={model.id} > {model.name} </PromptInputModelSelectItem> ))} </PromptInputModelSelectContent> </PromptInputModelSelect> </PromptInputTools> <PromptInputSubmit disabled={!(text.trim() || status) || status === "streaming"} status={status} /> </PromptInputFooter> </PromptInput> </div> </div> </div> );};export default Example;Tutorial
Let's walk through how to build a chatbot using AI Elements and the AI SDK. Our example will include reasoning, web search with citations, and a model picker.
Setup
First, set up a new Next.js project and cd into it by running the following command (make sure you choose Tailwind CSS during project setup):
npx create-next-app@latest ai-chatbot && cd ai-chatbot
Run the following command to install AI Elements. This will also set up shadcn/ui if you haven't already configured it:
npx ai-elements@latest
Now, install the AI SDK dependencies:
npm i ai @ai-sdk/react zod
To use the providers, let's configure an AI Gateway API key. Create a .env.local file in your root directory, generate an API key from the AI Gateway section of your Vercel dashboard, and paste it into .env.local.
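By default the AI SDK reads the key from the AI_GATEWAY_API_KEY environment variable, so .env.local only needs a single line along these lines (the value shown is a placeholder for your own key):
AI_GATEWAY_API_KEY=your-ai-gateway-api-key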
We're now ready to start building our app!
Client
In your app/page.tsx, replace the code with the file below.
Here, we use the PromptInput component with its compound components to build a rich input experience with file attachments, a model picker, and an action menu. The input component uses the PromptInputMessage type to handle both text and file attachments.
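For reference, the object your onSubmit handler receives looks roughly like this. This is a simplified sketch, not the exact definition; the real PromptInputMessage type is exported from '@/components/ai-elements/prompt-input':
import type { FileUIPart } from 'ai';

// Approximate shape of the message handed to onSubmit (sketch only).
type PromptInputMessage = {
  text?: string; // current textarea contents, if any
  files?: FileUIPart[]; // attachments added via the action menu or drag-and-drop
};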
The whole chat lives in a Conversation. We switch on each part's type in message.parts and render it with the matching component: Message for text, Reasoning for reasoning, and Sources for citations. We also use the status value from useChat to mark reasoning as streaming and to show a Loader while a response is pending.
'use client';
import {
Conversation,
ConversationContent,
ConversationScrollButton,
} from '@/components/ai-elements/conversation';
import { Message, MessageContent } from '@/components/ai-elements/message';
import {
PromptInput,
PromptInputActionAddAttachments,
PromptInputActionMenu,
PromptInputActionMenuContent,
PromptInputActionMenuTrigger,
PromptInputAttachment,
PromptInputAttachments,
PromptInputBody,
PromptInputButton,
type PromptInputMessage,
PromptInputModelSelect,
PromptInputModelSelectContent,
PromptInputModelSelectItem,
PromptInputModelSelectTrigger,
PromptInputModelSelectValue,
PromptInputSubmit,
PromptInputTextarea,
PromptInputFooter,
PromptInputTools,
} from '@/components/ai-elements/prompt-input';
import { Action, Actions } from '@/components/ai-elements/actions';
import { Fragment, useState } from 'react';
import { useChat } from '@ai-sdk/react';
import { Response } from '@/components/ai-elements/response';
import { CopyIcon, GlobeIcon, RefreshCcwIcon } from 'lucide-react';
import {
Source,
Sources,
SourcesContent,
SourcesTrigger,
} from '@/components/ai-elements/sources';
import {
Reasoning,
ReasoningContent,
ReasoningTrigger,
} from '@/components/ai-elements/reasoning';
import { Loader } from '@/components/ai-elements/loader';
const models = [
{
name: 'GPT 4o',
value: 'openai/gpt-4o',
},
{
name: 'Deepseek R1',
value: 'deepseek/deepseek-r1',
},
];
const ChatBotDemo = () => {
const [input, setInput] = useState('');
const [model, setModel] = useState<string>(models[0].value);
const [webSearch, setWebSearch] = useState(false);
const { messages, sendMessage, status, regenerate } = useChat();
const handleSubmit = (message: PromptInputMessage) => {
const hasText = Boolean(message.text);
const hasAttachments = Boolean(message.files?.length);
if (!(hasText || hasAttachments)) {
return;
}
sendMessage(
{
text: message.text || 'Sent with attachments',
files: message.files
},
{
body: {
model: model,
webSearch: webSearch,
},
},
);
setInput('');
};
return (
<div className="max-w-4xl mx-auto p-6 relative size-full h-screen">
<div className="flex flex-col h-full">
<Conversation className="h-full">
<ConversationContent>
{messages.map((message) => (
<div key={message.id}>
{message.role === 'assistant' && message.parts.filter((part) => part.type === 'source-url').length > 0 && (
<Sources>
<SourcesTrigger
count={
message.parts.filter(
(part) => part.type === 'source-url',
).length
}
/>
<SourcesContent>
  {message.parts
    .filter((part) => part.type === 'source-url')
    .map((part, i) => (
      <Source
        key={`${message.id}-${i}`}
        href={part.url}
        title={part.url}
      />
    ))}
</SourcesContent>
</Sources>
)}
{message.parts.map((part, i) => {
switch (part.type) {
case 'text':
return (
<Fragment key={`${message.id}-${i}`}>
<Message from={message.role}>
<MessageContent>
<Response>
{part.text}
</Response>
</MessageContent>
</Message>
{message.role === 'assistant' && message.id === messages.at(-1)?.id && (
<Actions className="mt-2">
<Action
onClick={() => regenerate()}
label="Retry"
>
<RefreshCcwIcon className="size-3" />
</Action>
<Action
onClick={() =>
navigator.clipboard.writeText(part.text)
}
label="Copy"
>
<CopyIcon className="size-3" />
</Action>
</Actions>
)}
</Fragment>
);
case 'reasoning':
return (
<Reasoning
key={`${message.id}-${i}`}
className="w-full"
isStreaming={status === 'streaming' && i === message.parts.length - 1 && message.id === messages.at(-1)?.id}
>
<ReasoningTrigger />
<ReasoningContent>{part.text}</ReasoningContent>
</Reasoning>
);
default:
return null;
}
})}
</div>
))}
{status === 'submitted' && <Loader />}
</ConversationContent>
<ConversationScrollButton />
</Conversation>
<PromptInput onSubmit={handleSubmit} className="mt-4" globalDrop multiple>
<PromptInputBody>
<PromptInputAttachments>
{(attachment) => <PromptInputAttachment data={attachment} />}
</PromptInputAttachments>
<PromptInputTextarea
onChange={(e) => setInput(e.target.value)}
value={input}
/>
</PromptInputBody>
<PromptInputFooter>
<PromptInputTools>
<PromptInputActionMenu>
<PromptInputActionMenuTrigger />
<PromptInputActionMenuContent>
<PromptInputActionAddAttachments />
</PromptInputActionMenuContent>
</PromptInputActionMenu>
<PromptInputButton
variant={webSearch ? 'default' : 'ghost'}
onClick={() => setWebSearch(!webSearch)}
>
<GlobeIcon size={16} />
<span>Search</span>
</PromptInputButton>
<PromptInputModelSelect
onValueChange={(value) => {
setModel(value);
}}
value={model}
>
<PromptInputModelSelectTrigger>
<PromptInputModelSelectValue />
</PromptInputModelSelectTrigger>
<PromptInputModelSelectContent>
{models.map((model) => (
<PromptInputModelSelectItem key={model.value} value={model.value}>
{model.name}
</PromptInputModelSelectItem>
))}
</PromptInputModelSelectContent>
</PromptInputModelSelect>
</PromptInputTools>
<PromptInputSubmit disabled={!input && !status} status={status} />
</PromptInputFooter>
</PromptInput>
</div>
</div>
);
};
export default ChatBotDemo;
Server
Create a new route handler at app/api/chat/route.ts and paste in the following code. We use perplexity/sonar for web search because that model returns search results (sources) by default. We also pass sendSources and sendReasoning to toUIMessageStreamResponse so that sources and reasoning arrive as message parts on the frontend. The handler also accepts the file attachments sent from the client.
import { streamText, UIMessage, convertToModelMessages } from 'ai';
// Allow streaming responses up to 30 seconds
export const maxDuration = 30;
export async function POST(req: Request) {
const {
messages,
model,
webSearch,
}: {
messages: UIMessage[];
model: string;
webSearch: boolean;
} = await req.json();
const result = streamText({
model: webSearch ? 'perplexity/sonar' : model,
messages: convertToModelMessages(messages),
system:
'You are a helpful assistant that can answer questions and help with tasks',
});
// send sources and reasoning back to the client
return result.toUIMessageStreamResponse({
sendSources: true,
sendReasoning: true,
});
}
You now have a working chatbot app with file attachment support! The chatbot can handle both text and file inputs through the action menu. Feel free to explore other components like Tool or Task to extend your app, or view the other examples.
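For instance, if you later register tools in your streamText call, the resulting tool parts can be rendered with the Tool element. The snippet below is only a sketch built on assumptions beyond this tutorial: it presumes you have added the Tool component via the AI Elements CLI and defined a hypothetical getWeather tool on the server, and it would slot into the message.parts switch alongside the 'text' and 'reasoning' cases.
import {
  Tool,
  ToolContent,
  ToolHeader,
  ToolInput,
  ToolOutput,
} from '@/components/ai-elements/tool';

// Extra case for the message.parts switch in app/page.tsx.
// 'tool-getWeather' is a hypothetical tool name used for illustration only.
case 'tool-getWeather':
  return (
    <Tool key={`${message.id}-${i}`}>
      <ToolHeader type={part.type} state={part.state} />
      <ToolContent>
        <ToolInput input={part.input} />
        <ToolOutput output={part.output} errorText={part.errorText} />
      </ToolContent>
    </Tool>
  );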