AI Integrations
The AI UI components are built for AI-first React apps. Paired with the real-time Chat API, they make it easier to integrate and render responses from LLM providers such as OpenAI (ChatGPT), Google Gemini, and Anthropic, or from a custom backend. You get rich, out-of-the-box components for markdown, syntax-highlighted code, tables, thinking indicators, charts, and more.
This library includes:
- StreamingMessage - renders text/markdown/code in real time with a typewriter animation.
- AIMessageComposer - a full prompt composer with attachments and speech input.
- AIStateIndicator - keeps users informed while the AI generates a response.
- SpeechToTextButton - records voice input and streams the transcript into your UI.
Best Practices
- Start with the default AI components before building custom UX.
- Keep AI state messaging clear and aligned with backend state names.
- Rate-limit streaming updates to avoid UI jank.
- Provide accessible labels for AI controls and speech input.
- Ensure model selection options are validated server-side.
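The "rate-limit streaming updates" advice above can be sketched generically: buffer incoming stream chunks and commit them to state only when a batch has accumulated, so each token does not trigger its own re-render. This is an illustration, not a library API; `createChunkBatcher` is hypothetical:

```typescript
// Hypothetical helper: buffer streamed chunks and flush them to state
// (via `flush`) only once at least `minChars` characters have arrived.
function createChunkBatcher(flush: (text: string) => void, minChars = 8) {
  let buffer = "";
  return {
    push(chunk: string) {
      buffer += chunk;
      if (buffer.length >= minChars) {
        flush(buffer);
        buffer = "";
      }
    },
    // Flush whatever remains when the stream ends.
    end() {
      if (buffer.length > 0) {
        flush(buffer);
        buffer = "";
      }
    },
  };
}
```

In a React app, `flush` would typically append to the message state that `StreamingMessage` renders; a time-based throttle works equally well.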
We’ll keep iterating and adding components. If there’s something you use every day and want added, open an issue.
You can find a complete ChatGPT clone sample that uses these components here.
Installation
The @stream-io/chat-react-ai SDK is available on NPM.
To install it and its peer dependencies, run:
yarn add @stream-io/chat-react-ai
pnpm add @stream-io/chat-react-ai
npm i @stream-io/chat-react-ai

Components
All components are designed to work with the existing React Chat SDK. The developer guide walks through AI integrations with Stream.
StreamingMessage
StreamingMessage renders markdown efficiently, with syntax highlighting for major languages. It supports common markdown elements like tables, inline code, headings, and lists.
Under the hood, it uses a letter-by-letter typewriter animation with a character queue.
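Conceptually, the queue drains a fixed number of characters per tick, so the rendered text is a growing prefix of the full message. A pure sketch of that idea (illustration only, not the library's implementation):

```typescript
// Illustrative only: compute the successive prefixes a typewriter
// animation would render, revealing `lettersPerTick` characters per tick.
function typewriterFrames(text: string, lettersPerTick: number): string[] {
  const frames: string[] = [];
  for (
    let shown = lettersPerTick;
    shown < text.length + lettersPerTick;
    shown += lettersPerTick
  ) {
    frames.push(text.slice(0, Math.min(shown, text.length)));
  }
  return frames;
}
```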
| Name | Type | Required | Description |
|---|---|---|---|
| text | string | yes | Markdown text to render. |
| letterIntervalMs | number | no | Interval between typewriter ticks. Defaults to 30. |
| renderingLetterCount | number | no | Letters rendered per tick. Defaults to 2. |
If you need a custom version, combine AIMarkdown with the useMessageTextStreaming hook.
Example
const markdownText = `
# Heading
some text
## Another heading
`;
<StreamingMessage
text={markdownText}
letterIntervalMs={30} // every 30ms
renderingLetterCount={3} // render 3 letters at a time
/>;

AIStateIndicator
AIStateIndicator represents LLM states (for example, “Thinking” or “Checking External Sources”) based on the states you define on your backend. You can provide custom text, or let it pick a built-in phrase. To force a new phrase, change the key prop to reset its internal state.
| Name | Type | Required | Description |
|---|---|---|---|
| text | string | no | Text displayed inside the component. |
Example
{(aiState === "thinking" || aiState === "generating") && (
  <AIStateIndicator key={resetKey} />
)}
// or
{(aiState === "thinking" || aiState === "generating") && (
  <AIStateIndicator text="Thinking of a proper response..." />
)}

AIMessageComposer
AIMessageComposer is a full message composer with text input, file attachments, speech-to-text, and model selection.
| Name | Type | Required | Description |
|---|---|---|---|
| resetAttachmentsOnSelect | boolean | no | Resets file input after selection. Defaults to true. |
| nameMapping | { message?: string; attachments?: string } | no | Maps custom input names to internal state. Default names: message and attachments. |
| onSubmit | (e: FormEvent<HTMLFormElement>) => void | no | Form submission handler. |
| onChange | (e: FormEvent<HTMLFormElement>) => void | no | Form change handler. |
| ...HTMLFormElement props | | no | Supports all standard HTML form props. |
Sub-components
- AIMessageComposer.FileInput - File input button. Supports multiple file selection.
- AIMessageComposer.TextInput - Text input field, synced with composer state.
- AIMessageComposer.SpeechToTextButton - Toggles speech-to-text input via the Web Speech API.
- AIMessageComposer.SubmitButton - Submits the prompt.
- AIMessageComposer.ModelSelect - Model selector dropdown. Customizable via options.
- AIMessageComposer.AttachmentPreview - Preview container for attachments.
- AIMessageComposer.AttachmentPreview.Item - Preview item; renders images differently for image/* types.
Example
import { FormEvent } from "react";
import { AIMessageComposer } from "@stream-io/chat-react-ai";

function ChatComposer({ attachments }: ChatComposerProps) {
  const handleSubmit = (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    const formData = new FormData(e.currentTarget);
    // Handle submission
  };

  return (
    <AIMessageComposer onSubmit={handleSubmit}>
      <AIMessageComposer.FileInput name="attachments" />
      <AIMessageComposer.TextInput name="message" />
      <AIMessageComposer.SpeechToTextButton />
      <AIMessageComposer.ModelSelect name="model" />
      <AIMessageComposer.SubmitButton />
      <AIMessageComposer.AttachmentPreview>
        {attachments.map((attachment, index) => (
          <AIMessageComposer.AttachmentPreview.Item key={index} {...attachment} />
        ))}
      </AIMessageComposer.AttachmentPreview>
    </AIMessageComposer>
  );
}

> [!NOTE]
> Some inputs ship with default name attributes (TextInput → "message", FileInput → "attachments"). You can override these names and map them back with nameMapping.
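Building on the note above, a hedged sketch of overriding the default input names and mapping them back with nameMapping (prop shape taken from the props table; the custom names "prompt" and "files" are arbitrary choices for illustration):

```tsx
<AIMessageComposer nameMapping={{ message: "prompt", attachments: "files" }}>
  <AIMessageComposer.TextInput name="prompt" />
  <AIMessageComposer.FileInput name="files" />
  <AIMessageComposer.SubmitButton />
</AIMessageComposer>
```

This keeps the composer's internal state in sync even though the rendered form fields no longer use the default message and attachments names.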
SpeechToTextButton
SpeechToTextButton handles voice input via the Web Speech API, with a built-in mic icon and listening state UI.
| Name | Type | Required | Description |
|---|---|---|---|
| options | UseSpeechToTextOptions | no | Speech recognition options (for example lang, continuous, interimResults). |
| ...HTMLButtonElement props | | no | Supports standard HTML button props. |
Example
import { SpeechToTextButton } from "@stream-io/chat-react-ai";

function VoiceInputButton() {
  return (
    <SpeechToTextButton
      options={{
        lang: "en-US",
        continuous: false,
        interimResults: true,
      }}
    />
  );
}

> [!NOTE]
> Inside AIMessageComposer, the button updates the composer’s text input automatically. Standalone usage can be controlled via speechToTextOptions.onTranscript.
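For standalone usage, the note above mentions an onTranscript callback. Assuming it is part of UseSpeechToTextOptions (verify the exact name and signature against the exported type), a sketch might look like:

```tsx
// Assumption: onTranscript receives the recognized text. setDraft is
// illustrative app state, not part of the library.
<SpeechToTextButton
  options={{
    lang: "en-US",
    interimResults: true,
    onTranscript: (transcript: string) => setDraft(transcript),
  }}
/>
```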
AI States
The SDK supports a set of AI states that represent progress during response generation. The current list of states is here: https://github.com/GetStream/stream-chat-react/blob/0577ffdbd2abf11b6b99a2e70caa938ea19635e9/src/components/AIStateIndicator/hooks/useAIState.ts#L7.
Provided below is a brief explanation of what each state means:
- AI_STATE_THINKING - the AI is thinking and trying to internally craft an answer to your query.
- AI_STATE_GENERATING - the actual response to your query is being generated.
- AI_STATE_EXTERNAL_SOURCES - the AI is checking external resources for information.
- AI_STATE_ERROR - the AI has reached an error state while trying to answer your query.
- AI_STATE_IDLE - the AI is in an idle state and is not doing anything.
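Assuming the exported constants resolve to string values like the ones below (an assumption; check the linked useAIState.ts for the real values), a simple mapping from AI state to indicator copy might look like:

```typescript
// Hypothetical string values for illustration; verify against useAIState.ts.
type AIState =
  | "AI_STATE_THINKING"
  | "AI_STATE_GENERATING"
  | "AI_STATE_EXTERNAL_SOURCES"
  | "AI_STATE_ERROR"
  | "AI_STATE_IDLE";

// Returns the text to show in an indicator, or null when nothing
// should be displayed.
function indicatorText(state: AIState): string | null {
  switch (state) {
    case "AI_STATE_THINKING":
      return "Thinking...";
    case "AI_STATE_EXTERNAL_SOURCES":
      return "Checking external sources...";
    case "AI_STATE_ERROR":
      return "Something went wrong. Please try again.";
    // While generating, the streaming message itself is the feedback;
    // in the idle state there is nothing to show.
    case "AI_STATE_GENERATING":
    case "AI_STATE_IDLE":
    default:
      return null;
  }
}
```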
If your own implementation uses different states than these, feel free to override our default components and their behavior.
For more in-depth guidance, refer to our React Assistant tutorial. We strongly encourage reading it before integrating AI into your application.