# AI Integrations

The AI UI components are built for AI-first React apps. Paired with the real-time [Chat API](https://getstream.io/chat/), they make it easier to integrate and render responses from LLM providers such as OpenAI, Google Gemini, and Anthropic, or from a custom backend. You get rich, out-of-the-box components for markdown, syntax-highlighted code, tables, thinking indicators, charts, and more.

This library includes:

- `StreamingMessage` - renders text/markdown/code in real time with a typewriter animation.
- `AIMessageComposer` - a full prompt composer with attachments and speech input.
- `AIStateIndicator` - keeps users informed while the AI generates a response.
- `SpeechToTextButton` - records voice input and streams the transcript into your UI.

## Best Practices

- Start with the default AI components before building custom UX.
- Keep AI state messaging clear and aligned with backend state names.
- Rate-limit streaming updates to avoid UI jank.
- Provide accessible labels for AI controls and speech input.
- Ensure model selection options are validated server-side.
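
For the rate-limiting point above, one common approach is to buffer incoming token chunks and flush them to React state on a fixed interval, so the UI re-renders at a bounded rate instead of once per token. A minimal sketch in TypeScript (the `ChunkBuffer` class is illustrative, not part of the SDK):

```typescript
// Buffers streamed text chunks so the UI can consume them in batches.
class ChunkBuffer {
  private buffer = "";

  // Called for every chunk arriving from the stream.
  push(chunk: string): void {
    this.buffer += chunk;
  }

  // Called on a timer (e.g. every 50ms via setInterval); returns everything
  // buffered since the last flush, or null when there is nothing new.
  flush(): string | null {
    if (this.buffer.length === 0) return null;
    const out = this.buffer;
    this.buffer = "";
    return out;
  }
}

const buf = new ChunkBuffer();
buf.push("Hel");
buf.push("lo");
console.log(buf.flush()); // "Hello"
console.log(buf.flush()); // null
```

Wiring `flush()` to a `setInterval` (and the result to a `setState` call) keeps re-renders at a fixed cadence regardless of how fast tokens arrive.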

We’ll keep iterating and adding components. If there’s something you use every day and want added, open an issue.

You can find a complete ChatGPT clone sample that uses these components [here](https://github.com/GetStream/chat-ai-samples/tree/main/react-native).

## Installation

The `@stream-io/chat-react-ai` SDK is available on NPM.

To install it and its peer dependencies, run:

<Tabs>

```bash label="yarn"
yarn add @stream-io/chat-react-ai
```

```bash label="pnpm"
pnpm add @stream-io/chat-react-ai
```

```bash label="npm"
npm i @stream-io/chat-react-ai
```

</Tabs>

## Components

All components are designed to work with the existing [React Chat SDK](https://getstream.io/chat/react-chat/tutorial/). The [developer guide](https://getstream.io/chat/solutions/ai-integration/) walks through AI integrations with Stream.

### `StreamingMessage`

`StreamingMessage` renders markdown efficiently, with syntax highlighting for major languages. It supports common markdown elements like tables, inline code, headings, and lists.

Under the hood, it uses a letter-by-letter typewriter animation with a character queue.

| Name                   | Type     | Required | Description                                          |
| ---------------------- | -------- | -------- | ---------------------------------------------------- |
| `text`                 | `string` | yes      | Markdown text to render.                             |
| `letterIntervalMs`     | `number` | no       | Interval between typewriter ticks. Defaults to `30`. |
| `renderingLetterCount` | `number` | no       | Letters rendered per tick. Defaults to `2`.          |
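
Conceptually, the character queue works like this (a simplified sketch, not the SDK's internal code; `lettersPerTick` mirrors `renderingLetterCount`):

```typescript
// Incoming text goes into a queue; each animation tick moves a fixed
// number of characters from the queue into the rendered output.
type TypewriterState = { queue: string; rendered: string };

function enqueue(state: TypewriterState, text: string): TypewriterState {
  return { ...state, queue: state.queue + text };
}

// One tick: move up to `lettersPerTick` characters to the output.
function tick(state: TypewriterState, lettersPerTick: number): TypewriterState {
  return {
    queue: state.queue.slice(lettersPerTick),
    rendered: state.rendered + state.queue.slice(0, lettersPerTick),
  };
}

let s: TypewriterState = { queue: "", rendered: "" };
s = enqueue(s, "Hello");
s = tick(s, 2); // rendered: "He", queue: "llo"
s = tick(s, 2); // rendered: "Hell", queue: "o"
```

Running `tick` on a `letterIntervalMs` timer produces the typewriter effect while new chunks can be enqueued at any time.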

If you need a custom version, combine `AIMarkdown` with the `useMessageTextStreaming` hook.

#### Example

````tsx
const markdownText = `
# Heading

some text

## Another heading
`;

<StreamingMessage
  text={markdownText}
  letterIntervalMs={30} // every 30ms
  renderingLetterCount={3} // render 3 letters at a time
/>;
````

### `AIStateIndicator`

`AIStateIndicator` represents LLM states (for example, “Thinking” or “Checking External Sources”) based on the states you define on your backend. You can provide custom text, or let it pick a built-in phrase. To force a new phrase, change the `key` prop to reset its internal state.

| Name   | Type     | Required | Description                          |
| ------ | -------- | -------- | ------------------------------------ |
| `text` | `string` | no       | Text displayed inside the component. |

#### Example

```tsx
{(aiState === "thinking" || aiState === "generating") && (
  <AIStateIndicator key={resetKey} />
)}

// or

{(aiState === "thinking" || aiState === "generating") && (
  <AIStateIndicator text="Thinking of a proper response..." />
)}
```

### `AIMessageComposer`

`AIMessageComposer` is a full message composer with text input, file attachments, speech-to-text, and model selection.

| Name                       | Type                                         | Required | Description                                                                            |
| -------------------------- | -------------------------------------------- | -------- | -------------------------------------------------------------------------------------- |
| `resetAttachmentsOnSelect` | `boolean`                                    | no       | Resets file input after selection. Defaults to `true`.                                 |
| `nameMapping`              | `{ message?: string; attachments?: string }` | no       | Maps custom input names to internal state. Default names: `message` and `attachments`. |
| `onSubmit`                 | `(e: FormEvent<HTMLFormElement>) => void`    | no       | Form submission handler.                                                               |
| `onChange`                 | `(e: FormEvent<HTMLFormElement>) => void`    | no       | Form change handler.                                                                   |
|                            | `...HTMLFormElement props`                   | no       | Supports all standard HTML form props.                                                 |

#### Sub-components

- **`AIMessageComposer.FileInput`** - File input button. Supports multiple file selection.
- **`AIMessageComposer.TextInput`** - Text input field, synced with composer state.
- **`AIMessageComposer.SpeechToTextButton`** - Toggles speech-to-text input via Web Speech API.
- **`AIMessageComposer.SubmitButton`** - Submits the prompt.
- **`AIMessageComposer.ModelSelect`** - Model selector dropdown. Customizable via `options`.
- **`AIMessageComposer.AttachmentPreview`** - Preview container for attachments.
- **`AIMessageComposer.AttachmentPreview.Item`** - Preview item; renders images differently for `image/*` types.

#### Example

```tsx
import type { FormEvent } from "react";

import { AIMessageComposer } from "@stream-io/chat-react-ai";

function ChatComposer({ attachments }: ChatComposerProps) {
  const handleSubmit = (e: FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    const formData = new FormData(e.currentTarget);
    // Handle submission
  };

  return (
    <AIMessageComposer onSubmit={handleSubmit}>
      <AIMessageComposer.FileInput name="attachments" />
      <AIMessageComposer.TextInput name="message" />
      <AIMessageComposer.SpeechToTextButton />
      <AIMessageComposer.ModelSelect name="model" />
      <AIMessageComposer.SubmitButton />
      <AIMessageComposer.AttachmentPreview>
        {attachments.map((attachment, index) => (
          <AIMessageComposer.AttachmentPreview.Item key={index} {...attachment} />
        ))}
      </AIMessageComposer.AttachmentPreview>
    </AIMessageComposer>
  );
}
```

> [!NOTE]
> Some inputs ship with default `name` attributes (`TextInput` → `"message"`, `FileInput` → `"attachments"`). You can override these names and map them back with `nameMapping`.
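
The mapping logic can be pictured with a small helper (this function is illustrative, not part of the SDK; it just shows which form field names the composer ends up reading):

```typescript
// `nameMapping` tells the composer which form field names to read the
// message text and attachments from when you override the defaults.
type NameMapping = { message?: string; attachments?: string };

function resolveFieldNames(mapping: NameMapping = {}): {
  message: string;
  attachments: string;
} {
  return {
    message: mapping.message ?? "message", // default field names
    attachments: mapping.attachments ?? "attachments",
  };
}

resolveFieldNames({ message: "prompt" });
// → { message: "prompt", attachments: "attachments" }
```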

### `SpeechToTextButton`

`SpeechToTextButton` handles voice input via the Web Speech API, with a built-in mic icon and listening state UI.

| Name      | Type                         | Required | Description                                                                      |
| --------- | ---------------------------- | -------- | -------------------------------------------------------------------------------- |
| `options` | `UseSpeechToTextOptions`     | no       | Speech recognition options (for example `lang`, `continuous`, `interimResults`). |
|           | `...HTMLButtonElement props` | no       | Supports standard HTML button props.                                             |

#### Example

```tsx
import { SpeechToTextButton } from "@stream-io/chat-react-ai";

function VoiceInputButton() {
  return (
    <SpeechToTextButton
      options={{
        lang: "en-US",
        continuous: false,
        interimResults: true,
      }}
    />
  );
}
```

> [!NOTE]
> Inside `AIMessageComposer`, the button updates the composer’s text input automatically. Standalone usage can be controlled via the `onTranscript` callback in `options`.
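
When `interimResults` is enabled, the Web Speech API delivers a mix of final and provisional results. One way to assemble them into a display string (a hedged sketch; `RecognitionChunk` is a simplified stand-in for `SpeechRecognitionResult`, not an SDK type):

```typescript
// Final chunks are committed permanently; the latest interim chunk is
// shown provisionally and replaced as recognition refines it.
type RecognitionChunk = { transcript: string; isFinal: boolean };

function buildTranscript(chunks: RecognitionChunk[]): string {
  const committed = chunks
    .filter((c) => c.isFinal)
    .map((c) => c.transcript)
    .join(" ");
  const interim = [...chunks].reverse().find((c) => !c.isFinal);
  return [committed, interim?.transcript ?? ""].filter(Boolean).join(" ");
}

buildTranscript([
  { transcript: "hello", isFinal: true },
  { transcript: "wor", isFinal: false },
]);
// → "hello wor"
```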

### AI States

The SDK supports a set of AI states that represent progress during response generation. The current list of states is here: [https://github.com/GetStream/stream-chat-react/blob/0577ffdbd2abf11b6b99a2e70caa938ea19635e9/src/components/AIStateIndicator/hooks/useAIState.ts#L7](https://github.com/GetStream/stream-chat-react/blob/0577ffdbd2abf11b6b99a2e70caa938ea19635e9/src/components/AIStateIndicator/hooks/useAIState.ts#L7).

Provided below is a brief explanation of what each state means:

- `AI_STATE_THINKING` - the AI is processing your query and working out an answer internally.
- `AI_STATE_GENERATING` - the response to your query is being generated.
- `AI_STATE_EXTERNAL_SOURCES` - the AI is checking external sources for information.
- `AI_STATE_ERROR` - the AI hit an error while trying to answer your query.
- `AI_STATE_IDLE` - the AI is idle and not doing anything.

If your implementation uses different states, feel free to override our default components and their behavior.
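
If your backend emits its own state names, a small mapping layer can translate them into display text before rendering `AIStateIndicator` with the `text` prop. A sketch (the label strings and the idea of returning `null` to hide the indicator are illustrative choices, not SDK behavior):

```typescript
// Maps SDK AI states to user-facing labels.
const stateLabels: Record<string, string> = {
  AI_STATE_THINKING: "Thinking...",
  AI_STATE_GENERATING: "Generating a response...",
  AI_STATE_EXTERNAL_SOURCES: "Checking external sources...",
  AI_STATE_ERROR: "Something went wrong",
};

function labelForState(state: string): string | null {
  // Return null for idle/unknown states so the indicator can be hidden.
  return stateLabels[state] ?? null;
}
```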

Please refer to [our React Assistant tutorial](https://getstream.io/blog/react-assistant/) for more in-depth guidance; we highly recommend reading it before integrating AI into your application.

### Theming


---

This page was last updated at 2026-04-21T09:53:41.121Z.

For the most recent version of this documentation, visit [https://getstream.io/chat/docs/sdk/react/v13/guides/ai-integrations/](https://getstream.io/chat/docs/sdk/react/v13/guides/ai-integrations/).