AI Integrations

The AI UI components are built for AI-first React Native apps. Paired with our real-time Chat API, they make it easier to render LLM responses (ChatGPT, Gemini, Anthropic, or your backend) with rich UI: markdown, syntax highlighting, tables, thinking indicators, charts, and more.

Core components include:

  • StreamingMessageView - renders text/markdown/code in real time with a typewriter effect
  • ComposerView - a fully featured prompt composer with attachments and speech input
  • SpeechToTextButton - a reusable button that records voice input and streams the recognized transcript back into your UI
  • AITypingIndicatorView - a component that can display different states of the LLM (thinking, checking external sources, etc.)

Best Practices

  • Keep AI components wrapped in StreamTheme for consistent styling.
  • Stream tokens progressively to StreamingMessageView to avoid large re-renders (see the sketch after this list).
  • Gate speech-to-text behind explicit permissions and user consent.
  • Use a single source of truth for LLM state to drive typing indicators.
  • Test markdown rendering with long responses and code blocks.
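
For the token-streaming practice above, here is a minimal sketch. It assumes the component is exported from the package root and that your backend delivers tokens over some transport; subscribeToTokens is a hypothetical helper standing in for that transport (SSE, WebSocket, etc.), not part of the SDK:

import { useEffect, useState } from "react";
import { StreamingMessageView } from "@stream-io/chat-react-native-ai";

// Hypothetical transport helper: calls onToken for each new token and
// returns an unsubscribe function.
declare function subscribeToTokens(
  messageId: string,
  onToken: (token: string) => void
): () => void;

const AssistantMessage = ({ messageId }: { messageId: string }) => {
  const [text, setText] = useState("");

  useEffect(() => {
    // Append each token as it arrives instead of replacing the whole
    // message, so StreamingMessageView only animates the new characters.
    const unsubscribe = subscribeToTokens(messageId, (token) => {
      setText((prev) => prev + token);
    });
    return unsubscribe;
  }, [messageId]);

  return <StreamingMessageView text={text} />;
};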

We’ll keep adding components. If you want to see something added, open an issue.

You can find a complete ChatGPT clone sample that uses these components here.

Installation

Install @stream-io/chat-react-native-ai and peer dependencies:

yarn add @stream-io/chat-react-native-ai react-native-reanimated react-native-worklets react-native-gesture-handler react-native-svg victory-native @shopify/react-native-skia @babel/plugin-proposal-export-namespace-from

Add the required Babel plugins to babel.config.js:

module.exports = {
  presets: ["module:@react-native/babel-preset"],
  plugins: [
    // ... rest of your @babel plugins
    "@babel/plugin-proposal-export-namespace-from",
    // react-native-worklets/plugin has to be the last one
    "react-native-worklets/plugin",
  ],
};

To enable speech-to-text, add the required capabilities:

iOS

Within Info.plist:

<key>NSMicrophoneUsageDescription</key>
<string>$(PRODUCT_NAME) would like to access your microphone to capture your voice.</string>
<key>NSSpeechRecognitionUsageDescription</key>
<string>$(PRODUCT_NAME) would like to access speech recognition to transcribe your voice.</string>

Choose any permission strings you want.

Android

Within android/app/AndroidManifest.xml:

<uses-permission android:name="android.permission.RECORD_AUDIO" />

Expo

If you use managed Expo, the config plugin can add these permissions for you.

Add it to your app.json/app.config.[js|ts] file like so:

"plugins": [
      // ... rest of your plugins
      [
        "@stream-io/chat-react-native-ai",
        {
          "dictationMicrophoneUsageDescription": "$(PRODUCT_NAME) would like to access your microphone to capture your voice.",
          "dictationSpeechRecognitionUsageDescription": "$(PRODUCT_NAME) would like to access speech recognition to transcribe your voice."
        }
      ]
    ],

If you use bare Expo, follow the manual iOS/Android steps above.

Optional features

Optional dependencies enable extra features.

Media Picker

The media picker lets users select images or take photos.

The SDK supports two libraries:

react-native-image-picker

This RN CLI library is meant to be used in vanilla React Native projects.

To install it, you can run:

yarn add react-native-image-picker

For image capture, add these permissions to Info.plist and AndroidManifest.xml, respectively:

Info.plist:

<key>NSCameraUsageDescription</key>
<string>$(PRODUCT_NAME) would like to use your camera to share an image in a message.</string>

AndroidManifest.xml:

<uses-permission android:name="android.permission.CAMERA" />

expo-image-picker

This Expo library is meant to be used in Expo projects.

To install it, you can run:

npx expo install expo-image-picker

Then refer to the expo-image-picker documentation on permissions and add the photosPermission and cameraPermission fields to its config plugin.
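
For reference, the plugin entry might look like this (the permission strings are placeholders; see the expo-image-picker docs for all supported fields):

"plugins": [
      // ... rest of your plugins
      [
        "expo-image-picker",
        {
          "photosPermission": "$(PRODUCT_NAME) would like to access your photos to share an image in a message.",
          "cameraPermission": "$(PRODUCT_NAME) would like to use your camera to share an image in a message."
        }
      ]
    ],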

Clipboard

Clipboard support lets users copy code blocks from rendered markdown.

The SDK has built-in support for two libraries:

  • @react-native-clipboard/clipboard (for RN CLI apps)
  • expo-clipboard (for Expo apps)
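
Install whichever matches your project:

yarn add @react-native-clipboard/clipboard

or:

npx expo install expo-clipboard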

Components

All components integrate with the React Native Chat SDK. See the developer guide to get started.

Wrap them in StreamTheme so theming is applied. Place it high in your component tree.
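
For example, a minimal sketch of wrapping your app, assuming StreamTheme is exported from the package root and RootNavigator stands in for your own app content:

import { StreamTheme } from "@stream-io/chat-react-native-ai";

const App = () => (
  <StreamTheme>
    {/* every AI component below the provider picks up the theme */}
    <RootNavigator />
  </StreamTheme>
);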

StreamingMessageView

StreamingMessageView renders markdown efficiently with syntax highlighting for common languages. It supports standard markdown: tables, inline code, headings, lists, etc.

It uses a letter-by-letter typewriter animation similar to ChatGPT.

Props:

  • text (string, required) - The text we want to pass as markdown.
  • paragraphTextNumberOfLines (boolean, optional) - Whether numberOfLines should be applied as a property to the markdown Paragraph and Text components. Particularly useful if we want to display the same message in a "cut" fashion (for example, when replying to someone).
  • rules (MarkdownRules, optional) - An object of MarkdownRules that is deeply merged with our default rules, based on the SimpleMarkdown parsing engine. Can be used to add custom rules or change existing ones. You can disable a rule by passing { [ruleName]: { match: () => null } }.
  • onLink ((url: string) => void, optional) - A function invoked whenever a link is pressed within the markdown-parsed text.
  • letterInterval (number, optional) - The interval, in milliseconds, at which the typewriter animation renders characters. Defaults to 0.
  • renderingLetterCount (number, optional) - The number of letters rendered per tick of the interval during the typewriter animation. Defaults to 2.

Example

const markdownText = `
# Heading

some text

## Another heading
`;

<StreamingMessageView
  text={markdownText}
  letterInterval={5} // every 5ms
  renderingLetterCount={3} // render 3 letters at a time
/>;

AITypingIndicatorView

The AITypingIndicatorView represents different states of the LLM, such as Thinking or Checking External Sources, depending on the states you've defined on your backend. The only property that needs to be passed is text, which is displayed with a shimmering animation.

Props:

  • text (string, required) - The text we want displayed inside of the view.

Example

<AITypingIndicatorView text={"Thinking of an answer..."} />

ComposerView

ComposerView provides a modern composer with attachments, a bottom sheet, speech-to-text, and a send button.

Props:

  • onSendMessage ((opts: { text: string; attachments?: MediaPickerState['assets']; custom?: Record<string, unknown> }) => Promise<void>, required) - A callback invoked whenever the send button is pressed. It receives the text, attachments, and any custom data added to the state.
  • bottomSheetOptions (BottomSheetOption[], optional) - An array of BottomSheetOption objects that render the extra options in the bottom sheet.
  • bottomSheetInsets ({ top: number; bottom: number; left: number; right: number }, optional) - Extra insets passed to the ComposerView so that the bottom sheet can extend properly beyond them.
  • isGenerating (boolean, optional) - Whether the LLM is currently generating a response. While true, the composer renders the stop-generating button in place of the send button.
  • stopGenerating (() => Promise<void>, optional) - A callback invoked when the stop-generating button is pressed.
  • mediaPickerService (AbstractMediaPickerService, optional) - An instance of the MediaPickerService you may inject from the outside for more fine-grained control over attachment state. You can create an instance as const customInstance = MediaPickerService(); it will automatically detect which media-picker library you're using.
  • state (StateStore<ComposerState>, optional) - A state store of the ComposerState you may inject from the outside for more fine-grained control over the composer state. You can create an instance as const customComposerState = createNewComposerStore().

Example

import { Alert } from "react-native";
import { useSafeAreaInsets } from "react-native-safe-area-context";

const bottomSheetOptions = [
  {
    title: "Create Image",
    subtitle: "Visualize anything",
    action: () => Alert.alert("Pressed on Create Image !"),
    Icon: DownloadArrow,
  },
  {
    title: "Thinking",
    subtitle: "Think longer for better answers",
    action: () => Alert.alert("Pressed on Thinking !"),
    Icon: Flag,
  },
];

const insets = useSafeAreaInsets(); // call inside a React component

<ComposerView
  onSendMessage={sendMessage}
  bottomSheetOptions={bottomSheetOptions}
  bottomSheetInsets={insets}
/>;
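
For completeness, here is a sketch of what the sendMessage callback above might look like. sendToLLMBackend is a hypothetical call to your own backend; the argument shape follows the onSendMessage signature documented above:

// Hypothetical backend call; replace with your own transport.
declare function sendToLLMBackend(payload: {
  prompt: string;
  attachments?: unknown[];
  custom?: Record<string, unknown>;
}): Promise<void>;

const sendMessage = async (opts: {
  text: string;
  attachments?: unknown[]; // MediaPickerState['assets'] in the actual signature
  custom?: Record<string, unknown>;
}) => {
  await sendToLLMBackend({
    prompt: opts.text,
    attachments: opts.attachments,
    custom: opts.custom,
  });
};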

SpeechToTextButton

The SpeechToTextButton turns voice input into text using the native iOS and Android speech frameworks. When tapped, it asks for microphone access, records audio, and forwards the recognized transcript directly to the ComposerState.

It uses the useDictation hook under the hood, which you can also use on its own for voice transcription outside of the button.

It takes a single property named options that has the following keys:

  • language (string, optional) - The language to transcribe from. Defaults to en-US.
  • intermediateResults (boolean, optional) - Whether to receive intermediate results during transcription, or only the final result once transcription is deemed done. Defaults to true.
  • silenceTimeoutMs (number, optional) - The number of milliseconds of silence after which transcription is deemed finished. Defaults to 2500.

Example

const options = {
  language: 'de-DE', // set the language to German
  intermediateResults: false, // only use the final result
  silenceTimeoutMs: 3500, // treat 3.5 seconds of silence as the end of input
};

<SpeechToTextButton options={options} />;

The SpeechToTextButton is already integrated into the ComposerView, but feel free to use it elsewhere as well.

Theming

Every component in the SDK is fully themeable, and the StreamTheme provider takes care of applying the theme for you.

To modify the theme, refer to our full-fledged theme object as seen here.

Example

In the example below, we introduce a dark color scheme through the theming system.

import { useColorScheme } from 'react-native';

// colorScheme comes from React Native's appearance API;
// useColorScheme() must be called inside a React component.
const colorScheme = useColorScheme();

const customTheme = {
  colors: colorScheme === 'dark'
    ? {
      accent_blue: '#4C9DFF',
      accent_red: '#FF636E',
      black: '#FFFFFF',
      code_block: '#1E1E22',
      grey: '#A1A1AA',
      grey_neutral: '#C5C5C8',
      grey_dark: '#71717A',
      grey_gainsboro: '#3F3F46',
      grey_whisper: '#27272F',
      overlay: '#000000CC',
      transparent: 'transparent',
      white: '#050509',
      white_smoke: '#121214',
      shimmer: '#FFFFFF',
    } : {}
}

<StreamTheme style={customTheme}>{children}</StreamTheme>