AI Integrations
The AI UI components are designed specifically for AI-first applications written in Jetpack Compose. When paired with our real-time Chat API, they make it easier to integrate with and render responses from LLM providers such as ChatGPT, Gemini, Anthropic, or any custom backend, by providing out-of-the-box components that render Markdown, code blocks, tables, thinking indicators, images, charts, and more.
This library includes the following components which assist with this task:
StreamingText - a composable that progressively reveals text content word-by-word with smooth animation, perfect for displaying AI-generated responses in real time, similar to ChatGPT. Includes built-in markdown rendering with support for code blocks, code fences, and Chart.js diagrams.
ChatComposer - a fully featured prompt composer with attachments and speech input.
SpeechToTextButton - a reusable button that records voice input and streams the recognized transcript back into your UI.
AITypingIndicator - a component that can display different states of the LLM (thinking, checking external sources, etc.).
You can find a complete ChatGPT clone sample that uses these components here.
Installation
The AI components are available via Maven Central. Add the dependency to your build.gradle.kts:

dependencies {
    implementation("io.getstream:stream-chat-android-ai-compose:$version")
}
Snapshot Releases
To use snapshot releases, you need to add the Sonatype snapshot repository to your settings.gradle.kts:
dependencyResolutionManagement {
    repositories {
        google()
        mavenCentral()
        maven { url = uri("https://central.sonatype.com/repository/maven-snapshots") }
    }
}

Find the latest snapshot version in the Maven Central snapshot repository.
StreamingText
StreamingText is a composable that renders markdown content efficiently, with code syntax highlighting for all major languages. It can render most standard markdown content, such as tables, images, and charts.
Under the hood, it implements word-by-word animation with a smooth progressive reveal, similar to ChatGPT.
Here's an example of how to use it:
import io.getstream.chat.android.ai.compose.ui.component.StreamingText

@Composable
fun AssistantMessage(
    text: String,
    isGenerating: Boolean
) {
    StreamingText(
        text = text,
        animate = isGenerating
    )
}

Additionally, you can control the animation speed with the chunkDelayMs parameter. The default value is 30ms.
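The component's internal chunking is not part of the public API, but the visible behavior of the animation can be pictured as emitting cumulative word prefixes, one per chunkDelayMs tick. A minimal plain-Kotlin sketch, where the revealChunks helper is hypothetical and not part of the library:

```kotlin
// Hypothetical sketch of a word-by-word progressive reveal: split the
// full text into words and build the cumulative prefix shown at each tick.
// The real StreamingText also handles markdown-aware rendering.
fun revealChunks(text: String): List<String> {
    val words = text.split(" ").filter { it.isNotEmpty() }
    return words.indices.map { i -> words.subList(0, i + 1).joinToString(" ") }
}
```

While animate is true, each successive prefix would replace the previous one on screen, one chunkDelayMs apart.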
AITypingIndicator
The AITypingIndicator is used to present different states of the LLM, such as “Thinking”, “Checking External Sources”, etc. You can specify any text you need. There’s also a nice animation when the indicator is shown.
import io.getstream.chat.android.ai.compose.ui.component.AITypingIndicator

@Composable
fun ThinkingIndicator() {
    AITypingIndicator(
        label = { Text("Thinking") }
    )
}

ChatComposer
The ChatComposer gives users a modern text-entry surface with attachment previews and an integrated send button. It manages its state internally and invokes your onSendClick callback with a MessageData payload whenever the user taps send.
import io.getstream.chat.android.ai.compose.ui.component.ChatComposer
import io.getstream.chat.android.ai.compose.ui.component.MessageData

@Composable
fun MessageComposer(isStreaming: Boolean) {
    ChatComposer(
        onSendClick = { messageData: MessageData ->
            // Handle message
            sendMessage(messageData.text, messageData.attachments)
        },
        onStopClick = {
            // Handle stopping AI streaming
            stopStreaming()
        },
        isStreaming = isStreaming,
    )
}

The composer automatically shows different buttons based on state: a stop button when streaming, a send button when text is entered, and a voice button when text is empty. It also automatically resets attachments once a message is sent.
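The button selection described above can be sketched as a pure function. This is an illustration only: the ComposerButton enum and trailingButton function are hypothetical names, and ChatComposer implements this logic internally without exposing such an API.

```kotlin
// Hypothetical sketch of the composer's trailing-button selection.
enum class ComposerButton { STOP, SEND, VOICE }

fun trailingButton(isStreaming: Boolean, text: String): ComposerButton = when {
    isStreaming -> ComposerButton.STOP        // AI response in flight
    text.isNotBlank() -> ComposerButton.SEND  // user has typed something
    else -> ComposerButton.VOICE              // empty input: offer speech input
}
```

Note that the streaming check takes priority, so the stop button is shown even if the user has already typed the next message.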
SpeechToTextButton
SpeechToTextButton turns voice input into text using Android's SpeechRecognizer. When tapped, it asks for microphone access, records audio, and forwards the recognized transcript through its onTextRecognized callback.
import io.getstream.chat.android.ai.compose.ui.component.SpeechToTextButton
import io.getstream.chat.android.ai.compose.ui.component.rememberSpeechToTextButtonState

@Composable
fun VoiceInput() {
    val state = rememberSpeechToTextButtonState()
    SpeechToTextButton(
        state = state,
        onTextRecognized = { transcript ->
            // Handle recognized text
            appendToInput(transcript)
        }
    )
}

These components are designed to work seamlessly with our existing Chat SDK. Our developer guide explains how to get started building AI integrations with Stream and Jetpack Compose.