
Build an AI Assistant for Android Using Compose

15 min read
Andre Rego
Published December 5, 2025

In this tutorial, we will demonstrate how easy it is to create an AI assistant for Android using Stream Chat. In this example, we will use the Stream Chat integration with Vercel's AI SDK; however, developers are free to use whichever LLM provider they like and still benefit from Stream's rich UI support for Markdown, tables, code samples, charts, etc. To follow along with this tutorial, we recommend creating a free account and checking out our main Android chat SDK tutorial as a refresher.

Here's a video of the end result:

Running the Backend

Before adding AI features to our Android app, let's set up our Node.js backend. The backend will expose two endpoints for starting and stopping an AI agent for a particular channel. Once started, the agent listens for all new messages in the channel, forwards them to the LLM provider of your choice, and delivers the result by sending a message and progressively updating its text.

The sample also supports different typing-indicator states (for example, "Thinking" or "Checking external sources"), suggestions, summaries, memory with mem0, and more.

You can find a working implementation of the backend here.

1. Install dependencies

bash
npm install @stream-io/chat-ai-sdk express cors dotenv

@stream-io/chat-ai-sdk brings the Agent, AgentManager, tool helpers, and the streaming logic. Express/cors/dotenv provide the basic HTTP server.

2. Configure Stream credentials

Create a .env file with:

STREAM_API_KEY=your_key
STREAM_API_SECRET=your_secret
OPENAI_API_KEY=your_openai_api_key
ANTHROPIC_API_KEY=your_anthropic_api_key
XAI_API_KEY=your_xai_api_key
GOOGLE_API_KEY=your_google_api_key
MEM0_API_KEY=your_mem0_api_key

Apart from the Stream API key and secret, every other API key is optional, but you need to configure at least one LLM provider for the AI to work.

Load it at the top of your entry file:

typescript
import 'dotenv/config';

3. Bootstrap Express and the AgentManager

Add the following to your index.ts (or similar):

typescript
import express from 'express';
import cors from 'cors';
import {
  AgentManager,
  AgentPlatform,
  createDefaultTools,
} from '@stream-io/chat-ai-sdk';

const app = express();
app.use(express.json());
app.use(cors({ origin: '*' }));

const buildAgentUserId = (channelId: string): string =>
  `ai-bot-${channelId.replace(/!/g, '')}`;

const agentManager = new AgentManager({
  serverToolsFactory: () => createDefaultTools(),
  agentIdResolver: buildAgentUserId,
});

const normalizeChannelId = (raw: string): string => {
  const trimmed = raw?.trim() ?? '';
  if (!trimmed) return trimmed;
  const parts = trimmed.split(':');
  return parts.length > 1 ? parts[1] : trimmed;
};

AgentManager owns the agent cache, pending state, and inactivity cleanup. Each channel uses an ID pattern such as ai-bot-{channelId}; for example, the agent user for channel general becomes ai-bot-general.

4. Starting an Agent

Next, let's add the endpoint that starts the AI agent. It validates the payload, normalizes the channel ID, and then asks the AgentManager to start or reuse an agent:

typescript
app.post('/start-ai-agent', async (req, res) => {
  const {
    channel_id,
    channel_type = 'messaging',
    platform = AgentPlatform.ANTHROPIC,
    model,
  } = req.body;

  if (!channel_id) {
    res.status(400).json({ error: 'Missing channel_id' });
    return;
  }

  const channelId = normalizeChannelId(channel_id);
  if (!channelId) {
    res.status(400).json({ error: 'Invalid channel_id' });
    return;
  }

  try {
    await agentManager.startAgent({
      userId: buildAgentUserId(channelId),
      channelId,
      channelType: channel_type,
      platform,
      model,
    });
    res.json({ message: 'AI Agent started' });
  } catch (error) {
    res.status(500).json({
      error: 'Failed to start AI Agent',
      reason: (error as Error).message,
    });
  }
});

5. Stopping an Agent

To stop the agent and clean up its cache entry, add the following endpoint:

typescript
app.post('/stop-ai-agent', async (req, res) => {
  const channelId = normalizeChannelId(req.body?.channel_id ?? '');
  if (!channelId) {
    res.status(400).json({ error: 'Invalid channel_id' });
    return;
  }

  try {
    await agentManager.stopAgent(buildAgentUserId(channelId));
    res.json({ message: 'AI Agent stopped' });
  } catch (error) {
    res.status(500).json({
      error: 'Failed to stop AI Agent',
      reason: (error as Error).message,
    });
  }
});

6. Start the server

Finally, start the HTTP server and log the address it is listening on:

typescript
const port = process.env.PORT || 3000;
app.listen(port, () => {
  console.log(`Server is running on http://localhost:${port}`);
});

You can start the server by running:

npm start

Android Integration

Next, let's set up things on the Android side. You can find a working implementation of this project here.

1. Add the required dependencies

Add the following dependencies to your build.gradle.kts file:

kotlin
dependencies {
    // Stream Chat Android AI Compose
    implementation("io.getstream:stream-chat-android-ai-compose:$version")

    // Stream Chat Android SDK (required for chat functionality)
    implementation("io.getstream:stream-chat-android-client:$streamChatVersion")
    implementation("io.getstream:stream-chat-android-compose:$streamChatVersion")
    implementation("io.getstream:stream-chat-android-offline:$streamChatVersion")
    implementation("io.getstream:stream-chat-android-state:$streamChatVersion")

    // Networking (required for API calls to your backend)
    implementation("com.squareup.retrofit2:retrofit:$retrofitVersion")
    implementation("com.squareup.retrofit2:converter-moshi:$retrofitVersion")
    implementation("com.squareup.okhttp3:okhttp:$okHttpVersion")
    implementation("com.squareup.moshi:moshi:$moshiVersion")
    implementation("com.squareup.moshi:moshi-kotlin:$moshiVersion")

    // Jetpack Compose (required for UI)
    implementation(platform("androidx.compose:compose-bom:$composeBomVersion"))
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.material3:material3")
    implementation("androidx.activity:activity-compose:$activityComposeVersion")
    implementation("androidx.lifecycle:lifecycle-runtime-ktx:$lifecycleVersion")
    implementation("androidx.lifecycle:lifecycle-viewmodel-compose:$lifecycleVersion")

    // Coroutines (required for async operations)
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:$coroutinesVersion")

    // Core Android
    implementation("androidx.core:core-ktx:$coreKtxVersion")

    // Optional: For debugging and development
    debugImplementation("androidx.compose.ui:ui-tooling-preview")
    implementation("com.squareup.okhttp3:logging-interceptor:$okHttpVersion") // For API call logging
}

Note: Replace the version placeholders with actual versions. You can find the latest versions in the Maven Central repository. For Stream Chat libraries, check the releases page.

Optional dependencies:

  • ui-tooling-preview: Only needed for Compose previews in Android Studio
  • logging-interceptor: Useful for debugging API calls, can be omitted in production

2. Setup Stream Chat

Create your Application class (App.kt) and bootstrap ChatClient with the necessary plugins:

kotlin
import android.app.Application
import io.getstream.chat.android.client.ChatClient
import io.getstream.chat.android.client.logger.ChatLogLevel
import io.getstream.chat.android.models.User
import io.getstream.chat.android.offline.plugin.factory.StreamOfflinePluginFactory
import io.getstream.chat.android.state.plugin.config.StatePluginConfig
import io.getstream.chat.android.state.plugin.factory.StreamStatePluginFactory

class App : Application() {

    lateinit var chatDependencies: ChatDependencies
        private set

    override fun onCreate() {
        super.onCreate()
        chatDependencies = ChatDependencies(
            baseUrl = "http://10.0.2.2:3000", // Android emulator localhost
            enableLogging = BuildConfig.DEBUG,
        )
        initializeStreamChat()
    }

    private fun initializeStreamChat() {
        val logLevel = if (BuildConfig.DEBUG) ChatLogLevel.ALL else ChatLogLevel.NOTHING

        val offlinePluginFactory = StreamOfflinePluginFactory(appContext = applicationContext)
        val statePluginFactory = StreamStatePluginFactory(
            config = StatePluginConfig(backgroundSyncEnabled = true, userPresence = true),
            appContext = applicationContext,
        )

        val chatClient = ChatClient.Builder("YOUR_STREAM_API_KEY", applicationContext)
            .withPlugins(offlinePluginFactory, statePluginFactory)
            .logLevel(logLevel)
            .build()

        val user = User(
            id = "user_id",
            name = "User Name",
            image = "https://example.com/avatar.png",
        )

        // Token should be generated on your backend
        val token = "USER_TOKEN_FROM_YOUR_BACKEND"

        chatClient.connectUser(user, token)
            .enqueue { result ->
                if (result.isFailure) {
                    // Handle connection error
                }
            }
    }
}

Key points:

  • ChatClient.Builder must be configured with your Stream API key; the offline and state plugins enable local caching and real-time state updates.
  • connectUser has to be called with a user token generated on your server (see the sketch below).
  • For the Android emulator, use http://10.0.2.2:3000 to reach localhost:3000 on your development machine.
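
In production, fetch the token from your own backend instead of hard-coding it. As a rough sketch (the /token endpoint and fetchTokenFromBackend() helper are hypothetical, not part of this tutorial's backend), the SDK's TokenProvider lets the client load tokens on demand:

kotlin
import io.getstream.chat.android.client.token.TokenProvider

// Hypothetical helper that calls your backend's /token endpoint and returns a JWT for this user.
fun fetchTokenFromBackend(userId: String): String = TODO("Call your backend")

val tokenProvider = object : TokenProvider {
    override fun loadToken(): String = fetchTokenFromBackend(userId = "user_id")
}

chatClient.connectUser(user, tokenProvider)
    .enqueue { result ->
        if (result.isFailure) {
            // Handle connection error
        }
    }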

3. Build the AI agent service layer

Create a repository interface and service implementation that centralizes all calls to your Stream Chat AI proxy. The service uses Retrofit for HTTP communication:


ChatAiRepository.kt:

This interface defines the contract for all AI-related operations. It's a clean abstraction that your ViewModels will use, without needing to know about HTTP, JSON, or network details. The repository pattern makes your code testable and maintainable.

kotlin
interface ChatAiRepository {

    suspend fun startAIAgent(
        channelType: String,
        channelId: String,
        platform: String,
        model: String? = null,
    ): Result<Unit>

    suspend fun stopAIAgent(channelId: String): Result<Unit>

    suspend fun summarize(
        text: String,
        platform: String,
        model: String? = null,
    ): Result<String>
}
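
Because ViewModels only ever see this interface, unit tests can swap in a trivial fake with no networking; a minimal sketch (the canned values are illustrative):

kotlin
// Hypothetical fake used in unit tests; returns canned results instead of hitting the network.
class FakeChatAiRepository : ChatAiRepository {

    override suspend fun startAIAgent(
        channelType: String,
        channelId: String,
        platform: String,
        model: String?,
    ): Result<Unit> = Result.success(Unit)

    override suspend fun stopAIAgent(channelId: String): Result<Unit> = Result.success(Unit)

    override suspend fun summarize(
        text: String,
        platform: String,
        model: String?,
    ): Result<String> = Result.success("A canned summary")
}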

ChatAiApi.kt:

This Retrofit interface maps Kotlin functions directly to your backend HTTP endpoints. Retrofit handles the HTTP details; you just define the function signature and it does the rest. The first two functions correspond to the endpoints we created above, while /summarize is part of the full sample backend implementation.

kotlin
import retrofit2.http.Body
import retrofit2.http.POST

interface ChatAiApi {

    @POST("/start-ai-agent")
    suspend fun startAIAgent(@Body request: StartAIAgentRequest): AIAgentResponse

    @POST("/stop-ai-agent")
    suspend fun stopAIAgent(@Body request: StopAIAgentRequest): AIAgentResponse

    @POST("/summarize")
    suspend fun summarize(@Body request: SummarizeRequest): SummarizeResponse
}

Models.kt:

These data classes represent the request and response formats for your API. Because Moshi maps Kotlin property names straight to JSON keys, the fields use the same snake_case names your Node.js backend expects and returns.

kotlin
data class StartAIAgentRequest(
    val channel_type: String,
    val channel_id: String,
    val platform: String,
    val model: String?,
)

data class StopAIAgentRequest(
    val channel_id: String,
)

data class SummarizeRequest(
    val text: String,
    val platform: String,
    val model: String? = null,
)

data class AIAgentResponse(
    val message: String,
    val data: List<String> = emptyList(),
)

data class SummarizeResponse(
    val summary: String,
)

data class ErrorResponse(
    val error: String,
    val reason: String? = null,
)
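
As a concrete example, a StartAIAgentRequest built like this serializes (with the Moshi setup shown later) to the same JSON shape the Express handler destructures; note that Moshi omits null fields by default:

kotlin
// Illustrative values; `model` is null, so Moshi leaves it out of the JSON.
val request = StartAIAgentRequest(
    channel_type = "messaging",
    channel_id = "demo-channel",
    platform = "openai",
    model = null,
)
// Serializes to: {"channel_type":"messaging","channel_id":"demo-channel","platform":"openai"}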

ChatAiService.kt:

This is the concrete implementation of ChatAiRepository. It handles all the HTTP communication, error parsing, and converts network responses into Kotlin Result types. The executeApiCall wrapper provides consistent error handling across all API calls.

kotlin
import com.squareup.moshi.Moshi
import retrofit2.HttpException

class ChatAiService(
    private val chatAiApi: ChatAiApi,
    private val moshi: Moshi,
) : ChatAiRepository {

    override suspend fun startAIAgent(
        channelType: String,
        channelId: String,
        platform: String,
        model: String?,
    ): Result<Unit> = executeApiCall {
        chatAiApi.startAIAgent(
            request = StartAIAgentRequest(
                channel_type = channelType,
                channel_id = channelId,
                platform = platform,
                model = model,
            ),
        )
    }

    override suspend fun stopAIAgent(channelId: String): Result<Unit> = executeApiCall {
        chatAiApi.stopAIAgent(
            request = StopAIAgentRequest(channel_id = channelId),
        )
    }

    override suspend fun summarize(
        text: String,
        platform: String,
        model: String?,
    ): Result<String> = executeApiCall {
        chatAiApi.summarize(
            request = SummarizeRequest(
                text = text,
                platform = platform,
                model = model,
            ),
        ).summary
    }

    // Generic error handling wrapper - converts exceptions to Result types
    private suspend fun <T> executeApiCall(apiCall: suspend () -> T): Result<T> {
        return try {
            Result.success(apiCall())
        } catch (e: HttpException) {
            handleHttpException(e) // Parse HTTP error responses
        } catch (e: Exception) {
            Result.failure(e) // Handle network errors, timeouts, etc.
        }
    }

    private fun <T> handleHttpException(e: HttpException): Result<T> {
        val errorMessage = try {
            e.fromErrorResponse() ?: e.toHttpString() // Try to parse JSON error, fall back to HTTP status
        } catch (_: Exception) {
            e.toHttpString()
        }
        return Result.failure(RuntimeException(errorMessage))
    }

    // Extension function to parse error response body as JSON
    private fun HttpException.fromErrorResponse(): String? {
        val errorBody = response()?.errorBody()?.string() ?: return null
        val errorResponse = moshi.adapter(ErrorResponse::class.java).fromJson(errorBody) ?: return null
        val reason = errorResponse.reason?.let { ": $it" }.orEmpty()
        return errorResponse.error + reason
    }
}

private fun HttpException.toHttpString(): String = "HTTP ${code()}: ${message()}"
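
Callers then handle success and failure uniformly, whatever the underlying cause; an illustrative call site:

kotlin
import android.util.Log

// Illustrative usage (run from a coroutine, since startAIAgent is a suspend function);
// the channel values are placeholders.
val result = chatAiRepository.startAIAgent(
    channelType = "messaging",
    channelId = "demo-channel",
    platform = "openai",
)
result.fold(
    onSuccess = { /* the agent is now listening on this channel */ },
    onFailure = { error -> Log.e("ChatAi", "Failed to start AI agent", error) },
)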

NetworkModule.kt:

This utility class sets up all the networking infrastructure. It creates Moshi for JSON parsing, OkHttp for HTTP requests, and Retrofit for type-safe API calls. Having this in a separate module makes it easy to swap implementations or add features like caching later.

kotlin
import com.squareup.moshi.Moshi
import com.squareup.moshi.kotlin.reflect.KotlinJsonAdapterFactory
import okhttp3.OkHttpClient
import okhttp3.logging.HttpLoggingInterceptor
import retrofit2.Retrofit
import retrofit2.converter.moshi.MoshiConverterFactory

class NetworkModule {

    fun createMoshi(): Moshi = Moshi.Builder()
        .addLast(KotlinJsonAdapterFactory())
        .build()

    fun createOkHttpClient(enableLogging: Boolean = true): OkHttpClient =
        OkHttpClient.Builder()
            .apply {
                if (enableLogging) {
                    addInterceptor(
                        HttpLoggingInterceptor().apply {
                            level = HttpLoggingInterceptor.Level.BODY
                        },
                    )
                }
            }
            .build()

    fun createRetrofit(
        baseUrl: String,
        okHttpClient: OkHttpClient,
        moshi: Moshi,
    ): Retrofit {
        // Retrofit requires base URLs to end with a slash
        val normalizedBaseUrl = if (baseUrl.endsWith("/")) {
            baseUrl
        } else {
            "$baseUrl/"
        }

        return Retrofit.Builder()
            .baseUrl(normalizedBaseUrl)
            .client(okHttpClient)
            .addConverterFactory(MoshiConverterFactory.create(moshi)) // Use Moshi for JSON conversion
            .build()
    }
}

ChatDependencies.kt:

This class wires together all the networking components and creates the repository. It's initialized once in your Application class and provides a single entry point for all AI operations. The initialization happens in the init block, creating the dependency chain: NetworkModule → Retrofit → API → Service → Repository.

kotlin
class ChatDependencies(
    baseUrl: String,
    enableLogging: Boolean = true,
    networkModule: NetworkModule = NetworkModule(),
) {

    val chatAiRepository: ChatAiRepository

    init {
        // Build the dependency chain: Moshi → OkHttp → Retrofit → API → Service → Repository
        val moshi = networkModule.createMoshi()
        val okHttpClient = networkModule.createOkHttpClient(enableLogging)
        val retrofit = networkModule.createRetrofit(
            baseUrl = baseUrl,
            okHttpClient = okHttpClient,
            moshi = moshi,
        )
        val chatAiApi = retrofit.create(ChatAiApi::class.java)
        chatAiRepository = ChatAiService(chatAiApi, moshi)
    }
}

Each chat channel kicks off its own AI agent (/start-ai-agent) and can request message summaries for conversation titles. Update the baseUrl to your deployed backend service when you are ready to test and ship; for the Android emulator, use http://10.0.2.2:3000 to reach localhost:3000 on your development machine, as sketched below.
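
One simple way to do that is to switch the URL on build type; a minimal sketch in App.kt (the production URL is a placeholder):

kotlin
// Hypothetical environment switch; replace the release URL with your deployed backend.
val baseUrl = if (BuildConfig.DEBUG) {
    "http://10.0.2.2:3000" // Emulator alias for localhost on the host machine
} else {
    "https://your-backend.example.com"
}
chatDependencies = ChatDependencies(
    baseUrl = baseUrl,
    enableLogging = BuildConfig.DEBUG,
)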


4. Create the ChatViewModel

The ChatViewModel manages chat conversation state and interactions. It handles message sending, AI agent management, and UI state updates by observing Stream Chat events:

ChatViewModel.kt:

kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import io.getstream.chat.android.client.ChatClient
import io.getstream.chat.android.client.channel.subscribeFor
import io.getstream.chat.android.client.events.AIIndicatorClearEvent
import io.getstream.chat.android.client.events.AIIndicatorStopEvent
import io.getstream.chat.android.client.events.AIIndicatorUpdatedEvent
import io.getstream.chat.android.client.events.ChatEvent
import io.getstream.chat.android.client.extensions.cidToTypeAndId
import io.getstream.chat.android.compose.ui.util.StorageHelperWrapper
import io.getstream.chat.android.models.EventType
import io.getstream.chat.android.models.Message as StreamMessage
import io.getstream.chat.android.models.User
import io.getstream.chat.android.state.extensions.watchChannelAsState
import java.util.UUID
import kotlinx.coroutines.flow.*

// Note: toChatMessage(), toAssistantState(), currentUserId, and the private
// sendMessage(cid, message) overload are small app-level helpers; see the sample project.
class ChatViewModel(
    private val chatClient: ChatClient,
    private val chatAiRepository: ChatAiRepository,
    conversationId: String?,
    private val storageHelper: StorageHelperWrapper, // Used by the attachment handlers (omitted here for brevity)
) : ViewModel() {

    private val cid = MutableStateFlow(conversationId)

    private val _uiState = MutableStateFlow(ChatUiState())
    val uiState: StateFlow<ChatUiState> = _uiState.asStateFlow()

    init {
        // Reactive flow: when channel ID is available, set up observation and AI agent
        cid.filterNotNull()
            .onEach { _uiState.update { state -> state.copy(isLoading = state.messages.isEmpty()) } }
            .onEach(::startAIAgentForChannel) // Start AI agent for this channel
            .onEach { cid ->
                chatClient.channel(cid).subscribeFor<ChatEvent>(::handleChatEvent) // Listen for events
            }
            .flatMapLatest { cid ->
                chatClient.watchChannelAsState(cid = cid, messageLimit = 30) // Watch channel state
            }
            .filterNotNull()
            .flatMapLatest { channelState ->
                combine(channelState.channelData, channelState.messages) {
                    channelState.toChannel() // Combine data and messages into channel object
                }
            }
            .onEach { channel ->
                // Convert Stream messages to UI messages and update state
                val messages = channel.messages
                    .mapNotNull { message -> message.toChatMessage() }
                    .reversed() // Newest first (bottom of list)
                _uiState.update { state ->
                    state.copy(
                        isLoading = false,
                        title = channel.name.takeIf(String::isNotBlank) ?: "New Chat",
                        messages = messages,
                    )
                }
            }
            .launchIn(viewModelScope)
    }

    fun onInputTextChange(text: String) {
        _uiState.update { state -> state.copy(inputText = text) }
    }

    fun sendMessage() {
        val text = _uiState.value.inputText.trim()
        if (text.isEmpty() || _uiState.value.assistantState.isBusy()) {
            return // Don't send empty messages or if assistant is busy
        }

        val message = StreamMessage(
            text = text,
            user = User(id = currentUserId.value),
        )

        // Optimistically update UI (show message immediately)
        _uiState.update { state ->
            state.copy(
                messages = listOfNotNull(message.toChatMessage()) + state.messages,
                inputText = "",
                assistantState = ChatUiState.AssistantState.Thinking, // Show thinking indicator
            )
        }

        val cid = cid.value
        if (cid == null) {
            // Create a new channel before sending the first message
            chatClient.createChannel(
                channelType = "messaging",
                channelId = UUID.randomUUID().toString(),
                memberIds = listOf(currentUserId.value),
            ).enqueue { result ->
                result.onSuccess { newChannel ->
                    this@ChatViewModel.cid.value = newChannel.cid // Trigger channel observation
                    sendMessage(newChannel.cid, message)
                }
            }
        } else {
            sendMessage(cid, message) // Channel exists, send directly
        }
    }

    fun stopStreaming() {
        val cid = cid.value ?: return
        chatClient.channel(cid)
            .sendEvent(EventType.AI_TYPING_INDICATOR_STOP)
            .enqueue()
    }

    private suspend fun startAIAgentForChannel(cid: String): Result<Unit> {
        val (channelType, channelId) = cid.cidToTypeAndId()
        return chatAiRepository.startAIAgent(
            channelType = channelType,
            channelId = channelId,
            platform = "openai",
        )
    }

    // Handle Stream Chat events (typing indicators, new messages, etc.)
    private fun handleChatEvent(event: ChatEvent) {
        when (event) {
            is AIIndicatorUpdatedEvent -> {
                // AI state changed (thinking → generating, etc.)
                _uiState.update { state ->
                    state.copy(
                        assistantState = event.aiState.toAssistantState(),
                    )
                }
            }
            is AIIndicatorClearEvent, is AIIndicatorStopEvent -> {
                // AI finished or stopped
                _uiState.update { state ->
                    state.copy(assistantState = ChatUiState.AssistantState.Idle)
                }
            }
            else -> Unit
        }
    }
}

ChatUiState.kt:

This data class represents all the UI state for a chat conversation; it's what your Compose UI observes and reacts to. The nested Message class, its Role hierarchy, and the AssistantState enum provide type-safe representations of message roles and AI states. The helper functions at the bottom make it easy to check assistant status and get the current assistant message.

kotlin
data class ChatUiState(
    val isLoading: Boolean = false,
    val title: String = "New Chat",
    val actions: List<Action> = emptyList(),
    val messages: List<Message> = emptyList(),
    val inputText: String = "",
    val attachments: Set<Uri> = emptySet(),
    val assistantState: AssistantState = AssistantState.Idle,
) {

    enum class Action {
        NewChat,
        DeleteChat,
    }

    data class Message(
        val id: String,
        val role: Role,
        val content: String,
        val attachments: List<Attachment>,
        val isGenerating: Boolean,
    ) {
        sealed class Role {
            data object Assistant : Role()
            data object User : Role()
            data object Other : Role()
        }
    }

    enum class AssistantState {
        Idle,
        Thinking,
        CheckingSources,
        Generating,
        Error,
    }
}

fun ChatUiState.AssistantState.isBusy(): Boolean =
    this != ChatUiState.AssistantState.Idle && this != ChatUiState.AssistantState.Error

fun ChatUiState.getCurrentAssistantMessage(): ChatUiState.Message? =
    messages.firstOrNull()?.takeIf { message ->
        message.role == ChatUiState.Message.Role.Assistant
    }

The ChatViewModel automatically starts the AI agent when a channel is created, listens to Stream Chat events for typing indicators, and manages the conversation state. In the UI below, the StreamingText composable progressively reveals text content word-by-word with a smooth animation, perfect for displaying AI-generated responses.

ChatViewModelFactory.kt:

ViewModels are created by the Android system, so we need a factory to inject our dependencies (repository, storage helper, etc.). This factory implements ViewModelProvider.Factory and creates ChatViewModel instances with all required dependencies.

kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProvider
import io.getstream.chat.android.client.ChatClient
import io.getstream.chat.android.compose.ui.util.StorageHelperWrapper

class ChatViewModelFactory(
    private val chatAiRepository: ChatAiRepository,
    private val chatClient: ChatClient = ChatClient.instance(),
    private val conversationId: String?,
    private val storageHelper: StorageHelperWrapper,
) : ViewModelProvider.Factory {

    @Suppress("UNCHECKED_CAST")
    override fun <T : ViewModel> create(modelClass: Class<T>): T {
        require(modelClass.isAssignableFrom(ChatViewModel::class.java)) {
            "Unknown ViewModel class: ${modelClass.name}"
        }
        return ChatViewModel(
            chatClient = chatClient,
            chatAiRepository = chatAiRepository,
            conversationId = conversationId,
            storageHelper = storageHelper,
        ) as T
    }
}

The StorageHelperWrapper is provided by the Stream Chat Compose SDK and helps with reading attachment files from URIs.


5. Build the Conversation UI

Building the Conversation UI is straightforward with the available components from the Stream Chat Compose SDK and the AI components.

ChatScreen.kt:

This is the main chat screen composable. It observes the ViewModel state and automatically recomposes when messages arrive or the AI starts typing. The LazyColumn with reverseLayout = true shows newest messages at the bottom (like most chat apps). The AITypingIndicator appears when the assistant is thinking, and StreamingText creates the word-by-word animation effect for AI responses.

kotlin
import androidx.compose.foundation.layout.*
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.lazy.items
import androidx.compose.material3.*
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.unit.dp
import androidx.lifecycle.viewmodel.compose.viewModel
import io.getstream.chat.android.ai.compose.ui.component.AITypingIndicator
import io.getstream.chat.android.ai.compose.ui.component.ChatComposer
import io.getstream.chat.android.ai.compose.ui.component.StreamingText
import io.getstream.chat.android.compose.ui.util.StorageHelperWrapper

@Composable
fun ChatScreen(
    conversationId: String?,
    chatDependencies: ChatDependencies,
    onMenuClick: () -> Unit,
) {
    val chatViewModel = viewModel<ChatViewModel>(
        key = conversationId,
        factory = ChatViewModelFactory(
            chatAiRepository = chatDependencies.chatAiRepository,
            conversationId = conversationId,
            storageHelper = StorageHelperWrapper(LocalContext.current.applicationContext),
        ),
    )

    val state by chatViewModel.uiState.collectAsState()
    val isAssistantBusy = state.assistantState.isBusy()
    val currentAssistantMessage = state.getCurrentAssistantMessage()

    Scaffold(
        topBar = {
            ChatTopBar(
                title = state.title,
                onMenuClick = onMenuClick,
            )
        },
    ) { paddingValues ->
        LazyColumn(
            modifier = Modifier
                .fillMaxSize()
                .padding(paddingValues),
            reverseLayout = true,
        ) {
            // Assistant loading indicator
            item(key = "assistant_indicator") {
                when (state.assistantState) {
                    ChatUiState.AssistantState.Error -> {
                        Text(
                            text = "Sorry, I encountered an error. Please try again.",
                            color = MaterialTheme.colorScheme.error,
                            modifier = Modifier.padding(horizontal = 16.dp, vertical = 6.dp),
                        )
                    }
                    else -> {
                        // Show typing indicator with appropriate label based on AI state
                        val label = when (state.assistantState) {
                            ChatUiState.AssistantState.Thinking -> "Thinking"
                            ChatUiState.AssistantState.CheckingSources -> "Checking sources"
                            ChatUiState.AssistantState.Generating ->
                                // Only show "Generating" if no message has started yet
                                if (currentAssistantMessage == null || currentAssistantMessage.content.isEmpty()) {
                                    "Generating response"
                                } else null
                            else -> null
                        }
                        if (label != null) {
                            AITypingIndicator(
                                modifier = Modifier.padding(horizontal = 16.dp, vertical = 6.dp),
                                label = { Text(label) },
                            )
                        }
                    }
                }
            }

            // Messages
            items(
                key = ChatUiState.Message::id,
                items = state.messages,
            ) { message ->
                ChatMessageItem(
                    modifier = Modifier.padding(horizontal = 16.dp, vertical = 8.dp),
                    message = message,
                )
            }
        }

        ChatComposer(
            modifier = Modifier.padding(paddingValues),
            text = state.inputText,
            attachments = state.attachments,
            onAttachmentsAdded = chatViewModel::onAttachmentsAdded,
            onAttachmentRemoved = chatViewModel::onAttachmentRemoved,
            onTextChange = chatViewModel::onInputTextChange,
            onSendClick = chatViewModel::sendMessage,
            onStopClick = chatViewModel::stopStreaming,
            isStreaming = isAssistantBusy,
        )
    }
}

The ChatMessageItem composable renders individual messages using StreamingText for assistant messages:

kotlin
@Composable
fun ChatMessageItem(
    message: ChatUiState.Message,
    modifier: Modifier = Modifier,
) {
    val isUser = message.role is ChatUiState.Message.Role.User

    Row(
        modifier = modifier.fillMaxWidth(),
        horizontalArrangement = if (isUser) Arrangement.End else Arrangement.Start,
    ) {
        when (message.role) {
            ChatUiState.Message.Role.Assistant -> {
                StreamingText(
                    text = message.content,
                    animate = message.isGenerating, // Animate while generating
                )
            }
            ChatUiState.Message.Role.User -> {
                Spacer(modifier = Modifier.weight(.2f))
                MessageBubble(modifier = Modifier.weight(.8f, fill = false)) {
                    StreamingText(
                        text = message.content,
                        animate = false, // User messages don't animate
                    )
                }
            }
            ChatUiState.Message.Role.Other -> {
                MessageBubble(modifier = Modifier.weight(.8f, fill = false)) {
                    StreamingText(
                        text = message.content,
                        animate = false,
                    )
                }
                Spacer(modifier = Modifier.weight(.2f))
            }
        }
    }
}

@Composable
private fun MessageBubble(
    modifier: Modifier = Modifier,
    content: @Composable BoxScope.() -> Unit,
) {
    Box(
        modifier = modifier
            .clip(MaterialTheme.shapes.large)
            .background(MaterialTheme.colorScheme.tertiary.copy(alpha = 0.2f))
            .padding(horizontal = 16.dp, vertical = 12.dp),
    ) {
        content()
    }
}

AiChatApp.kt:

This is the root composable that manages the entire app structure. It sets up a navigation drawer for the conversation list and coordinates navigation between different chats. The ViewModelStore helper (sketched after the snippet below) ensures that ViewModels are properly scoped to each conversation, so state is preserved when switching between chats.

kotlin
@Composable
fun AiChatApp(
    chatDependencies: ChatDependencies,
    modifier: Modifier = Modifier,
) {
    val conversationListViewModel = viewModel<ConversationListViewModel>(
        factory = ConversationListViewModelFactory(),
    )
    val conversationListState by conversationListViewModel.uiState.collectAsState()

    val drawerState = rememberDrawerState(initialValue = DrawerValue.Closed)
    val scope = rememberCoroutineScope()

    var selectedConversationId by rememberSaveable { mutableStateOf<String?>(null) }
    var newChatRevision by rememberSaveable { mutableIntStateOf(0) }

    DismissibleNavigationDrawer(
        modifier = modifier,
        drawerState = drawerState,
        drawerContent = {
            ChatDrawer(
                conversations = conversationListState.conversations,
                selectedConversationId = selectedConversationId,
                onNewChatClick = {
                    selectedConversationId = null
                    newChatRevision++
                    scope.launch { drawerState.close() }
                },
                onConversationClick = { conversationId ->
                    selectedConversationId = conversationId
                    scope.launch { drawerState.close() }
                },
            )
        },
    ) {
        val navigationKey = selectedConversationId ?: "new-chat-$newChatRevision"
        AnimatedContent(targetState = navigationKey) { _ ->
            ViewModelStore(navigationKey) {
                ChatScreen(
                    conversationId = selectedConversationId,
                    chatDependencies = chatDependencies,
                    onMenuClick = { scope.launch { drawerState.open() } },
                )
            }
        }
    }
}
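
The ViewModelStore composable wrapped around ChatScreen is not an SDK component; it's a small app-level helper that owns a separate ViewModel scope per key. A minimal sketch of what such a helper could look like (our assumption, not necessarily the sample's exact code):

kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider
import androidx.compose.runtime.DisposableEffect
import androidx.compose.runtime.remember
import androidx.lifecycle.ViewModelStore
import androidx.lifecycle.ViewModelStoreOwner
import androidx.lifecycle.viewmodel.compose.LocalViewModelStoreOwner

// Hypothetical helper: owns a fresh ViewModelStore per key and clears it when the key leaves composition.
@Composable
fun ViewModelStore(key: String, content: @Composable () -> Unit) {
    val owner = remember(key) {
        object : ViewModelStoreOwner {
            override val viewModelStore = ViewModelStore()
        }
    }
    DisposableEffect(key) {
        onDispose { owner.viewModelStore.clear() }
    }
    CompositionLocalProvider(LocalViewModelStoreOwner provides owner) {
        content()
    }
}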

Conclusion

In this tutorial, you learned how to build a complete AI-powered chat experience by combining Stream Chat, the Stream Chat AI SDK, and a customizable Node.js backend.

We set up an AI agent capable of generating responses and streaming typing indicators, then integrated it into an Android app using Stream's Compose components for a polished, real-time chat interface.

With a few endpoints, a lightweight service layer, and fully customizable UI, you now have a foundation for creating rich conversational AI experiences on Android using any LLM provider you prefer.
