This tutorial guides you through building an AI assistant seamlessly integrated with the Stream Chat SDK for Jetpack Compose. You'll learn how to handle interactions on both the client and server sides by setting up and running your own simple backend.
The AI assistant leverages Stream's edge network for optimal performance and uses Anthropic and OpenAI APIs as its LLM providers. With the same approach, developers can integrate any LLM service with Stream Chat and still benefit from features such as generation indicators, markdown support, and tables.
For hobby projects and small businesses, Stream offers a free Maker Plan, making it accessible for innovative projects at any scale.
Key Features
The Android AI assistant app you'll build in this tutorial comes with the following main features:
A Channel Screen that allows users to browse channels in a list, with functionality for searching and swipe-to-delete actions.
An AI Chat Screen where users can interact with the AI assistant in real-time, enhanced by smooth animations for a seamless experience.
An AI Typing Indicator that dynamically reflects the assistant's current status—whether it's "thinking" or "generating"—keeping users informed about the AI's activity.
1. Chat SDK Installation
To begin, open Android Studio (Ladybug or newer) and create a new project with the following settings:
- Choose the Empty Activity template.
- Set the project name to AIChatTutorial.
- Define the package name as com.example.aichattutorial.
After creating and loading the project, you'll need to add the necessary dependencies for Stream Chat Compose SDK and AI assistant. The Stream SDKs required for this tutorial are available from MavenCentral, ensuring a straightforward setup for your development environment.
Open the build.gradle.kts file in your app module (or build.gradle if you're using the older Groovy DSL) and add the following dependencies to configure your project:
```kotlin
dependencies {
    val streamChat = "6.7.0"

    implementation("io.getstream:stream-chat-android-ai-assistant:$streamChat")
    implementation("io.getstream:stream-chat-android-compose:$streamChat")

    // Other Jetpack Compose dependencies
}
```
Next, open the settings.gradle file and configure the dependency repositories as shown below:
```kotlin
pluginManagement {
    repositories {
        google()
        mavenCentral()
        gradlePluginPortal()
        maven(url = "https://jitpack.io")
    }
}

dependencyResolutionManagement {
    repositoriesMode.set(RepositoriesMode.FAIL_ON_PROJECT_REPOS)
    repositories {
        google()
        mavenCentral()
        maven(url = "https://jitpack.io")
    }
}
```
2. Running the Backend Server
Before implementing advanced AI features in your Android app, you'll need to set up a Node.js backend server. To streamline the process, this tutorial uses a pre-built Node.js backend available on GitHub, minimizing configuration overhead.
This backend operates as follows:
- When the AI agent is active, it listens for new messages and forwards them to the LLM provider for processing. The response is sent back to the client as a message, and its text is updated as the response is generated.
- When the AI agent is stopped, all listeners are disposed, halting any further responses to the client.
The backend leverages the Anthropic API and OpenAI's Assistants API, showcasing features like function calling. By default, Anthropic is used, but OpenAI can be specified by passing openai as the platform parameter in the start-ai-agent request.
Additionally, the sample backend supports various states for the typing indicator, such as "Thinking" or "Checking external sources," to enhance user experience.
To run the server locally, clone the repository using the following command:
```bash
git clone https://github.com/GetStream/ai-assistant-nodejs.git
```
Next, set up the .env
file by adding the following keys to ensure proper configuration:
```
ANTHROPIC_API_KEY=insert_your_key
STREAM_API_KEY=insert_your_key
STREAM_API_SECRET=insert_your_secret
OPENAI_API_KEY=insert_your_key
OPENWEATHER_API_KEY=insert_your_key
```
The STREAM_API_KEY and STREAM_API_SECRET can be obtained from your app's dashboard on the Stream platform. To acquire an ANTHROPIC_API_KEY, create an account on Anthropic. Alternatively, if you prefer using OpenAI, you can generate an OPENAI_API_KEY by signing up on OpenAI's platform.
This example also demonstrates function calling using OpenAI, enabling the backend to execute specific functions when certain queries are detected. For instance, you can ask what the weather is like in a particular location. To support this feature, you'll need an API key from OpenWeather or another weather service. If you use a different service, you'll need to update the backend request configuration accordingly.
Once your keys are set, install the required dependencies by running the following command in your project root:

```bash
npm install
```

Then start the server with the project's start script (typically npm start). Once running, it listens for requests on localhost:3000.
3. Initialize the ChatClient
Now, return to Android Studio to set up the ChatClient. For simplicity, you can initialize everything in the Application class. Here's an example of how to do this:
```kotlin
class App : Application() {

    override fun onCreate() {
        super.onCreate()

        // Initialize the Stream logger
        AndroidStreamLogger.installOnDebuggableApp(this)

        /**
         * Initialize a global instance of the [ChatClient].
         * The ChatClient is the main entry point for all low-level operations on chat,
         * e.g., connect/disconnect a user to the server, send/update/pin a message, etc.
         */
        val logLevel = if (BuildConfig.DEBUG) ChatLogLevel.ALL else ChatLogLevel.NOTHING
        val offlinePluginFactory = StreamOfflinePluginFactory(appContext = applicationContext)
        val statePluginFactory = StreamStatePluginFactory(
            config = StatePluginConfig(
                backgroundSyncEnabled = true,
                userPresence = true
            ),
            appContext = applicationContext
        )

        val chatClient = ChatClient.Builder("zcgvnykxsfm8", applicationContext)
            .withPlugins(offlinePluginFactory, statePluginFactory)
            .logLevel(logLevel)
            .build()

        val user = User(
            id = "AIStreamUser1",
            name = "AI Android Stream",
            image = "https://picsum.photos/id/${Random.nextInt(1000)}/300/300"
        )

        val token = "replace_with_user_token"

        chatClient.connectUser(user, token).enqueue(object : Call.Callback<ConnectionData> {
            override fun onResult(result: io.getstream.result.Result<ConnectionData>) {
                if (result.isFailure) {
                    streamLog {
                        "Can't connect user. Please check the app README.md and ensure " +
                            "**Disable Auth Checks** is ON in the Dashboard"
                    }
                }
            }
        })
    }
}
```
- Create a StreamOfflinePluginFactory to enable offline functionality. This uses the OfflinePlugin class, which implements a robust caching mechanism based on side effects integrated into ChatClient functions.
- Establish a connection to Stream by initializing the ChatClient with an API key. The provided key is configured for a tutorial environment, but you can easily sign up for a free Chat trial to obtain your own API key for production use.
- Integrate the offline plugin factory into the ChatClient by using the withPlugins method, enabling offline storage capabilities. For production-grade applications, it is recommended to initialize the ChatClient in your Application class to ensure proper lifecycle management.
- Connect a user to the ChatClient instance by creating a User object. To establish the connection, call the connectUser method and provide an authorization token. Ideally, this token should be securely generated by your backend server as part of the authentication process (see the sketch after this list). For a comprehensive understanding of connecting and authenticating users, refer to the auth & connect docs. However, for quick setup and testing, you can use the Stream Token Generator to create an authorized token, which lets you validate functionality without implementing a full authentication flow.
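For reference, here is a minimal sketch of wiring connectUser to a backend-issued token instead of a hard-coded string. It assumes a hypothetical /token endpoint on your own server that returns the user's JWT as plain text; this endpoint is not part of the sample backend, so adjust the URL and parsing to whatever your backend actually exposes:

```kotlin
// Sketch only: the /token endpoint and its query parameter are assumptions,
// not part of the sample backend used in this tutorial.
val tokenProvider = object : TokenProvider {
    override fun loadToken(): String {
        // Fetch a fresh token for the current user from your backend.
        // The SDK is expected to call loadToken off the main thread,
        // so a blocking fetch is acceptable here.
        val url = URL("http://YOUR_LOCAL_IP_ADDRESS:3000/token?user_id=AIStreamUser1")
        return url.openStream().bufferedReader().use { it.readText().trim() }
    }
}

// connectUser also accepts a TokenProvider, which lets the SDK request
// a new token when the current one expires.
chatClient.connectUser(user, tokenProvider).enqueue { result ->
    if (result.isFailure) {
        streamLog { "Failed to connect user with a backend-issued token" }
    }
}
```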
4. Configure Network Module
To communicate with the AI backend server, you'll need to create a network module to handle HTTP requests. In this example, we use Retrofit for seamless API integration.
Before proceeding, make sure to identify your local IP address. On macOS or Linux, you can run the following command in your terminal (on Windows, use ipconfig instead):
```bash
ifconfig | grep "inet " | grep -Fv 127.0.0.1 | awk '{print $2}'
```
Next, create a file named AiService. This file will define the HTTP requests needed to interact with the AI backend server, using Retrofit for API calls:
```kotlin
interface AiService {

    @POST("start-ai-agent")
    suspend fun startAiAgent(@Body request: AiAgentRequest): AiAgentResponse

    @POST("stop-ai-agent")
    suspend fun stopAiAgent(@Body request: AiAgentRequest): AiAgentResponse
}

@Serializable
data class AiAgentRequest(
    val channel_id: String,
    val channel_type: String = "messaging"
)

@Serializable
data class AiAgentResponse(
    val message: String,
    val data: List<String>
)
```
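As noted earlier, the backend defaults to Anthropic but can be switched to OpenAI through a platform parameter on the start-ai-agent request. If you want to expose that choice from the Android client, one option is to extend the request model. This is a sketch only, so double-check the exact field name against the backend code:

```kotlin
@Serializable
data class AiAgentRequest(
    val channel_id: String,
    val channel_type: String = "messaging",
    // Assumption: the backend reads an optional "platform" field from the request body.
    // Pass "openai" here to use OpenAI instead of the default Anthropic integration.
    val platform: String = "anthropic"
)
```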
Next, create a file named NetworkModule. This file will be responsible for initializing the Retrofit instance and providing the AiService implementation for making API calls:
```kotlin
object NetworkModule {

    private val retrofit = Retrofit.Builder()
        .baseUrl("http://YOUR_LOCAL_IP_ADDRESS:3000/") // e.g., http://192.115.12.248:3000/
        .addConverterFactory(Json.asConverterFactory("application/json".toMediaType()))
        .build()

    val aiService = retrofit.create<AiService>()
}
```
Now the app is ready to communicate with the backend server. Keep in mind that connecting to a local IP over plain HTTP may require allowing cleartext traffic on Android 9 and above (for example, by setting android:usesCleartextTraffic="true" in your manifest). Let's move on to implementing the channel and message screens.
5. Building a Channel List
You have two approaches for creating the UI of the channel list:
- Using Stream's Low-Level API: This allows you to build a fully custom UI on top of Stream's state management layer, giving you maximum flexibility.
- Using Pre-Made UI Components: Stream provides ready-to-use UI components that simplify implementation. Most developers combine these two approaches, leveraging pre-made components where possible while customizing specific elements to meet design requirements.
To see how straightforward it is to implement a ChannelsScreen, add the following code to the MainActivity.kt file:
```kotlin
class MainActivity : ComponentActivity() {

    private val mainViewModel: MainViewModel by viewModels()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        setContent {
            val clientInitialisationState by ChatClient.instance()
                .clientState.initializationState.collectAsStateWithLifecycle()

            when (clientInitialisationState) {
                InitializationState.COMPLETE -> {
                    ChatTheme {
                        ChannelsScreen(
                            title = stringResource(id = R.string.app_name),
                            isShowingHeader = true,
                            onHeaderActionClick = { mainViewModel.createChannel() },
                            onChannelClick = { channel ->
                                startActivity(MessageActivity.getIntent(this, channel.cid))
                            },
                            onBackPressed = { finish() }
                        )
                    }
                }

                InitializationState.INITIALIZING -> {
                    Box(modifier = Modifier.fillMaxSize()) {
                        CircularProgressIndicator(
                            modifier = Modifier.align(Alignment.Center)
                        )
                    }
                }

                InitializationState.NOT_INITIALIZED -> {
                    Text(text = "Not initialized...")
                }
            }
        }
    }
}
```
Next, you'll create a MainViewModel
to handle the creation of a new empty channel. This channel will serve as a dedicated space for chatting with the AI bot.
```kotlin
class MainViewModel : ViewModel() {

    private val chatClient by lazy { ChatClient.instance() }

    fun createChannel() {
        viewModelScope.launch {
            val number = Random.nextInt(10000)
            chatClient.createChannel(
                channelType = "messaging",
                channelId = "channel$number",
                memberIds = listOf(chatClient.getCurrentUser()?.id.orEmpty()),
                extraData = mapOf()
            ).await().onSuccess {
                streamLog { "Created a new channel" }
            }.onError {
                streamLog { "error: $it" }
            }
        }
    }
}
```
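Because extraData is empty, the new channel has no display name, so the channel list typically falls back to showing member names. If you'd prefer a friendlier title, you can pass one through the standard name field; a small optional tweak to the same createChannel call:

```kotlin
chatClient.createChannel(
    channelType = "messaging",
    channelId = "channel$number",
    memberIds = listOf(chatClient.getCurrentUser()?.id.orEmpty()),
    // "name" is a standard channel field that the default channel list item renders as the title.
    extraData = mapOf("name" to "AI Assistant #$number")
).await()
```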
You may have noticed an error with startActivity
because the MessageActivity
hasn’t been created yet. Let's move on to implementing the AI message screen.
6. Building an AI Messages Screen
Now it’s time to implement an AI message screen that allows you to have a conversation with an AI bot. The stream-chat-android-ai-assistant package provides versatile AI assistant UI components, such as AiMessagesScreen, AiTypingIndicator, and more. You can simply use the pre-built AI messages screen, which has the following key features:
- Full markdown support, including code highlighting, tables, and LaTeX formulas.
- Real-time display of AI-generated assistant messages, accompanied by a dynamic typing animation.
- Seamless integration with the message screens via the AiTypingIndicator, which represents the current state of the LLM.
- Complete implementation of UI elements using Jetpack Compose for a modern and declarative approach.
First things first: create a MessageActivity. Then you can quickly implement the AI messages screen using the AiMessagesScreen UI component. Here's an example implementation within the MessageActivity:
```kotlin
class MessageActivity : ComponentActivity() {

    private val cid by lazy { intent.getStringExtra(KEY_CHANNEL_ID)!! }
    private val messageViewModel: MessageViewModel by viewModels()

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        messageViewModel.subscribeEvents(cid = cid)

        val viewModelFactory = MessagesViewModelFactory(
            context = this,
            channelId = cid,
            messageLimit = 30
        )

        setContent {
            ChatTheme {
                val isAiStarted by messageViewModel.isAiStarted.collectAsStateWithLifecycle()
                val typingState by messageViewModel.typingState.collectAsStateWithLifecycle()

                Box(modifier = Modifier.fillMaxSize()) {
                    AiMessagesScreen(
                        isAiStarted = isAiStarted,
                        viewModelFactory = viewModelFactory,
                        onStartAiAssistant = { messageViewModel.startAiAssistant(cid = cid) },
                        onStopAiAssistant = { messageViewModel.stopAiAssistant(cid = cid) },
                        onBackPressed = { finish() },
                        typingState = typingState
                    )
                }
            }
        }
    }

    companion object {
        private const val KEY_CHANNEL_ID = "channelId"

        fun getIntent(context: Context, channelId: String): Intent {
            return Intent(context, MessageActivity::class.java).apply {
                putExtra(KEY_CHANNEL_ID, channelId)
            }
        }
    }
}
```
After implementing the MessageActivity, make sure you've declared it in the AndroidManifest.xml file as shown below:
```xml
<activity
    android:name=".ui.screen.messages.MessageActivity"
    android:windowSoftInputMode="adjustResize" />
```
Finally, create the MessageViewModel, which manages starting and stopping the AI agent while also subscribing to typing state events from the Stream server. You can implement this functionality using the following code:
```kotlin
class MessageViewModel : ViewModel() {

    private val chatClient by lazy { ChatClient.instance() }

    private val _isAiStarted: MutableStateFlow<Boolean> = MutableStateFlow(false)
    val isAiStarted: StateFlow<Boolean> = _isAiStarted

    private val _typingState: MutableStateFlow<TypingState> = MutableStateFlow(TypingState.Nothing)
    val typingState: StateFlow<TypingState> = _typingState

    fun subscribeEvents(cid: String) {
        chatClient.channel(cid).subscribeFor(
            AIIndicatorUpdatedEvent::class.java,
            AIIndicatorClearEvent::class.java,
            AIIndicatorStopEvent::class.java
        ) { event ->
            if (event is AIIndicatorUpdatedEvent) {
                _typingState.value = event.aiState.toTypingState(event.messageId)
            } else if (event is AIIndicatorClearEvent) {
                _typingState.value = TypingState.Clear
            }
        }
    }

    fun startAiAssistant(cid: String) {
        val (_, id) = cid.cidToTypeAndId()
        viewModelScope.launch {
            _isAiStarted.value = true
            NetworkModule.aiService.startAiAgent(request = AiAgentRequest(id))
        }
    }

    fun stopAiAssistant(cid: String) {
        val (_, id) = cid.cidToTypeAndId()
        viewModelScope.launch {
            _isAiStarted.value = false
            NetworkModule.aiService.stopAiAgent(request = AiAgentRequest(id))
        }
    }
}
```
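One detail worth handling: subscribeFor returns a Disposable, so it's good practice to keep a reference to it and dispose of it when the ViewModel is cleared; otherwise the event listener can outlive the screen. A minimal sketch of that cleanup, reusing the MessageViewModel above:

```kotlin
class MessageViewModel : ViewModel() {

    private val chatClient by lazy { ChatClient.instance() }
    private var eventsDisposable: Disposable? = null

    // ...same state flows and start/stop functions as above...

    fun subscribeEvents(cid: String) {
        // Keep the subscription handle so it can be cancelled later.
        eventsDisposable = chatClient.channel(cid).subscribeFor(
            AIIndicatorUpdatedEvent::class.java,
            AIIndicatorClearEvent::class.java,
            AIIndicatorStopEvent::class.java
        ) { event ->
            // ...handle the events exactly as shown above...
        }
    }

    override fun onCleared() {
        // Stop listening for channel events when this ViewModel is destroyed.
        eventsDisposable?.dispose()
        super.onCleared()
    }
}
```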
Everything is now set up. Build and run the project to browse channels and chat with the AI assistant.
Conclusion
This tutorial has guided you through building a feature-rich in-app chat experience using Android Jetpack Compose, complete with a seamlessly integrated AI assistant bot. The app leverages Stream's global edge network for optimal performance and scalability while supporting advanced AI functionalities. You can find the entire source code to add an assistant to your app on GitHub.
For more details about our AI capabilities, visit the AI landing page. To explore additional customization options for your chat apps, check out our comprehensive Android documentation.
Stream offers a free development plan, and for hobby projects or small apps, we provide an extended free maker plan. For more information, take a look at the available pricing tiers.