This tutorial teaches you how to build a Zoom/WhatsApp-style video calling app.
- Calls run on Stream's global edge network for optimal latency & reliability.
- Permissions give you fine-grained control over who can do what.
- Video quality and codecs are automatically optimized.
- Powered by Stream's Video Calling API.
- UI components are fully customizable, as demonstrated in the Android Video Cookbook.
Step 1 - Create a New Project in Android Studio
- Create a New Project.
- Select Phone & Tablet -> Empty Activity.
- Name your project VideoCall.
⚠️ Note: This tutorial's sample project uses Android Studio Ladybug. The setup steps can vary slightly across Android Studio versions. We recommend using Android Studio Ladybug or newer.
Step 2 - Install the SDK & Setup the Client
The Stream Video SDK has two main artifacts:
- Core Client: `io.getstream:stream-video-android-core` includes only the core part of the SDK.
- Compose UI Components: `io.getstream:stream-video-android-ui-compose` includes the core plus the Compose UI components.
For this tutorial, we'll use the Compose UI Components.
Add the Video Compose SDK dependency to the `app/build.gradle.kts` file. If you're new to Android, note that there are two `build.gradle.kts` files; you want to open the one located in the `app` folder.
```kotlin
dependencies {
    // Stream Video Compose SDK
    implementation("io.getstream:stream-video-android-ui-compose:<latest_version>")
    // ...
}
```
⚠️ Replace `<latest_version>` with the latest SDK version, which you can find on the Releases page.
⚠️ Make sure `compileSdk` (or `compileSdkVersion`, if you're using the older syntax) is set to 35 or newer in your `app/build.gradle.kts` file.
```kotlin
android {
    // ...
    compileSdk = 35
    // ...
}
```
⚠️ Add the `INTERNET` permission in the `AndroidManifest.xml` file, before the `application` tag:
```xml
<uses-permission android:name="android.permission.INTERNET" />
```
⚠️ If you get Compose-related errors when building your project, follow the steps below.
- Add the following dependencies in the `app/build.gradle.kts` file, if needed:

```kotlin
dependencies {
    // ...
    // Jetpack Compose (skip if already added by Android Studio)
    implementation(platform("androidx.compose:compose-bom:2023.10.01"))
    implementation("androidx.activity:activity-compose:1.9.0")
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.ui:ui-tooling")
    implementation("androidx.compose.runtime:runtime")
    implementation("androidx.compose.foundation:foundation")
    implementation("androidx.compose.material:material")
    // ...
}
```
- Add the following lines to the `app/build.gradle.kts` file, if needed:

```kotlin
android {
    // ...
    buildFeatures {
        compose = true
    }
    composeOptions {
        kotlinCompilerExtensionVersion = "1.4.3" // or newer
    }
    // ...
}
```
- Make sure you're using a Kotlin version that's compatible with the Compose compiler version specified by `kotlinCompilerExtensionVersion` above. Check this page to see which Kotlin version is compatible with your Compose compiler version. If needed, change the Kotlin version in your project's `build.gradle.kts` file:

```kotlin
plugins {
    // ...
    id("org.jetbrains.kotlin.android") version "1.8.10" apply false
    // ...
}
```
⚠️ Make sure you sync the project after making these changes. Click the Sync Now button above the file contents.
Step 3 - Create & Join a Call
To keep this tutorial short and easy to understand, we'll place all the code in `MainActivity.kt`. For a production app, you'd want to initialize the client in your `Application` class or DI module and use a ViewModel.
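For reference, here's a minimal sketch of what that production setup could look like. The `VideoCallApp` class name is hypothetical; `StreamVideoBuilder` builds a client singleton that can later be retrieved elsewhere (for example, via `StreamVideo.instance()`):

```kotlin
import android.app.Application
import io.getstream.video.android.core.GEO
import io.getstream.video.android.core.StreamVideoBuilder
import io.getstream.video.android.model.User

// Hypothetical Application class; remember to register it in AndroidManifest.xml
// via android:name=".VideoCallApp".
class VideoCallApp : Application() {
    override fun onCreate() {
        super.onCreate()
        // Build the client once per process instead of rebuilding it per Activity.
        StreamVideoBuilder(
            context = this,
            apiKey = "REPLACE_WITH_API_KEY",
            geo = GEO.GlobalEdgeNetwork,
            user = User(id = "REPLACE_WITH_USER_ID", name = "Tutorial"),
            token = "REPLACE_WITH_TOKEN",
        ).build()
    }
}
```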
Open `MainActivity.kt` and replace the `MainActivity` class with the code below. You can delete the other functions that Android Studio created. The first snippet below contains the `import` statements used throughout this tutorial; to follow along easily, replace the existing `import` statements in your file.
```kotlin
import android.os.Bundle
import android.widget.Toast
import androidx.activity.ComponentActivity
import androidx.activity.compose.setContent
import androidx.compose.foundation.background
import androidx.compose.foundation.layout.Box
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.foundation.layout.padding
import androidx.compose.foundation.layout.size
import androidx.compose.foundation.shape.CircleShape
import androidx.compose.material3.Text
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.graphics.Color
import androidx.compose.ui.layout.onSizeChanged
import androidx.compose.ui.text.TextStyle
import androidx.compose.ui.text.style.TextAlign
import androidx.compose.ui.unit.IntSize
import androidx.compose.ui.unit.dp
import androidx.compose.ui.unit.sp
import androidx.lifecycle.lifecycleScope
import io.getstream.video.android.compose.permission.LaunchCallPermissions
import io.getstream.video.android.compose.theme.StreamColors
import io.getstream.video.android.compose.theme.StreamDimens
import io.getstream.video.android.compose.theme.StreamShapes
import io.getstream.video.android.compose.theme.StreamTypography
import io.getstream.video.android.compose.theme.VideoTheme
import io.getstream.video.android.compose.theme.VideoTheme.colors
import io.getstream.video.android.compose.theme.VideoTheme.dimens
import io.getstream.video.android.compose.ui.components.call.activecall.CallContent
import io.getstream.video.android.compose.ui.components.call.controls.ControlActions
import io.getstream.video.android.compose.ui.components.call.controls.actions.FlipCameraAction
import io.getstream.video.android.compose.ui.components.call.controls.actions.ToggleCameraAction
import io.getstream.video.android.compose.ui.components.call.controls.actions.ToggleMicrophoneAction
import io.getstream.video.android.compose.ui.components.call.renderer.FloatingParticipantVideo
import io.getstream.video.android.compose.ui.components.call.renderer.ParticipantVideo
import io.getstream.video.android.compose.ui.components.video.VideoRenderer
import io.getstream.video.android.core.GEO
import io.getstream.video.android.core.RealtimeConnection
import io.getstream.video.android.core.StreamVideoBuilder
import io.getstream.video.android.model.User
import kotlinx.coroutines.launch
```
```kotlin
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val apiKey = "REPLACE_WITH_API_KEY"
        val userToken = "REPLACE_WITH_TOKEN"
        val userId = "REPLACE_WITH_USER_ID"
        val callId = "REPLACE_WITH_CALL_ID"

        // Create a user.
        val user = User(
            id = userId, // any string
            name = "Tutorial", // name and image are used in the UI
            image = "https://bit.ly/2TIt8NR",
        )

        // Initialize StreamVideo. For a production app, we recommend adding
        // the client to your Application class or DI module.
        val client = StreamVideoBuilder(
            context = applicationContext,
            apiKey = apiKey,
            geo = GEO.GlobalEdgeNetwork,
            user = user,
            token = userToken,
        ).build()

        setContent {
            // Request permissions and join a call of type `default` with the given id.
            val call = client.call(type = "default", id = callId)
            LaunchCallPermissions(
                call = call,
                onAllPermissionsGranted = {
                    // All permissions are granted, so we can join the call.
                    val result = call.join(create = true)
                    result.onError {
                        Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
                    }
                }
            )

            // Apply VideoTheme.
            VideoTheme {
                // Define the required properties.
                val participants by call.state.participants.collectAsState()
                val connection by call.state.connection.collectAsState()

                // Render local and remote videos.
                Box(
                    contentAlignment = Alignment.Center,
                    modifier = Modifier.fillMaxSize()
                ) {
                    if (connection != RealtimeConnection.Connected) {
                        Text("Loading...", fontSize = 30.sp)
                    } else {
                        Text("Call ${call.id} has ${participants.size} participants", fontSize = 30.sp)
                    }
                }
            }
        }
    }
}
```
To run this sample code, you need a valid user token. For a production app, the token is typically generated by your server-side API: when a user logs in to your app, your backend returns a user token that gives them access to the call. To make this tutorial easier to follow, we've generated a user token for you, as you can see in the code snippet.
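If you're curious what that server-side step involves: Stream user tokens are JWTs signed with your API secret. In practice you'd use one of Stream's server-side SDKs, but here's a minimal hand-rolled sketch, assuming the standard HS256 JWT format with a `user_id` claim (check Stream's token docs for the exact claims your setup needs). Never ship the API secret inside your app:

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Server-side only: signs a JWT for the given user with your Stream API secret.
fun generateUserToken(apiSecret: String, userId: String): String {
    val enc = Base64.getUrlEncoder().withoutPadding()
    val header = enc.encodeToString("""{"alg":"HS256","typ":"JWT"}""".toByteArray())
    val payload = enc.encodeToString("""{"user_id":"$userId"}""".toByteArray())
    val mac = Mac.getInstance("HmacSHA256").apply {
        init(SecretKeySpec(apiSecret.toByteArray(), "HmacSHA256"))
    }
    val signature = enc.encodeToString(mac.doFinal("$header.$payload".toByteArray()))
    return "$header.$payload.$signature"
}
```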
When you run the sample app, it will connect successfully. The text will say, "Call <ID> has 1 participant" (yourself).
Let's review what we did in the code above.
Create a user. First, we create a user instance. You typically sync these users via a server-side integration on your backend. Alternatively, you can also use guest or anonymous users.
```kotlin
val user = User(
    id = userId, // any string
    name = "Tutorial", // name and image are used in the UI
    image = "https://bit.ly/2TIt8NR",
)
```
Initialize the Stream Video Client. Next, we initialize the video client by passing the API Key, user, and user token.
```kotlin
val client = StreamVideoBuilder(
    context = applicationContext,
    apiKey = apiKey,
    geo = GEO.GlobalEdgeNetwork,
    user = user,
    token = userToken,
).build()
```
Create a Call. After the user and client are created, we create a call.
```kotlin
val call = client.call("default", callId)
```
Request Runtime Permissions. Before joining the call, we request camera and microphone runtime permissions to capture video and audio.
```kotlin
LaunchCallPermissions(
    call = call,
    onAllPermissionsGranted = {
        // ...
    }
)
```
Review the permissions docs to learn more about how you can easily request permissions.
Join a Call. We join the call in the `onAllPermissionsGranted` block.
```kotlin
LaunchCallPermissions(
    call = call,
    onAllPermissionsGranted = {
        // All permissions are granted so we can join the call.
        val result = call.join(create = true)
        result.onError {
            Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
        }
    }
)
```
Video and audio connections are set up when you call `call.join()`.
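The connection stays open until you leave the call. As a minimal cleanup sketch (assuming you keep a `call` reference as an Activity property rather than a local inside `setContent`), the SDK's `call.leave()` tears those connections down:

```kotlin
override fun onDestroy() {
    // Leave the call so camera, microphone, and network resources are released.
    call.leave()
    super.onDestroy()
}
```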
Define the UI. Lastly, you can render the UI by observing `call.state` (the participants and connection states).
```kotlin
val participants by call.state.participants.collectAsState()
val connection by call.state.connection.collectAsState()
```
You'll find all relevant states for a call in `call.state` and `call.state.participants`.
The documentation on Call state and Participant state explains this in further detail.
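For instance, `call.state.connection` can drive a simple status label. A minimal sketch; `RealtimeConnection.Connected` appears in this tutorial, while the other subtypes referenced here are assumptions to verify against the Call state docs:

```kotlin
import androidx.compose.material3.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import io.getstream.video.android.core.Call
import io.getstream.video.android.core.RealtimeConnection

@Composable
fun ConnectionStatus(call: Call) {
    val connection by call.state.connection.collectAsState()
    // Map the connection state to a short, user-facing label.
    val label = when (connection) {
        is RealtimeConnection.Connected -> "Connected"
        is RealtimeConnection.Failed -> "Connection failed"
        else -> "Connecting..."
    }
    Text(text = label)
}
```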
Step 4 - Joining from the Web
Let's join the call from your browser to make this a little more interactive.
On your Android device, you'll see the text update to 2 participants. Let's keep the browser tab open as you go through the tutorial.
Step 5 - Rendering Video
In this next step, we're going to render the local & remote participant video feeds.
In the `MainActivity.kt` file, replace the code inside `VideoTheme` with the example below:
```kotlin
VideoTheme {
    val remoteParticipants by call.state.remoteParticipants.collectAsState()
    val remoteParticipant = remoteParticipants.firstOrNull()
    val me by call.state.me.collectAsState()
    val connection by call.state.connection.collectAsState()
    var parentSize: IntSize by remember { mutableStateOf(IntSize(0, 0)) }

    Box(
        contentAlignment = Alignment.Center,
        modifier = Modifier
            .fillMaxSize()
            .background(VideoTheme.colors.baseSenary)
            .onSizeChanged { parentSize = it }
    ) {
        if (remoteParticipant != null) {
            ParticipantVideo(
                modifier = Modifier.fillMaxSize(),
                call = call,
                participant = remoteParticipant
            )
        } else {
            if (connection != RealtimeConnection.Connected) {
                Text(
                    text = "waiting for a remote participant...",
                    fontSize = 30.sp,
                    color = VideoTheme.colors.basePrimary
                )
            } else {
                Text(
                    modifier = Modifier.padding(30.dp),
                    text = "Join call ${call.id} in your browser to see the video here",
                    fontSize = 30.sp,
                    color = VideoTheme.colors.basePrimary,
                    textAlign = TextAlign.Center
                )
            }
        }

        // floating video UI for the local video participant
        me?.let { localVideo ->
            FloatingParticipantVideo(
                modifier = Modifier.align(Alignment.TopEnd),
                call = call,
                participant = localVideo,
                parentBounds = parentSize
            )
        }
    }
}
```
When you run the app, you'll see your local video in a floating video element, along with the video from your browser.
Let's review the changes we made.
ParticipantVideo renders a participant based on their ParticipantState in a call. If the participant's track is non-null and correctly published, it renders the participant's video; otherwise, it shows a user avatar. If you want a lower-level component, you can use VideoRenderer instead:
```kotlin
VideoRenderer(
    modifier = Modifier.weight(1f),
    call = call,
    video = remoteVideo
)
```
It only displays the video and doesn't add any other UI elements. The video is lazily loaded and only requested from the video infrastructure if you display it. So, if you have a video call with 200 participants and show only 10 of them, you'll only receive video for 10 participants. This is how software like Zoom and Google Meet make large calls work.
FloatingParticipantVideo renders a draggable display of your own video.
```kotlin
FloatingParticipantVideo(
    modifier = Modifier.align(Alignment.TopEnd),
    call = call,
    participant = localVideo,
    parentBounds = parentSize
)
```
Step 6 - Render a Full Video Calling UI
The above example showed how to use the call state object and Compose to build a basic video UI. For a production version of calling, you'd want a few more UI elements:
- Indicators showing when someone is speaking.
- Network quality indicators.
- Layout support for more than two participants.
- Labels for participant names.
- A call header and controls.
Stream ships with several Compose components to make this easy. You can customize the components by theming, arguments, and swapping parts. This is convenient if you want to quickly build a production-ready calling experience for your app. (And if you need more flexibility, many customers use the above low-level approach to build a UI from scratch.)
To render a complete calling UI, we'll leverage the CallContent component. This includes sensible defaults for a call header, video grid, call controls, picture-in-picture, and everything you need to build a video call screen.
Open `MainActivity.kt` and update the code inside `VideoTheme` to use `CallContent`. The code gets a lot smaller, since all the UI logic is handled by `CallContent`:
```kotlin
VideoTheme {
    CallContent(
        modifier = Modifier.fillMaxSize(),
        call = call,
        onBackPressed = { onBackPressed() },
    )
}
```
When you run your app, you'll see a more polished video UI that supports reactions, screen sharing, active speaker detection, network quality indicators, and more. The most commonly used UI components are:
- VideoRenderer: For rendering video and automatically requesting video tracks when needed. Most of the Video components are built on top of this.
- ParticipantVideo: The participant's video, plus UI elements for network quality, reactions, speaking, etc.
- ParticipantsGrid: A grid of participant video elements (see the sketch after this list).
- FloatingParticipantVideo: A draggable version of the participant video. Typically used for your own video.
- ControlActions: A set of buttons for controlling your call, such as changing audio and video states.
- RingingCallContent: UI for displaying incoming and outgoing calls.
The full list of UI components is available in the docs.
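As a quick illustration of the grid component named above, here's a hedged sketch; the exact parameter list of `ParticipantsGrid` may differ, so treat this as an assumption and check the component docs:

```kotlin
// Inside VideoTheme: renders all participants in an adaptive grid layout.
ParticipantsGrid(
    modifier = Modifier.fillMaxSize(),
    call = call,
)
```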
Step 7 - Customizing the UI
You can customize the UI by:
- Building your own UI components (the most flexible option; build anything).
- Mixing and matching with Stream's UI components (speeds up building common video UIs).
- Theming (basic customization of colors, fonts, etc.).
The example below shows how to swap out the call controls for a custom implementation:
```kotlin
VideoTheme {
    val isCameraEnabled by call.camera.isEnabled.collectAsState()
    val isMicrophoneEnabled by call.microphone.isEnabled.collectAsState()

    CallContent(
        modifier = Modifier.background(color = Color.White),
        call = call,
        onBackPressed = { onBackPressed() },
        controlsContent = {
            ControlActions(
                call = call,
                actions = listOf(
                    {
                        ToggleCameraAction(
                            modifier = Modifier.size(52.dp),
                            isCameraEnabled = isCameraEnabled,
                            onCallAction = { call.camera.setEnabled(it.isEnabled) }
                        )
                    },
                    {
                        ToggleMicrophoneAction(
                            modifier = Modifier.size(52.dp),
                            isMicrophoneEnabled = isMicrophoneEnabled,
                            onCallAction = { call.microphone.setEnabled(it.isEnabled) }
                        )
                    },
                    {
                        FlipCameraAction(
                            modifier = Modifier.size(52.dp),
                            onCallAction = { call.camera.flip() }
                        )
                    },
                )
            )
        }
    )
}
```
Stream's Video SDK provides fully polished UI components, allowing you to build a video call quickly and customize it as needed. As you've seen, you can implement a complete video call screen with the `CallContent` composable, which consists of three major parts:
- `appBarContent`: renders the header that shows call information and additional actions.
- `controlsContent`: renders the controls users can trigger to manage a joined call.
- `videoContent`: renders the participant videos once you've successfully connected to a call.
Theming gives you control over the colors and fonts.
```kotlin
val colors = StreamColors.defaultColors().copy(brandPrimary = Color.Black)
val dimens = StreamDimens.defaultDimens().copy(componentHeightM = 52.dp)
val typography = StreamTypography.defaultTypography(colors, dimens).copy(titleL = TextStyle())
val shapes = StreamShapes.defaultShapes(dimens).copy(button = CircleShape)

VideoTheme(
    colors = colors,
    dimens = dimens,
    typography = typography,
    shapes = shapes,
) {
    // ..
}
```
Recap
To recap what we've learned in the Android video calling tutorial:
- You set up a call with `val call = client.call("default", "123")`.
- The call type (`"default"` in the above case) controls which features are enabled and how permissions are set up.
- When you join a call (`call.join()`), real-time communication is set up for audio and video.
- `StateFlow` objects in `call.state` and `call.state.participants` make it easy to build your UI.
- `VideoRenderer` is the low-level component that renders video.
We've used Stream's Video Calling API in this tutorial, which means calls run on a global edge network of video servers. Being closer to your users improves the latency and reliability of calls. The Compose SDK enables you to build in-app video calling, audio rooms, and live streaming in days.
We hope you've enjoyed this tutorial. Please feel free to contact us if you have any suggestions or questions.
Samples
If you want to explore more Video SDK use cases with code, check out the GitHub repositories below:
- Android Video Chat: Android Video Chat demonstrates a real-time video chat application by utilizing Stream Chat & Video SDKs.
- Android Video Samples: Provides a collection of samples that utilize modern Android tech stacks and Stream Video SDK for Kotlin and Compose.
- WhatsApp Clone Compose: The WhatsApp clone project demonstrates modern Android development built with Jetpack Compose and Stream Chat/Video SDK for Compose.
- Twitch Clone Compose: The Twitch clone project demonstrates modern Android development built with Jetpack Compose and Stream Chat/Video SDK for Compose.
- Meeting Room Compose: A real-time meeting room app built with Jetpack Compose to demonstrate video communications.
- Audio Only Demo: A sample implementation of an audio-only caller application with Android Video SDK.
Final Thoughts
In this video app tutorial, we built a fully functioning Android video calling app with our Android SDK component library. We also showed how easy it is to customize the behavior and style of the Android video app components with minimal code changes.
Both the video SDK for Android and the API have plenty more features available to support more advanced use cases.