
Android Video Calling Tutorial

The following tutorial shows you how to quickly build a Video Calling app leveraging Stream's Video API and the Stream Video Android components. The underlying API is very flexible and allows you to build nearly any type of video experience.


This tutorial teaches you how to build a Zoom/WhatsApp-style video calling app.

  • Calls run on Stream's global edge network for optimal latency & reliability.
  • Permissions give you fine-grained control over who can do what.
  • Video quality and codecs are automatically optimized.
  • Powered by Stream's Video Calling API.
  • UI components are fully customizable, as demonstrated in the Android Video Cookbook.

Step 1 - Create a New Project in Android Studio

  1. Create a New Project.
  2. Select Phone & Tablet -> Empty Activity.
  3. Name your project VideoCall.

⚠️ Note: This tutorial's sample project uses Android Studio Ladybug. The setup steps can vary slightly across Android Studio versions. We recommend using Android Studio Ladybug or newer.

Step 2 - Install the SDK & Setup the Client

The Stream Video SDK has two main artifacts:

  • Core Client: io.getstream:stream-video-android-core: Includes only the core part of the SDK.
  • Compose UI Components: io.getstream:stream-video-android-ui-compose: Includes the core + Compose UI components.

For this tutorial, we'll use the Compose UI Components.

Add the Video Compose SDK dependency to the app/build.gradle.kts file. If you're new to Android, note that there are two build.gradle.kts files; open the one located in the app folder.

kotlin
dependencies {
    // Stream Video Compose SDK
    implementation("io.getstream:stream-video-android-ui-compose:<latest_version>")
    // ...
}

⚠️ Replace <latest_version> with the latest release version, which you can find on the Releases page.

⚠️ Make sure compileSdk or compileSdkVersion (if you're using an older syntax) is set to 35 or newer in your app/build.gradle.kts file.

kotlin
android {
    // ...
    compileSdk = 35
    // ...
}

⚠️ Add the INTERNET permission in the AndroidManifest.xml file, before the application tag:

xml
<uses-permission android:name="android.permission.INTERNET" />

⚠️ If you get Compose-related errors when building your project, check that your Kotlin and Compose compiler versions are compatible.

⚠️ Make sure you sync the project after doing these changes. Click on the Sync Now button above the file contents.

Step 3 - Create & Join a call

To keep this tutorial short and easy to understand, we'll place all the code in MainActivity.kt. For a production app, you'd want to initialize the client in your Application class or DI module and use a View Model.

Open MainActivity.kt and replace the MainActivity class with the code below. You can delete the other functions that Android Studio created.

Also, make sure your file contains the import statements used throughout this tutorial. Android Studio can add the missing imports automatically (Alt+Enter on an unresolved reference).

kotlin
class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)

        val apiKey = "REPLACE_WITH_API_KEY"
        val userToken = "REPLACE_WITH_TOKEN"
        val userId = "REPLACE_WITH_USER_ID"
        val callId = "REPLACE_WITH_CALL_ID"

        // Create a user.
        val user = User(
            id = userId, // any string
            name = "Tutorial", // name and image are used in the UI
            image = "https://bit.ly/2TIt8NR",
        )

        // Initialize StreamVideo. For a production app, we recommend adding
        // the client to your Application class or DI module.
        val client = StreamVideoBuilder(
            context = applicationContext,
            apiKey = apiKey,
            geo = GEO.GlobalEdgeNetwork,
            user = user,
            token = userToken,
        ).build()

        setContent {
            // Request permissions and join a call of type `default` with the given id.
            val call = client.call(type = "default", id = callId)
            LaunchCallPermissions(
                call = call,
                onAllPermissionsGranted = {
                    // All permissions are granted, so we can join the call.
                    val result = call.join(create = true)
                    result.onError {
                        Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
                    }
                }
            )

            // Apply VideoTheme.
            VideoTheme {
                // Define required properties.
                val participants by call.state.participants.collectAsState()
                val connection by call.state.connection.collectAsState()

                // Render local and remote videos.
                Box(
                    contentAlignment = Alignment.Center,
                    modifier = Modifier.fillMaxSize()
                ) {
                    if (connection != RealtimeConnection.Connected) {
                        Text("Loading...", fontSize = 30.sp)
                    } else {
                        Text("Call ${call.id} has ${participants.size} participants", fontSize = 30.sp)
                    }
                }
            }
        }
    }
}

To run this sample code, we need a valid user token. In a production app, the token is typically generated by your server-side API: when a user logs in to your app, you return a user token that grants them access to the call. To make this tutorial easier to follow, we've generated a user token for you; replace the placeholder values in the snippet with those credentials.
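For reference, Stream user tokens are JWTs signed with your API secret. On your backend you would normally use one of Stream's server-side SDKs, but as a rough illustration, an HS256 token carrying a `user_id` claim can be produced with the JVM standard library alone. The claim name and algorithm here are assumptions based on Stream's token format; this is a sketch, not the official implementation, and your API secret must never ship inside the app:

```kotlin
import java.util.Base64
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec

// Sketch of server-side token generation (hypothetical helper, HS256 JWT).
fun generateUserToken(userId: String, apiSecret: String): String {
    val encoder = Base64.getUrlEncoder().withoutPadding()
    // JWT header and payload as base64url-encoded JSON.
    val header = encoder.encodeToString("""{"alg":"HS256","typ":"JWT"}""".toByteArray())
    val payload = encoder.encodeToString("""{"user_id":"$userId"}""".toByteArray())
    val signingInput = "$header.$payload"
    // Sign header.payload with HMAC-SHA256 using the API secret.
    val mac = Mac.getInstance("HmacSHA256")
    mac.init(SecretKeySpec(apiSecret.toByteArray(), "HmacSHA256"))
    val signature = encoder.encodeToString(mac.doFinal(signingInput.toByteArray()))
    return "$signingInput.$signature"
}
```

In practice, prefer Stream's official server SDKs, which handle token expiry and edge cases for you.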

When you run the sample app, it will connect successfully. The text will say, "Call <ID> has 1 participant" (yourself).

Let's review what we did in the code above.

Create a user First, we create a user instance. You typically sync these users via a server-side integration on your backend. Alternatively, you can also use guest or anonymous users.

kotlin
val user = User(
    id = userId, // any string
    name = "Tutorial", // name and image are used in the UI
    image = "https://bit.ly/2TIt8NR",
)

Initialize the Stream Video Client. Next, we initialize the video client by passing the API Key, user, and user token.

kotlin
val client = StreamVideoBuilder(
    context = applicationContext,
    apiKey = apiKey,
    geo = GEO.GlobalEdgeNetwork,
    user = user,
    token = userToken,
).build()

Create a Call. After the user and client are created, we create a call.

kotlin
val call = client.call("default", callId)

Request Runtime Permissions. Before joining the call, we request camera and microphone runtime permissions to capture video and audio.

kotlin
LaunchCallPermissions(
    call = call,
    onAllPermissionsGranted = {
        // ...
    }
)

Review the permissions docs to learn more about how you can easily request permissions.
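If you prefer to manage permissions yourself rather than using LaunchCallPermissions, a minimal sketch with the standard Compose activity-result API might look like the following. `RequestCallPermissions` is a hypothetical helper name, and this assumes the androidx.activity:activity-compose artifact is on your classpath:

```kotlin
import android.Manifest
import androidx.activity.compose.rememberLauncherForActivityResult
import androidx.activity.result.contract.ActivityResultContracts
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect

// Hypothetical helper: request camera & microphone permissions manually.
@Composable
fun RequestCallPermissions(onAllGranted: () -> Unit) {
    val launcher = rememberLauncherForActivityResult(
        ActivityResultContracts.RequestMultiplePermissions()
    ) { results ->
        // Proceed only when every requested permission was granted.
        if (results.values.all { it }) onAllGranted()
    }
    // Launch the system permission dialog once, on first composition.
    LaunchedEffect(Unit) {
        launcher.launch(
            arrayOf(Manifest.permission.CAMERA, Manifest.permission.RECORD_AUDIO)
        )
    }
}
```

LaunchCallPermissions does essentially this for you, derived from the call's capabilities, so the helper above is only worthwhile when you need custom rationale UI or denial handling.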

Join a Call. We join a call in the onAllPermissionsGranted block.

kotlin
LaunchCallPermissions(
    call = call,
    onAllPermissionsGranted = {
        // All permissions are granted, so we can join the call.
        val result = call.join(create = true)
        result.onError {
            Toast.makeText(applicationContext, it.message, Toast.LENGTH_LONG).show()
        }
    }
)

Video and audio connections will be set up when you use call.join().

Define the UI. Lastly, you can render the UI by observing call.state (participants and connection states).

kotlin
val participants by call.state.participants.collectAsState()
val connection by call.state.connection.collectAsState()

You'll find all relevant states for a call in call.state and call.state.participants. The documentation on Call state and Participant state explains this in further detail.

Step 4 - Joining from the Web

Let's join the call from your browser to make this a little more interactive.

For testing, you can join the call from Stream's web app using the same call ID.

On your Android device, you'll see the text update to 2 participants. Let's keep the browser tab open as you go through the tutorial.

Step 5 - Rendering Video

In this next step, we're going to render the local & remote participant video feeds.

In the MainActivity.kt file, replace the code inside VideoTheme with the example below:

kotlin
VideoTheme {
    val remoteParticipants by call.state.remoteParticipants.collectAsState()
    val remoteParticipant = remoteParticipants.firstOrNull()
    val me by call.state.me.collectAsState()
    val connection by call.state.connection.collectAsState()
    var parentSize: IntSize by remember { mutableStateOf(IntSize(0, 0)) }

    Box(
        contentAlignment = Alignment.Center,
        modifier = Modifier
            .fillMaxSize()
            .background(VideoTheme.colors.baseSenary)
            .onSizeChanged { parentSize = it }
    ) {
        if (remoteParticipant != null) {
            ParticipantVideo(
                modifier = Modifier.fillMaxSize(),
                call = call,
                participant = remoteParticipant
            )
        } else {
            if (connection != RealtimeConnection.Connected) {
                Text(
                    text = "waiting for a remote participant...",
                    fontSize = 30.sp,
                    color = VideoTheme.colors.basePrimary
                )
            } else {
                Text(
                    modifier = Modifier.padding(30.dp),
                    text = "Join call ${call.id} in your browser to see the video here",
                    fontSize = 30.sp,
                    color = VideoTheme.colors.basePrimary,
                    textAlign = TextAlign.Center
                )
            }
        }

        // Floating video UI for the local video participant.
        me?.let { localVideo ->
            FloatingParticipantVideo(
                modifier = Modifier.align(Alignment.TopEnd),
                call = call,
                participant = localVideo,
                parentBounds = parentSize
            )
        }
    }
}

When you run the app, you'll see your local video in a floating video element along with the video from your browser.

Let's review the changes we made.

ParticipantVideo renders a single participant based on its ParticipantState in a call. If the participant's video track is published, it renders the video; otherwise it falls back to a user avatar. If you want a lower-level component, you can use VideoRenderer instead.

kotlin
VideoRenderer(
    modifier = Modifier.weight(1f),
    call = call,
    video = remoteVideo
)

It only displays the video and doesn't add any other UI elements. The video is lazily loaded and only requested from the video infrastructure if you display it. So, if you have a video call with 200 participants and show only 10 of them, you'll only receive video for 10 participants. This is how software like Zoom and Google Meet make large calls work.

FloatingParticipantVideo renders a draggable display of your own video.

kotlin
FloatingParticipantVideo(
    modifier = Modifier.align(Alignment.TopEnd),
    call = call,
    participant = localVideo,
    parentBounds = parentSize
)

Step 6 - Render a Full Video Calling UI

The above example showed how to use the call state object and Compose to build a basic video UI. For a production version of calling you'd want a few more UI elements:

  • Indicators of when someone is speaking.
  • Quality of their network.
  • Layout support for >2 participants.
  • Labels for the participant names.
  • Call header and controls.

Stream ships with several Compose components to make this easy. You can customize the components by theming, arguments, and swapping parts. This is convenient if you want to quickly build a production-ready calling experience for your app. (And if you need more flexibility, many customers use the above low-level approach to build a UI from scratch.)

To render a complete calling UI, we'll leverage the CallContent component. This includes sensible defaults for a call header, video grid, call controls, picture-in-picture, and everything you need to build a video call screen.

Open MainActivity.kt, and update the code inside VideoTheme to use the CallContent. The code will be a lot smaller since all UI logic is handled in the CallContent:

kotlin
VideoTheme {
    CallContent(
        modifier = Modifier.fillMaxSize(),
        call = call,
        onBackPressed = { onBackPressed() },
    )
}


You'll see a more polished video UI when you run your app. It supports reactions, screen sharing, active speaker detection, network quality indicators, etc. The most commonly used UI components are:

  • VideoRenderer: For rendering video and automatically requesting video tracks when needed. Most of the Video components are built on top of this.
  • ParticipantVideo: The participant's video + some UI elements for network quality, reactions, speaking etc.
  • ParticipantsGrid: A grid of participant video elements.
  • FloatingParticipantVideo: A draggable version of the participant video. Typically used for your own video.
  • ControlActions: A set of buttons for controlling your call, such as changing audio and video states.
  • RingingCallContent: UI for displaying incoming and outgoing calls.

The full list of UI components is available in the docs.
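For instance, to render just the grid of participants without the full CallContent chrome, a minimal sketch using the ParticipantsGrid component listed above could look like this (the parameter names mirror the earlier examples; check the component docs for the exact signature):

```kotlin
VideoTheme {
    // Render every participant's video in an adaptive grid layout.
    ParticipantsGrid(
        modifier = Modifier.fillMaxSize(),
        call = call
    )
}
```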

Step 7 - Customizing the UI

You can customize the UI by:

  • Building your UI components (the most flexible, build anything).
  • Mixing and matching with Stream's UI Components (speeds up how quickly you can build common video UIs).
  • Theming (basic customization of colors, fonts, etc.).

The example below shows how to swap out the call controls for a custom implementation:

kotlin
VideoTheme {
    val isCameraEnabled by call.camera.isEnabled.collectAsState()
    val isMicrophoneEnabled by call.microphone.isEnabled.collectAsState()

    CallContent(
        modifier = Modifier.background(color = Color.White),
        call = call,
        onBackPressed = { onBackPressed() },
        controlsContent = {
            ControlActions(
                call = call,
                actions = listOf(
                    {
                        ToggleCameraAction(
                            modifier = Modifier.size(52.dp),
                            isCameraEnabled = isCameraEnabled,
                            onCallAction = { call.camera.setEnabled(it.isEnabled) }
                        )
                    },
                    {
                        ToggleMicrophoneAction(
                            modifier = Modifier.size(52.dp),
                            isMicrophoneEnabled = isMicrophoneEnabled,
                            onCallAction = { call.microphone.setEnabled(it.isEnabled) }
                        )
                    },
                    {
                        FlipCameraAction(
                            modifier = Modifier.size(52.dp),
                            onCallAction = { call.camera.flip() }
                        )
                    },
                )
            )
        }
    )
}

Stream's Video SDK provides fully polished UI components, allowing you to build a video call quickly and customize them as needed. As you've seen, you can implement a complete video call screen with the CallContent composable in Jetpack Compose. CallContent consists of three major parts:

  • appBarContent: Displays call information and additional actions at the top of the screen.
  • controlsContent: Renders the control actions users can trigger during a joined call.
  • videoContent: Renders the participant videos once the call is successfully connected.

Theming gives you control over the colors and fonts.

kotlin
val colors = StreamColors.defaultColors().copy(brandPrimary = Color.Black)
val dimens = StreamDimens.defaultDimens().copy(componentHeightM = 52.dp)
val typography = StreamTypography.defaultTypography(colors, dimens).copy(titleL = TextStyle())
val shapes = StreamShapes.defaultShapes(dimens).copy(button = CircleShape)

VideoTheme(
    colors = colors,
    dimens = dimens,
    typography = typography,
    shapes = shapes,
) {
    // ..
}

Recap

To recap what we've learned in the Android video calling tutorial:

  • You set up a call: (val call = client.call("default", "123")).
  • The call type ("default" in the above case) controls which features are enabled and how permissions are set.
  • When you join a call, real-time communication is set up for audio & video calling: (call.join())
  • StateFlow objects in call.state and call.state.participants make it easy to build your UI.
  • VideoRenderer is the low-level component that renders video.

We've used Stream's Video Calling API in this tutorial, which means calls run on a global edge network of video servers. Being closer to your users improves the latency and reliability of calls. The Compose SDK enables you to build in-app video calling, audio rooms, and live streaming in days.

We hope you've enjoyed this tutorial. Please feel free to contact us if you have any suggestions or questions.

Samples

If you're interested in learning about more use cases for the Video SDK, with full code, check out the GitHub repositories below:

  • Android Video Chat: Android Video Chat demonstrates a real-time video chat application by utilizing Stream Chat & Video SDKs.
  • Android Video Samples: Provides a collection of samples that utilize modern Android tech stacks and Stream Video SDK for Kotlin and Compose.
  • WhatsApp Clone Compose: The WhatsApp clone project demonstrates modern Android development built with Jetpack Compose and Stream Chat/Video SDK for Compose.
  • Twitch Clone Compose: The Twitch clone project demonstrates modern Android development built with Jetpack Compose and Stream Chat/Video SDK for Compose.
  • Meeting Room Compose: A real-time meeting room app built with Jetpack Compose to demonstrate video communications.
  • Audio Only Demo: A sample implementation of an audio-only caller application with Android Video SDK.

Final Thoughts

In this video app tutorial, we built a fully functioning Android video calling app with our Android SDK component library. We also showed how easy it is to customize the behavior and style of the Android video components with minimal code changes.

Both the video SDK for Android and the API have plenty more features available to support more advanced use cases.

Give us feedback!

Did you find this tutorial helpful in getting you up and running with your project? Either good or bad, we're looking for your honest feedback so we can improve.

Start coding for free

No credit card required.
If you're interested in a custom plan or have any questions, please contact us.