Quickstart
This guide will help you quickly integrate the Stream Video Android SDK into your application using our library of built-in Jetpack Compose UI components, centered on the high-level CallContent component with Picture-in-Picture (PIP) support.
Prerequisites
- Android SDK 24+ (Android 7.0)
- If you want to use Stream’s built-in UI component library, ensure that Jetpack Compose is already set up in your app (see the sketch below)
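If Jetpack Compose is not yet set up, a minimal app-level build.gradle.kts configuration looks roughly like the following. This is a hedged sketch: the artifact versions are placeholders, and projects on Kotlin 2.0+ configure the Compose compiler via the dedicated Gradle plugin instead.
android {
    buildFeatures {
        compose = true // enable Jetpack Compose
    }
}

dependencies {
    // The Compose BOM keeps Compose artifact versions aligned; x.x.x is a placeholder
    implementation(platform("androidx.compose:compose-bom:x.x.x"))
    implementation("androidx.compose.ui:ui")
    implementation("androidx.compose.material:material")
    implementation("androidx.activity:activity-compose:x.x.x")
}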
Add Stream library dependencies to your app-level build.gradle.kts
If you want to use Stream’s built-in UI component library, add the following dependency to your app:
dependencies {
    // Stream Video Android Compose UI
    implementation("io.getstream:stream-video-android-ui-compose:x.x.x")
}
If you want to build your own UI and only need our core SDK, add this dependency instead:
dependencies {
    // Stream Video Android Core
    implementation("io.getstream:stream-video-android-core:x.x.x")
}
Replace x.x.x with the latest version of our library, which can be found here.
Instantiating the SDK and joining a call
Step 1: Get your credentials
The first step is to get the credentials with which you can initialize the SDK and join a call. For production use, make sure to get these credentials from the dashboard, but for testing you can use the values provided here.
- apiKey: Your Stream API key from the dashboard/playground.
- userToken: JWT token for authentication. For production, it is recommended to generate this on your backend.
- userId: A unique string identifier for the user joining the call.
- callId: An optional argument. It’s convenient to specify an ID if the call is associated with an object in your database. For example, if you’re building a ride-sharing app like Uber, you could use the ride ID as the call ID to easily align with your internal systems.
// Step 1: Replace with your Stream credentials
// Get these from: https://getstream.io/video/docs/android/playground/demo-credentials/
val apiKey = "YOUR_API_KEY"
val userToken = "YOUR_USER_TOKEN"
val userId = "YOUR_USER_ID"
val callId = "YOUR_CALL_ID"
Step 2: Creating a User
Now that you have these credentials, you need to create an instance of User like the following, where role corresponds to one of the roles defined for your app in the dashboard:
// Step 2: Create a user
val user = User(
    id = userId,
    name = "Your Name", // Display name in the UI
    image = "https://your-image-url.com/avatar.jpg", // Optional avatar
    role = "admin", // User role for the call
)
Step 3: Creating a StreamVideo client
The next step is to create the client using StreamVideoBuilder, which initializes the SDK.
For this, you will need the following:
- context: Typically your application context should be passed here.
- geo: Choose the closest region for better performance:
  - GEO.GlobalEdgeNetwork - Global edge network (recommended)
  - GEO.US - US region
  - GEO.EU - European region
  - GEO.Asia - Asian region
- apiKey: Your Stream API key from the dashboard.
- user: User information for the call.
- userToken: JWT token for authentication.
// Step 3: Initialize StreamVideo client
val client = StreamVideoBuilder(
    context = applicationContext,
    apiKey = apiKey,
    geo = GEO.GlobalEdgeNetwork, // Choose appropriate geo region
    user = user,
    token = userToken,
).build()
Step 4: Getting the Call instance and joining it
The only remaining step to join the call is to create one by calling the client’s call method, which will either retrieve an existing Call instance or create a new one if it does not already exist.
Note that join is a suspend function, so it needs to be called from a coroutine scope or another suspend function in Kotlin.
// Step 4: Request permissions and join the call
val call = client.call(type = "default", id = callId)
LaunchCallPermissions(call = call) {
    call.join(create = true) // Creates call if it doesn't exist
}
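If you prefer not to use the LaunchCallPermissions helper, join can be launched from any coroutine scope once the camera and microphone permissions have been granted. A minimal sketch using the activity's lifecycleScope:
// Assumes the runtime permissions were already requested and granted elsewhere
lifecycleScope.launch {
    call.join(create = true) // suspends until the join attempt completes
}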
In the snippet above, default is the call type. There are four built-in call types, and you can also create your own. The call type controls permissions and which features are enabled.
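For instance, an audio room or a livestream would use a different type string than a standard video call. A hedged sketch (the type names below are the commonly documented built-in ones; confirm them, and any custom types, in your dashboard):
// Built-in types are typically "default", "audio_room", "livestream", and "development"
val audioRoom = client.call(type = "audio_room", id = callId)
val livestream = client.call(type = "livestream", id = callId)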
Step 5: Rendering video in the UI
Now that we have joined the call, the only thing remaining is to show the call content in the UI, including your local video and the videos of the other participants in the call.
For this we provide a CallContent composable that can be used directly in your ComponentActivity:
setContent {
    // Step 4: Request permissions and join the call
    val call = client.call(type = "default", id = callId)
    LaunchCallPermissions(call = call) {
        call.join(create = true) // Creates call if it doesn't exist
    }

    // Step 5: Apply VideoTheme and render the call UI
    VideoTheme {
        CallContent(
            modifier = Modifier.fillMaxSize(),
            call = call,
            enableInPictureInPicture = true, // Enable PIP support
            onBackPressed = { finish() }, // Handle back button
            onCallAction = { callAction -> // Handles call control actions (mute, camera flip, etc.)
                when (callAction) {
                    is FlipCamera -> call.camera.flip() // Switch between front/back camera
                    is ToggleCamera -> call.camera.setEnabled(callAction.isEnabled) // Enable/disable camera
                    is ToggleMicrophone -> call.microphone.setEnabled(callAction.isEnabled) // Mute/unmute microphone
                    is LeaveCall -> finish() // End the call
                    else -> Unit
                }
            },
        )
    }
}
Advanced Features
Customization Options
- Custom UI: Use low-level components like ParticipantVideo and FloatingParticipantVideo from our UI component libraries
- Theming: Customize VideoTheme colors and styles
- Call Types: Use different call types for different use cases
- Advanced Features: Add screen sharing, recording, etc. (see the sketch below)
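For example, call recording is exposed directly on the Call object. A hedged sketch, assuming startRecording and stopRecording are available as suspend functions on Call (check the recording guide for the exact API):
// Hypothetical usage; verify the method names against the SDK reference
lifecycleScope.launch {
    call.startRecording() // begin a recording of the call
    // ...
    call.stopRecording() // stop and finalize the recording
}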
UI Components
The goal of the stream-video-android-ui-compose
library is to make it easy to build any type of video/calling experience. You have several options for the UI:
- Build your own UI components using the state as demonstrated above
- Use our library of built-in components
- Mix and match between your own and built-in components
You can customize the built-in components using theming and modifiers. Compose is flexible, but there are limits, so if you get stuck with the built-in components, you can always work around it by building your own.
Rendering Video
The call’s state is available in call.state, and you’ll often work with call.state.participants.
Here’s a basic Compose example of how to render the video of all participants:
val participants by call.state.participants.collectAsState()
participants.forEach { participant ->
    val videoTrack = participant.videoTrack // contains the video track
    val userName = participant.userNameOrId // the user's name, or their id as a fallback
    // Render the participant using these values (see ParticipantVideo below)
}
As shown in the example above, each participant (a ParticipantState) contains all the essential information needed to render video, such as audio/video tracks, user information, audio/video enabled status, etc. You can render the video track with our Compose components, as in the sample below:
ParticipantVideo(
    modifier = Modifier
        .fillMaxSize()
        .clip(RoundedCornerShape(16.dp)),
    call = call,
    participant = participant,
)
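Putting the two snippets together, a simple layout is a scrollable list with one ParticipantVideo per participant. A minimal sketch, assuming each ParticipantState exposes a sessionId to use as a stable key (the fixed item height is just an example):
@Composable
fun ParticipantList(call: Call) {
    val participants by call.state.participants.collectAsState()

    LazyColumn(modifier = Modifier.fillMaxSize()) {
        // sessionId as a key is an assumption; any stable per-participant id works
        items(participants, key = { it.sessionId }) { participant ->
            ParticipantVideo(
                modifier = Modifier
                    .fillMaxWidth()
                    .height(220.dp)
                    .padding(8.dp)
                    .clip(RoundedCornerShape(16.dp)),
                call = call,
                participant = participant,
            )
        }
    }
}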
The fields available on the participants are documented in the participant state documentation.
Camera & Audio
Most video apps will show buttons to mute/unmute the audio or video and flip the camera. The example below shows how to use the camera:
val call = client.call("default", "123")
val camera = call.camera
camera.enable()
camera.disable()
camera.flip()
Here’s how to enable the microphone or control the speaker volume:
call.microphone.enable()
call.microphone.disable()
call.speaker.setVolume(100)
call.speaker.setVolume(0)
call.speaker.enableSpeakerPhone()
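In Compose, these controls are usually driven by the managers' observable state. A minimal sketch of a mute/unmute button, assuming the microphone manager exposes its enabled state as a StateFlow named isEnabled (verify the exact property in the API reference):
@Composable
fun MicrophoneToggle(call: Call) {
    // isEnabled as a StateFlow<Boolean> is an assumption; check the SDK reference
    val isMicEnabled by call.microphone.isEnabled.collectAsState()

    Button(onClick = { call.microphone.setEnabled(!isMicEnabled) }) {
        Text(if (isMicEnabled) "Mute" else "Unmute")
    }
}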
Incoming/Outgoing Calls
If you intend to support incoming and outgoing calls, the SDK must know which activities to launch from the notification PendingIntent.
To handle incoming and outgoing calls via the default notification handler provided by the SDK, you need to handle the corresponding intent actions in your manifest.
The SDK defines the following actions:
ACTION_NOTIFICATION = "io.getstream.video.android.action.NOTIFICATION"
ACTION_LIVE_CALL = "io.getstream.video.android.action.LIVE_CALL"
ACTION_INCOMING_CALL = "io.getstream.video.android.action.INCOMING_CALL"
ACTION_OUTGOING_CALL = "io.getstream.video.android.action.OUTGOING_CALL"
ACTION_ACCEPT_CALL = "io.getstream.video.android.action.ACCEPT_CALL"
ACTION_REJECT_CALL = "io.getstream.video.android.action.REJECT_CALL"
ACTION_LEAVE_CALL = "io.getstream.video.android.action.LEAVE_CALL"
ACTION_ONGOING_CALL = "io.getstream.video.android.action.ONGOING_CALL"
If you do not support incoming and outgoing calls, you can skip the <intent-filter> declarations.
To fully utilize the incoming/outgoing feature, the SDK needs to know which activity these actions resolve to in order to construct the PendingIntents.
You must provide this information in your manifest.
The ACTION_REJECT_CALL and ACTION_LEAVE_CALL actions are handled by the SDK by default, and you do not need to do anything about them.
The ACTION_ONGOING_CALL action does not mandate an <intent-filter>, but omitting it results in reduced functionality: the user will not be returned to your app when the notification is clicked.
All other actions must be declared in your manifest; otherwise, the internal CallService will fail to create the required notification for its foreground service and will not start, resulting in an exception.
<manifest>
    <application>
        <activity
            android:name=".Activity"
            android:exported="false"
            android:showOnLockScreen="true"
            android:showWhenLocked="true">
            <intent-filter android:priority="1">
                <action android:name="io.getstream.video.android.action.INCOMING_CALL" />
                <action android:name="io.getstream.video.android.action.ONGOING_CALL" />
                <action android:name="io.getstream.video.android.action.ACCEPT_CALL" />
                <action android:name="io.getstream.video.android.action.OUTGOING_CALL" />
                <action android:name="io.getstream.video.android.action.NOTIFICATION" />
            </intent-filter>
        </activity>
    </application>
</manifest>
Note
You can handle multiple IntentFilter entries within a single activity if you prefer, or have separate activities for each action.
For more details on notification customization, see our Push Notification Guide.