Video Renderer
One of the primary low-level components we provide is the VideoRenderer. It's a simple component built in Jetpack Compose, which uses VideoTextureViewRenderer under the hood. It supports rendering both a participant's camera track and their screen sharing track.
Let's see how to use the component in your UI.
Rendering a Single Video
To render a single video track in your layout, you can use the VideoRenderer composable function like this:

import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import io.getstream.video.android.compose.ui.components.video.VideoRenderer

@Composable
fun CustomVideoComponent(
    call: Call,
    participant: ParticipantState
) {
    // Step 1 - observe the video track of the participant
    val videoTrack by participant.videoTrack.collectAsState()

    if (videoTrack != null) { // Step 2 - check whether the track is null or not
        VideoRenderer(
            modifier = Modifier.fillMaxSize(),
            call = call,
            video = videoTrack, // Step 3 - pass the video track to VideoRenderer to render the video
            sessionId = participant.sessionId,
            trackType = TrackType.TRACK_TYPE_VIDEO
        )
    } else {
        // Show a custom fallback for an unavailable video track
    }
}
There are a few steps going on here:
- Using the ParticipantState, you can get that participant's videoTrack. It also contains the sessionId of that participant, which is a unique UUID value, used to connect the tracks and rendering.
- When you have the track and it's not null, you're ready to show the UI using VideoRenderer and its parameters.
- Using modifier, you can customize the size, shape, elevation and similar properties of the component UI.
This snippet of code will render a single video track from a call.
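The else branch in the snippet is left for you to fill in. A common fallback when no video track is available is an avatar showing the participant's initials. As a hedged sketch, a small helper could compute them (avatarInitials is a hypothetical function, not part of the SDK):

```kotlin
// Hypothetical helper for a video fallback UI: derive up to two avatar
// initials from a participant's display name.
fun avatarInitials(name: String): String =
    name.trim()
        .split(Regex("\\s+"))          // split on any run of whitespace
        .filter { it.isNotEmpty() }    // drop empty segments
        .take(2)                       // at most two initials
        .map { it.first().uppercaseChar() }
        .joinToString("")
```

You could then render the result in a simple Box or Text composable inside the else branch.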
Choosing the VideoTrack
The video parameter in the VideoRenderer is used to render the video UI. To provide the video track, you can use the ParticipantState. Within it, we store an instance of the VideoTrack class from the WebRTC library and expose two possible tracks:
- videoTrack: Represents the video of the participant, from their camera feed.
- screenSharingTrack: Represents the screen sharing track of the participant, based on what screen or window they're sharing.
You can always determine if a person is screen sharing by checking the ParticipantState.screenSharingTrack property, as well as whether the videoTrack or screenSharingTrack is null or valid.
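Since both tracks live on the ParticipantState, a typical UI prefers the screen sharing track when it exists and falls back to the camera feed otherwise. A minimal sketch of that selection rule (selectTrack and RenderTrackType are illustrative names, not SDK API):

```kotlin
// Illustrative track-selection rule: prefer screen sharing over the camera
// feed, and report which kind of track was chosen.
enum class RenderTrackType { VIDEO, SCREEN_SHARE, NONE }

fun <T : Any> selectTrack(
    videoTrack: T?,
    screenSharingTrack: T?
): Pair<T?, RenderTrackType> = when {
    screenSharingTrack != null -> screenSharingTrack to RenderTrackType.SCREEN_SHARE
    videoTrack != null -> videoTrack to RenderTrackType.VIDEO
    else -> null to RenderTrackType.NONE
}
```

The chosen track and type can then be passed to VideoRenderer's video and trackType parameters.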
VideoRenderer Lifecycle
To ensure that the VideoRenderer
component works correctly, you need to handle the component lifecycle. Specifically, you need to start and stop the video track when the component is added to and removed from the screen.
Fortunately, we provide this for you out of the box. While tracks are persisted within the ParticipantState, the WebRTC subscription is managed under the hood.
When the composable function is called within the UI, it’s rendered and connected to the VideoTextureViewRenderer under the hood. When the state changes and it’s removed from the UI, the renderer is disposed of and the state is cleaned up.
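Conceptually, this "start on enter, stop on dispose" behavior amounts to attaching the renderer as a sink on the track while the composable is on screen and detaching it when it leaves. The toy model below illustrates that contract only; FakeVideoTrack and RendererBinding are invented names, not SDK classes:

```kotlin
// Toy model of the renderer lifecycle described above (not SDK code).
class FakeVideoTrack {
    val sinks = mutableListOf<Any>()
    fun addSink(sink: Any) { sinks += sink }      // frames start flowing to the sink
    fun removeSink(sink: Any) { sinks -= sink }   // frames stop flowing to the sink
}

class RendererBinding(
    private val track: FakeVideoTrack,
    private val renderer: Any
) {
    fun onEnterComposition() = track.addSink(renderer)    // composable shown on screen
    fun onLeaveComposition() = track.removeSink(renderer) // composable removed, cleaned up
}
```

In the real SDK you never call these hooks yourself; the composable performs the equivalent attach and detach automatically.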