Video Renderer

One of the primary low-level components we provide is the VideoRenderer. It's a simple component built in Jetpack Compose that uses VideoTextureViewRenderer under the hood. It supports rendering both a participant's camera track and their screen sharing track.

Let's see how to use the component in your UI.

Rendering a Single Video

To render a single video track on your layout, you can use the VideoRenderer composable function like this:

import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import io.getstream.video.android.compose.ui.components.video.VideoRenderer
import io.getstream.video.android.core.Call
import io.getstream.video.android.core.ParticipantState

@Composable
fun CustomVideoComponent(
    call: Call,
    participant: ParticipantState,
) {
    // step 1 - observe video track from a participant.
    val video by participant.video.collectAsState()

    if (video != null) { // step 2 - check whether the video is null or not.
        VideoRenderer(
            modifier = Modifier.fillMaxSize(),
            call = call,
            video = video, // step 3 - pass the video to VideoRenderer to render the video
        )
    } else {
        // show a custom fallback for an unavailable video track
    }
}

There are a few steps going on here:

  1. Using the ParticipantState, you observe the participant's video state, which contains the video track information.
  2. You check whether the video is null, showing a fallback UI when no track is available.
  3. When the video is available, you pass it to VideoRenderer to render it. Using modifier, you can customize the size, shape, elevation and similar properties of the component UI.

This snippet of code will render a single video track from a call.

Choosing the Video Track

The video parameter in the VideoRenderer is used to render the video UI. To provide the video, you can use the ParticipantState. Within it, we store video state information that wraps the underlying video track from the WebRTC library. The participant state exposes two observable video states:

  • video: Represents the video from the participant's camera feed.
  • screenSharing: Represents the screen sharing video of the participant, based on what screen or window they're sharing.

You can determine whether a participant is screen sharing by observing the ParticipantState.screenSharing property and checking whether it's null or contains a valid video track.
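For example, you can combine the two states to prefer the screen sharing track when one is active and fall back to the camera feed otherwise. The composable below is a hypothetical sketch of that idea (the name ParticipantVideo is ours, not part of the SDK):

```kotlin
import androidx.compose.foundation.layout.fillMaxSize
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Modifier
import io.getstream.video.android.compose.ui.components.video.VideoRenderer
import io.getstream.video.android.core.Call
import io.getstream.video.android.core.ParticipantState

@Composable
fun ParticipantVideo(
    call: Call,
    participant: ParticipantState,
) {
    // Observe both video states exposed by the participant.
    val video by participant.video.collectAsState()
    val screenSharing by participant.screenSharing.collectAsState()

    // Prefer the screen sharing track when one is active,
    // otherwise fall back to the camera feed.
    VideoRenderer(
        modifier = Modifier.fillMaxSize(),
        call = call,
        video = screenSharing ?: video,
    )
}
```

In a real layout you'd likely render the screen share large and the camera feed in a smaller floating tile, but the selection logic stays the same.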

VideoRenderer Lifecycle

To ensure that the VideoRenderer component works correctly, you need to handle the component lifecycle. Specifically, you need to start and stop the video track when the component is added to and removed from the screen.

Fortunately, we provide this for you out of the box. While tracks are persisted within the ParticipantState, the WebRTC subscription is managed under the hood.

When the composable function is called within the UI, it's rendered and connected to the VideoTextureViewRenderer under the hood. When the state changes and it's removed from the UI, the renderer is disposed of and the state is cleaned up.
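To illustrate, here's a minimal sketch of moving a renderer in and out of composition. The composable name ToggleableParticipantVideo is hypothetical; the point is that adding and removing VideoRenderer from the composition is all the lifecycle handling you need to do:

```kotlin
import androidx.compose.foundation.layout.Column
import androidx.compose.foundation.layout.fillMaxWidth
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import io.getstream.video.android.compose.ui.components.video.VideoRenderer
import io.getstream.video.android.core.Call
import io.getstream.video.android.core.ParticipantState

@Composable
fun ToggleableParticipantVideo(
    call: Call,
    participant: ParticipantState,
) {
    val video by participant.video.collectAsState()
    var showVideo by remember { mutableStateOf(true) }

    Column {
        Button(onClick = { showVideo = !showVideo }) {
            Text(text = if (showVideo) "Hide video" else "Show video")
        }

        if (showVideo) {
            // Entering composition: the renderer is created and connected
            // to the VideoTextureViewRenderer under the hood.
            VideoRenderer(
                modifier = Modifier.fillMaxWidth(),
                call = call,
                video = video,
            )
        }
        // When showVideo flips to false, VideoRenderer leaves the
        // composition, the renderer is disposed of, and the state is
        // cleaned up automatically.
    }
}
```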