WebRTC For The Brave

WebRTC Video Streaming with Jetpack Compose

WebRTC on Android

We’ve broken down our Introduction to WebRTC for Kotlin and Android Developers into a multi-part series, with each part covering essential WebRTC concepts and how to use WebRTC in your Android project with Jetpack Compose.

In the last lesson, we covered the essential concepts of WebRTC, how to establish a peer-to-peer connection between clients, and how to start building your Android video application with WebRTC.

In this part, you’ll learn how to render real-time video communication with WebRTC in a Jetpack Compose project.

Let’s get started!

WebRTC in Jetpack Compose

Since Google announced the stable release of Jetpack Compose 1.0, the Jetpack Compose ecosystem has kept growing, and many companies have started to adopt it in their production projects.

However, WebRTC doesn’t support Jetpack Compose directly on the UI side, such as with video stream renderers, so we must figure out how to render WebRTC video tracks in a Jetpack Compose project.

Firstly, let’s dive into how to render video streams on Android using traditional Android UI components.

VideoTextureViewRenderer

To render stream content, such as a video, a camera preview, or an OpenGL scene arriving over a real-time network, you need a dedicated UI component.

Typically, you can use SurfaceViewRenderer to display real-time video streams in a flat or simple layout.

However, if you want to implement a more complicated layout, such as one video track overlaying another, you need a different approach. Suppose you need to implement a complex video call screen in which one video call layout overlays another, like the image below:

In that case, SurfaceViewRenderer doesn’t work as expected. There are two reasons why SurfaceViewRenderer doesn’t behave correctly here:

  1. SurfaceViewRenderer is embedded inside a view hierarchy: SurfaceView lives on its own plane, which means it essentially punches a hole in its window to display the content directly on the screen. The surface is also Z-ordered, so when you overlay multiple SurfaceViews, they can interfere with each other, and Z-ordering may not work as expected.

  2. Lifecycle problem of SurfaceViewRenderer: SurfaceViewRenderer extends SurfaceView, which means that when a SurfaceView is made invisible, the underlying surface is destroyed. Depending on your use case, you may see the SurfaceView go blank or black because its surface has been destroyed.

So we need an alternative: TextureView. Unlike SurfaceView, TextureView doesn’t create a separate window; instead, it behaves as a regular view. It also supports translucency, arbitrary rotations, and complex clipping.

There is one more reason to use TextureView: SurfaceView rendering wasn’t properly synchronized with view animations until API 24. So if you need to implement complex animations, such as dragging a video layout or scrolling, you should use TextureView.

Note: If you want to learn more about the difference between SurfaceView and TextureView, check out Android HDR | Migrating from TextureView to SurfaceView (Part #1) — How to Migrate.

The WebRTC in Jetpack Compose project implements a custom view called VideoTextureViewRenderer, which extends TextureView and implements VideoSink and SurfaceTextureListener, like the code below:
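Here is a minimal sketch of that idea: a TextureView that implements VideoSink to receive decoded frames and SurfaceTextureListener to attach and detach an EglRenderer as the underlying surface comes and goes. The actual renderer in the project (and in the library’s UI Components package) additionally handles frame dimensions, renderer events, and threading details.

```kotlin
import android.content.Context
import android.graphics.SurfaceTexture
import android.util.AttributeSet
import android.view.TextureView
import java.util.concurrent.CountDownLatch
import org.webrtc.EglBase
import org.webrtc.EglRenderer
import org.webrtc.GlRectDrawer
import org.webrtc.VideoFrame
import org.webrtc.VideoSink

open class VideoTextureViewRenderer @JvmOverloads constructor(
  context: Context,
  attrs: AttributeSet? = null
) : TextureView(context, attrs), VideoSink, TextureView.SurfaceTextureListener {

  // EglRenderer performs the actual OpenGL rendering of incoming frames.
  private val eglRenderer = EglRenderer("VideoTextureViewRenderer")

  init {
    surfaceTextureListener = this
  }

  /** Call once with a shared EGL context before attaching the renderer to a VideoTrack. */
  fun initialize(eglContext: EglBase.Context) {
    eglRenderer.init(eglContext, EglBase.CONFIG_PLAIN, GlRectDrawer())
  }

  /** VideoSink: every decoded frame from the attached VideoTrack lands here. */
  override fun onFrame(videoFrame: VideoFrame) {
    eglRenderer.onFrame(videoFrame)
  }

  /** SurfaceTextureListener: bind the EGL surface once the TextureView's surface exists. */
  override fun onSurfaceTextureAvailable(surface: SurfaceTexture, width: Int, height: Int) {
    eglRenderer.createEglSurface(surface)
  }

  override fun onSurfaceTextureSizeChanged(surface: SurfaceTexture, width: Int, height: Int) = Unit

  override fun onSurfaceTextureDestroyed(surface: SurfaceTexture): Boolean {
    // Release the EGL surface synchronously before the SurfaceTexture goes away.
    val latch = CountDownLatch(1)
    eglRenderer.releaseEglSurface { latch.countDown() }
    latch.await()
    return true
  }

  override fun onSurfaceTextureUpdated(surface: SurfaceTexture) = Unit

  /** Release the renderer when the view is no longer needed. */
  fun release() {
    eglRenderer.release()
  }
}
```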

Note: If you use the webrtc-android open-source library’s WebRTC for UI Components package, you can use the VideoTextureViewRenderer UI component quickly without additional implementation.

Now, let’s dive into how to render video streams in Jetpack Compose.

VideoRenderer

As we discussed before, WebRTC doesn’t support Jetpack Compose directly on the UI side, and Jetpack Compose doesn’t have dedicated UI components that allow interop between multiple renderers: the UI toolkit and a video player or an OpenGL/Vulkan rendering engine. So we still need to rely on traditional Android Views to render video streams in Jetpack Compose.

Fortunately, Jetpack Compose provides interoperability APIs that allow you to use traditional Android Views and include an Android View hierarchy in a Compose UI.

With the AndroidView composable, you can add a traditional Android View inside a Compose UI. You can create a new composable function called VideoRenderer that renders video streams using the AndroidView composable and VideoTextureViewRenderer, as shown in the code below:
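Below is a sketch of how VideoRenderer, cleanTrack, and setupVideo could look. For simplicity, it passes the shared EGL context in as a parameter and initializes the renderer with the hypothetical initialize function from the sketch above; the sample project instead obtains the EGL context from its WebRtcSessionManager.

```kotlin
import androidx.compose.runtime.Composable
import androidx.compose.runtime.DisposableEffect
import androidx.compose.runtime.MutableState
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import androidx.compose.ui.Modifier
import androidx.compose.ui.viewinterop.AndroidView
import org.webrtc.EglBase
import org.webrtc.VideoTrack

@Composable
fun VideoRenderer(
  videoTrack: VideoTrack,
  eglBaseContext: EglBase.Context,
  modifier: Modifier = Modifier
) {
  // Remember which track is currently attached so it can be detached later.
  val trackState: MutableState<VideoTrack?> = remember { mutableStateOf(null) }
  var view: VideoTextureViewRenderer? by remember { mutableStateOf<VideoTextureViewRenderer?>(null) }

  // Detach the sink when this composable leaves the composition.
  DisposableEffect(videoTrack) {
    onDispose { cleanTrack(view, trackState) }
  }

  AndroidView(
    factory = { context ->
      VideoTextureViewRenderer(context).apply {
        initialize(eglBaseContext) // hypothetical init from the sketch above
        setupVideo(trackState, videoTrack, this)
        view = this
      }
    },
    update = { renderer -> setupVideo(trackState, videoTrack, renderer) },
    modifier = modifier
  )
}

// Removes the renderer (a VideoSink) from the previously attached VideoTrack.
private fun cleanTrack(
  view: VideoTextureViewRenderer?,
  trackState: MutableState<VideoTrack?>
) {
  view?.let { trackState.value?.removeSink(it) }
  trackState.value = null
}

// Attaches the renderer to the given VideoTrack so incoming frames are drawn.
private fun setupVideo(
  trackState: MutableState<VideoTrack?>,
  track: VideoTrack,
  renderer: VideoTextureViewRenderer
) {
  if (trackState.value == track) return
  cleanTrack(renderer, trackState)
  trackState.value = track
  track.addSink(renderer)
}
```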

Let’s break down the functions above one by one:

  • VideoRenderer: The VideoRenderer composable receives a VideoTrack and creates the VideoTextureViewRenderer with the AndroidView composable. It also cleans up the VideoTextureViewRenderer, which implements VideoSink, when it leaves the composition by using DisposableEffect.

  • cleanTrack: This function detaches the given VideoTextureViewRenderer from the given video track; the VideoSink (the VideoTextureViewRenderer) is no longer used by the VideoTrack and is completely removed from it.

  • setupVideo: Adds the given VideoTextureViewRenderer as a sink of the VideoTrack so it’s ready to render the video stream.

Now, you can render video streams with the VideoRenderer composable. Let’s create a video call screen with the VideoRenderer composable to communicate over a peer-to-peer connection.

If you want to recap the essential concepts of WebRTC, check out What is WebRTC in the previous lesson.

VideoCallScreen

The video call screen consists of three main parts, like the image below:

So the video call screen is responsible for the following four tasks: initializing WebRtcSessionManager, rendering a remote video track, rendering a local video track, and controlling video call behaviors with a video controller.

Let’s break down each part one by one!

Initializing WebRtcSessionManager

Firstly, you should initialize the WebRtcSessionManager with LaunchedEffect like the code below:
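A minimal fragment of the screen could look like this, assuming the session manager exposes an onSessionScreenReady-style entry point that starts signaling and capturing (the method name is taken from the sample project):

```kotlin
@Composable
fun VideoCallScreen(sessionManager: WebRtcSessionManager) {
  // Start the WebRTC session exactly once when the screen enters the composition.
  LaunchedEffect(Unit) {
    sessionManager.onSessionScreenReady()
  }

  // The remote/local renderers and the video controller will be added in the next sections.
}
```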

So the WebRtcSessionManager is ready to render the local/remote video tracks from the peer-to-peer connection and send an answer or offer depending on the case.

Rendering A Remote Video Track

Next, you can easily render the remote video track with the VideoRenderer composable, like the code below:
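For example, assuming the session manager exposes the remote track as a Flow<VideoTrack?> named remoteVideoTrackFlow and gives access to the shared EGL context, the fragment inside VideoCallScreen might look like this:

```kotlin
// Inside VideoCallScreen: observe the remote video track from the session manager and render it.
val remoteVideoTrack by sessionManager.remoteVideoTrackFlow.collectAsState(initial = null)

remoteVideoTrack?.let { track ->
  VideoRenderer(
    videoTrack = track,
    // Assumed accessor for the shared EGL context on the session manager's factory.
    eglBaseContext = sessionManager.peerConnectionFactory.eglBaseContext,
    modifier = Modifier.fillMaxSize()
  )
}
```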

As you can see in the code above, you can observe a remote video track from the WebRtcSessionManager, and render the video track using the VideoRenderer composable.

When you build the project on multiple devices, you’ll see each device’s remote call screens in real-time. Now, let’s add a local video track over the remote video track in Jetpack Compose.

Rendering A Local Video Track

Typically, in a 1:1 video call, both participants should be able to see the caller’s and callee’s video streams simultaneously. You can easily implement this by adding a new VideoRenderer composable that renders the local video track. In this article, you’ll implement a floating video renderer that can be moved over the remote video track, like the image below:

The basic concept of the floating video renderer is that the local video track can be moved on top of the remote video track by user gestures. It also must not be placed outside the remote video track or be clipped by the screen. We can implement FloatingVideoRenderer using the pointerInput and detectDragGestures Compose functions, like the code below:
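The sketch below illustrates the approach: the accumulated drag offset is applied with Modifier.offset, and each drag delta is clamped to the parent bounds so the floating renderer can’t be dragged off screen. The fixed size, corner radius, and parameter names are illustrative assumptions rather than the project’s exact values.

```kotlin
import androidx.compose.foundation.gestures.detectDragGestures
import androidx.compose.foundation.layout.offset
import androidx.compose.foundation.layout.size
import androidx.compose.foundation.shape.RoundedCornerShape
import androidx.compose.runtime.*
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.input.pointer.pointerInput
import androidx.compose.ui.layout.onGloballyPositioned
import androidx.compose.ui.unit.IntOffset
import androidx.compose.ui.unit.IntSize
import androidx.compose.ui.unit.dp
import kotlin.math.roundToInt
import org.webrtc.EglBase
import org.webrtc.VideoTrack

@Composable
fun FloatingVideoRenderer(
  videoTrack: VideoTrack,
  eglBaseContext: EglBase.Context,
  parentBounds: IntSize, // size of the remote video area, in pixels
  modifier: Modifier = Modifier
) {
  var videoSize by remember { mutableStateOf(IntSize.Zero) }
  var offsetX by remember { mutableStateOf(0f) }
  var offsetY by remember { mutableStateOf(0f) }

  VideoRenderer(
    videoTrack = videoTrack,
    eglBaseContext = eglBaseContext,
    modifier = modifier
      // Apply the accumulated drag offset.
      .offset { IntOffset(offsetX.roundToInt(), offsetY.roundToInt()) }
      // Track drag gestures and clamp the offset so the view stays inside the parent bounds.
      .pointerInput(parentBounds) {
        detectDragGestures { change, dragAmount ->
          change.consume()
          offsetX = (offsetX + dragAmount.x)
            .coerceIn(0f, (parentBounds.width - videoSize.width).coerceAtLeast(0).toFloat())
          offsetY = (offsetY + dragAmount.y)
            .coerceIn(0f, (parentBounds.height - videoSize.height).coerceAtLeast(0).toFloat())
        }
      }
      .onGloballyPositioned { videoSize = it.size }
      .size(width = 120.dp, height = 180.dp)
      .clip(RoundedCornerShape(16.dp))
  )
}
```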

Next, you can update the VideoCallScreen with the FloatingVideoRenderer composable like the code below:
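For instance, the screen could stack the two renderers in a Box, measure the remote renderer with onSizeChanged, and feed that size into FloatingVideoRenderer as its bounds. The flow names and the eglBaseContext accessor on the session manager are assumptions in this sketch:

```kotlin
@Composable
fun VideoCallScreen(sessionManager: WebRtcSessionManager) {
  LaunchedEffect(Unit) { sessionManager.onSessionScreenReady() }

  Box(modifier = Modifier.fillMaxSize()) {
    var parentSize by remember { mutableStateOf(IntSize.Zero) }
    val eglBaseContext = sessionManager.peerConnectionFactory.eglBaseContext // assumed accessor

    val remoteVideoTrack by sessionManager.remoteVideoTrackFlow.collectAsState(initial = null)
    val localVideoTrack by sessionManager.localVideoTrackFlow.collectAsState(initial = null)

    // The remote participant fills the screen and reports its size for the floating renderer.
    remoteVideoTrack?.let { track ->
      VideoRenderer(
        videoTrack = track,
        eglBaseContext = eglBaseContext,
        modifier = Modifier
          .fillMaxSize()
          .onSizeChanged { parentSize = it }
      )
    }

    // The local participant floats on top of the remote video and can be dragged around.
    localVideoTrack?.let { track ->
      FloatingVideoRenderer(
        videoTrack = track,
        eglBaseContext = eglBaseContext,
        parentBounds = parentSize,
        modifier = Modifier
          .align(Alignment.TopEnd)
          .padding(16.dp)
      )
    }
  }
}
```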

You’ve implemented all the real-time rendering screens for video calls. Now, let’s implement a video controller, which controls audio and camera settings in a video call.

Video Controller

A video controller is a typical feature that allows people to control video call settings, such as turning the audio and camera on or off and flipping the camera. In WebRTC-in-Jetpack-Compose, the video controller includes the following four features: flipping a camera, toggling a camera, toggling a microphone, and leaving a call.

Flipping a Camera

You can flip a local camera easily in the WebRtcSessionManagerImpl class using Camera2Capturer’s switchCamera function like the code below:
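A sketch of that method, assuming videoCapturer is the Camera2Capturer created when the local video track was set up:

```kotlin
// Inside WebRtcSessionManagerImpl: videoCapturer is assumed to be the Camera2Capturer
// created together with the local video track.
override fun flipCamera() {
  // switchCamera switches to the next camera; passing null skips the switch-event callback.
  (videoCapturer as? Camera2Capturer)?.switchCamera(null)
}
```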

The switchCamera function switches the active camera depending on the current camera status: from front-facing to rear-facing or in the opposite direction.

Toggling a Camera

You can enable or disable the camera easily with VideoCapturer, like the code below:
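A sketch of the toggle, assuming resolution holds the capture resolution calculated earlier and a frame rate of 30 fps:

```kotlin
// Inside WebRtcSessionManagerImpl: resolution was calculated when the capturer was created.
override fun enableCamera(enabled: Boolean) {
  if (enabled) {
    // Start (or restart) capturing with the previously calculated resolution at 30 fps.
    videoCapturer.startCapture(resolution.width, resolution.height, 30)
  } else {
    // Stop delivering frames from the local camera.
    videoCapturer.stopCapture()
  }
}
```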

In the WebRtcSessionManagerImpl class, you’ve already calculated the capture resolution, so you can start capturing from the camera by passing that resolution to the startCapture function. You can also stop the local camera easily by invoking the stopCapture function.

Toggling a Microphone

You’ve already used AudioManager to handle audio in the real-time peer-to-peer connection, so you can easily toggle the microphone with the setMicrophoneMute function, like the code below:
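A sketch of the toggle, assuming audioManager is the system AudioManager obtained from the application context:

```kotlin
// Inside WebRtcSessionManagerImpl: audioManager is the system AudioManager.
override fun enableMicrophone(enabled: Boolean) {
  // The isMicrophoneMute property maps to AudioManager.setMicrophoneMute(Boolean).
  audioManager?.isMicrophoneMute = !enabled
}
```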

Leaving a Call

When people want to leave a call, disconnect the WebSocket connection to the signaling server and dispose of all resources, such as the local/remote video tracks, the audio track, and the local camera, like the disconnect function below:
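A sketch of the disconnect function; the field names (localAudioTrack, localVideoTrack, videoCapturer, signalingClient) are assumptions based on how the session manager was described above:

```kotlin
// Inside WebRtcSessionManagerImpl: field names are assumptions for this sketch.
override fun disconnect() {
  // Dispose the local media resources.
  localAudioTrack.dispose()
  localVideoTrack.dispose()

  // Stop and release the local camera.
  videoCapturer.stopCapture()
  videoCapturer.dispose()

  // Close the WebSocket connection to the signaling server and clean up its resources.
  signalingClient.dispose()
}
```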

Lastly, you can put all of the controlling methods above together in the VideoCallScreen composable below:
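A sketch of the wiring inside VideoCallScreen; VideoCallControls, CallAction, and CallMediaState are assumed helper types that render the control buttons and describe the current microphone/camera state, and each action is forwarded to the session manager methods above:

```kotlin
// Inside VideoCallScreen's Box, below the two renderers.
var callMediaState by remember { mutableStateOf(CallMediaState()) }

VideoCallControls(
  modifier = Modifier
    .fillMaxWidth()
    .align(Alignment.BottomCenter),
  callMediaState = callMediaState,
  onCallAction = { action ->
    when (action) {
      is CallAction.ToggleMicroPhone -> {
        callMediaState = callMediaState.copy(isMicrophoneEnabled = action.isEnabled)
        sessionManager.enableMicrophone(action.isEnabled)
      }
      is CallAction.ToggleCamera -> {
        callMediaState = callMediaState.copy(isCameraEnabled = action.isEnabled)
        sessionManager.enableCamera(action.isEnabled)
      }
      CallAction.FlipCamera -> sessionManager.flipCamera()
      CallAction.LeaveCall -> sessionManager.disconnect()
    }
  }
)
```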

After running the project on multiple Android devices, you will finally see the result below:

Conclusion

In this lesson, you've gained a more comprehensive understanding of how WebRTC functions, including essential concepts like peer-to-peer communication, the signaling server, SDP (Session Description Protocol), and ICE (Interactive Connectivity Establishment). Additionally, you've learned how to implement real-time video streaming on Android using Jetpack Compose. For further exploration, you can access the complete source code repository on GitHub.