
VideoRenderer

Each participant in a video call has their own video view (card), which you can either customize or completely replace with your own implementation.

If you are not using our SwiftUI SDK and the ViewFactory for customizations, you can still reuse our low-level components to build your own video views from scratch.

The important part is to use the track from the CallParticipant class for each participant. To save resources (both bandwidth and memory), you should hide the track when the user is not visible on screen, and show it again when they become visible. This is already implemented in our SDK's view layouts.
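As a rough sketch of how this could look in a custom layout, you can toggle the track visibility when your participant view appears and disappears. This assumes the changeTrackVisibility(for:isVisible:) method is available on Call in your SDK version, and YourParticipantView is a placeholder for your own video view:

// Sketch: hide the track while the participant view is off screen,
// and show it again when the view becomes visible.
YourParticipantView(participant: participant)
    .onAppear {
        Task { await call.changeTrackVisibility(for: participant, isVisible: true) }
    }
    .onDisappear {
        Task { await call.changeTrackVisibility(for: participant, isVisible: false) }
    }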

RTCMTLVideoView

If you want to use WebRTC's view for displaying tracks, you can use RTCMTLVideoView directly. In our SDK, we provide a subclass of this view, called VideoRenderer, which also provides access to the track.
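For example, in a UIKit-based integration you could create a VideoRenderer yourself and attach the participant's track to it with the add(track:) method (also used in the extension shown later). A minimal sketch, assuming the standard UIView frame initializer and that containerView is your own container view:

// Sketch: using the VideoRenderer (an RTCMTLVideoView subclass) directly.
let renderer = VideoRenderer(frame: containerView.bounds)
renderer.videoContentMode = .scaleAspectFill
containerView.addSubview(renderer)

// Attach the participant's track, if it's already available.
if let track = participant.track {
    renderer.add(track: track)
}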

VideoRendererView

If you are using SwiftUI, you can use our UIViewRepresentable called VideoRendererView, which simplifies the SwiftUI integration.

For example, here's how to use this view:

VideoRendererView(
    id: id,
    size: availableSize,
    contentMode: contentMode
) { view in
    view.handleViewRendering(for: participant) { size, participant in
        // handle track size update
    }
}

The handleViewRendering method is an extension method on VideoRenderer that adds the track to the view (if needed) and reports any track size changes to the caller:

extension VideoRenderer {

    func handleViewRendering(
        for participant: CallParticipant,
        onTrackSizeUpdate: @escaping (CGSize, CallParticipant) -> ()
    ) {
        if let track = participant.track {
            log.debug("adding track to a view \(self)")
            self.add(track: track)
            DispatchQueue.main.asyncAfter(deadline: .now() + 0.01) {
                let prev = participant.trackSize
                let scale = UIScreen.main.scale
                let newSize = CGSize(
                    width: self.bounds.size.width * scale,
                    height: self.bounds.size.height * scale
                )
                if prev != newSize {
                    onTrackSizeUpdate(newSize, participant)
                }
            }
        }
    }
}
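In the onTrackSizeUpdate closure you would typically propagate the new size to the call, so the SDK can track the rendered dimensions. A hedged sketch, assuming a method like updateTrackSize(_:for:) is available on Call in your SDK version:

view.handleViewRendering(for: participant) { size, participant in
    // Forward the reported track size to the call.
    // Assumption: Call exposes an updateTrackSize(_:for:) method.
    Task { await call.updateTrackSize(size, for: participant) }
}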

Additional participant info

Apart from the video track, we show additional information in the video view, such as the participant's name, network quality, and audio / video state.

If you are using our SwiftUI SDK, this is controlled by the VideoCallParticipantModifier, which can be customized by implementing the makeVideoCallParticipantModifier method in your ViewFactory.

For reference, here's the default VideoCallParticipantModifier implementation, which you can use as inspiration while implementing your own modifiers:

public struct VideoCallParticipantModifier: ViewModifier {

    var participant: CallParticipant
    var call: Call?
    var availableFrame: CGRect
    var ratio: CGFloat
    var showAllInfo: Bool
    var decorations: Set<VideoCallParticipantDecoration>

    public init(
        participant: CallParticipant,
        call: Call?,
        availableFrame: CGRect,
        ratio: CGFloat,
        showAllInfo: Bool,
        decorations: [VideoCallParticipantDecoration] = VideoCallParticipantDecoration.allCases
    ) {
        self.participant = participant
        self.call = call
        self.availableFrame = availableFrame
        self.ratio = ratio
        self.showAllInfo = showAllInfo
        self.decorations = .init(decorations)
    }

    public func body(content: Content) -> some View {
        content
            .adjustVideoFrame(to: availableFrame.size.width, ratio: ratio)
            .overlay(
                ZStack {
                    BottomView(content: {
                        HStack {
                            ParticipantInfoView(
                                participant: participant,
                                isPinned: participant.isPinned
                            )

                            Spacer()

                            if showAllInfo {
                                ConnectionQualityIndicator(
                                    connectionQuality: participant.connectionQuality
                                )
                            }
                        }
                    })
                }
            )
            .applyDecorationModifierIfRequired(
                VideoCallParticipantOptionsModifier(participant: participant, call: call),
                decoration: .options,
                availableDecorations: decorations
            )
            .applyDecorationModifierIfRequired(
                VideoCallParticipantSpeakingModifier(participant: participant, participantCount: participantCount),
                decoration: .speaking,
                availableDecorations: decorations
            )
            .clipShape(RoundedRectangle(cornerRadius: 16))
            .clipped()
    }

    @MainActor
    private var participantCount: Int {
        call?.state.participants.count ?? 0
    }
}
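To plug in your own version, return a custom modifier from makeVideoCallParticipantModifier in your ViewFactory implementation. A minimal sketch, where CustomVideoCallParticipantModifier is a hypothetical modifier you would write yourself (the parameters match the call site shown below):

class CustomViewFactory: ViewFactory {

    func makeVideoCallParticipantModifier(
        participant: CallParticipant,
        call: Call?,
        availableFrame: CGRect,
        ratio: CGFloat,
        showAllInfo: Bool
    ) -> some ViewModifier {
        // Return your own modifier instead of the default one.
        CustomVideoCallParticipantModifier(
            participant: participant,
            call: call,
            availableFrame: availableFrame,
            ratio: ratio,
            showAllInfo: showAllInfo
        )
    }
}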

By default, this modifier is applied to the video call participant view:

ForEach(participants) { participant in
    viewFactory.makeVideoParticipantView(
        participant: participant,
        id: participant.id,
        availableFrame: availableFrame,
        contentMode: .scaleAspectFill,
        customData: [:],
        call: call
    )
    .modifier(
        viewFactory.makeVideoCallParticipantModifier(
            participant: participant,
            call: call,
            availableFrame: availableFrame,
            ratio: ratio,
            showAllInfo: true
        )
    )
}

Note that the surrounding container is omitted above, since you can use a LazyVGrid, LazyVStack, LazyHStack, or another container component, based on your UI requirements, as shown in the sketch below.
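For instance, wrapping the loop in a LazyVStack inside a ScrollView could look like this (a sketch; the spacing value and frame handling depend on your layout):

ScrollView {
    LazyVStack(spacing: 8) {
        ForEach(participants) { participant in
            viewFactory.makeVideoParticipantView(
                participant: participant,
                id: participant.id,
                availableFrame: availableFrame,
                contentMode: .scaleAspectFill,
                customData: [:],
                call: call
            )
            .modifier(
                viewFactory.makeVideoCallParticipantModifier(
                    participant: participant,
                    call: call,
                    availableFrame: availableFrame,
                    ratio: ratio,
                    showAllInfo: true
                )
            )
        }
    }
}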

Mirroring the VideoRendererView

In most video calling apps, the video feed of the current user is mirrored. If you use the higher-level VideoCallParticipantView, the mirroring is handled automatically based on whether the participant is the current user. If you use the VideoRendererView directly, you need to apply the mirroring yourself.

To mirror the view, you can use the following SwiftUI modifier:

.rotation3DEffect(.degrees(180), axis: (x: 0, y: 1, z: 0))

You can check whether the video renderer view is for the current user with the following code:

if participant.id == streamVideo.state.activeCall?.state.localParticipant?.id {
    // apply the modifier
}
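Putting the two snippets together, a mirroring effect applied only to the local participant could look like this (a sketch, assuming streamVideo is your injected StreamVideo instance and availableSize comes from your layout, as in the earlier example):

VideoRendererView(
    id: participant.id,
    size: availableSize,
    contentMode: .scaleAspectFill
) { view in
    view.handleViewRendering(for: participant) { size, participant in
        // handle track size update
    }
}
.rotation3DEffect(
    // Mirror only the local participant's feed.
    participant.id == streamVideo.state.activeCall?.state.localParticipant?.id
        ? .degrees(180)
        : .degrees(0),
    axis: (x: 0, y: 1, z: 0)
)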
