Camera & Microphone
Handling audio and video devices in your application means working with MediaStream, MediaDeviceInfo, and other WebRTC API objects.
To simplify this, we hide all the complexity inside the SDK and export utility functions and states.
In this guide, we go over their usage.
Camera management
The SDK does its best to make working with the camera easy. We expose the following camera object on the call:
const call = useCall();
const camera = call.camera;
Call settings
The default state of the camera is determined by the settings in the call object.
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useCallSettings } = useCallStateHooks();
const settings = useCallSettings();
console.log(settings?.video.camera_default_on);
Make sure call.get() is called at least once in the application after the call is created.
Start-Stop Camera
We can use the functions camera.enable() and camera.disable() to control publishing and unpublishing our video stream.
Alternatively, you can use camera.toggle().
import { useCall, useCallStateHooks } from "@stream-io/video-react-native-sdk";
const call = useCall();
const { useCameraState } = useCallStateHooks();
const { camera, isMute } = useCameraState();
console.log(`Camera is ${isMute ? "off" : "on"}`);
await camera.toggle();
// or, alternatively
await camera.enable();
await camera.disable();
It's always best to await calls to enable(), disable(), and toggle(). However, the SDK does its best to resolve potential race conditions: the last call always wins, so it's safe to make these calls in an event handler.
The status is updated once the camera is actually enabled or disabled.
Use optimisticIsMute for the "optimistic" status that is updated immediately after toggling the camera.
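For instance, a mute indicator in the UI should usually react instantly, so it can read optimisticIsMute, while logic that depends on the actual device state reads isMute. A minimal sketch (the CameraMuteIndicator component name is our own, not part of the SDK):

```typescript
import React from "react";
import { Text } from "react-native";
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";

// Hypothetical indicator component: updates as soon as the user toggles,
// without waiting for the camera hardware to actually start or stop.
const CameraMuteIndicator = () => {
  const { useCameraState } = useCallStateHooks();
  const { isMute, optimisticIsMute } = useCameraState();
  // optimisticIsMute flips immediately after camera.toggle();
  // isMute flips only once the camera is actually enabled/disabled.
  return <Text>{optimisticIsMute ? "Camera off" : "Camera on"}</Text>;
};
```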
Manage Camera Facing Mode
We can get the facing mode state of the camera by:
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useCameraState } = useCallStateHooks();
const { direction } = useCameraState(); // direction is 'front' or 'back'
We can flip the camera between front and back using camera.flip().
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useCameraState } = useCallStateHooks();
const { camera } = useCameraState();
camera.flip();
Video mute status
We can get the mute state of our video stream by checking the status value returned from the useCameraState hook:
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useCameraState } = useCallStateHooks();
const { status } = useCameraState(); // status is 'enabled', 'disabled', or undefined
Show Video Preview
We can get the video stream from the camera using the media stream on the call.camera object and show it using the RTCView component from the @stream-io/react-native-webrtc library:
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
import { RTCView } from "@stream-io/react-native-webrtc";
const { useCameraState } = useCallStateHooks();
const { camera } = useCameraState();
const localVideoStream = camera.state.mediaStream;
return <RTCView streamURL={localVideoStream?.toURL()} />;
Access to the Camera's MediaStream
Our SDK exposes the current mediaStream instance that you can use for your own needs (for example, local recording):
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useCameraState } = useCallStateHooks();
const { mediaStream } = useCameraState();
const [videoTrack] = mediaStream?.getVideoTracks() ?? [];
Microphone management
The SDK does its best to make working with the microphone easy. We expose the following microphone object on the call:
const call = useCall();
const microphone = call.microphone;
Call settings
The default state of the microphone is determined by the settings in the call object.
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useCallSettings } = useCallStateHooks();
const settings = useCallSettings();
console.log(settings?.audio.mic_default_on);
Make sure call.get() is called at least once in the application after the call is created.
Start-Stop Microphone
We can use the functions microphone.enable() and microphone.disable() to control publishing and unpublishing our audio stream:
Alternatively, you can use microphone.toggle().
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useMicrophoneState } = useCallStateHooks();
const { microphone, isMute } = useMicrophoneState();
console.log(`Microphone is ${isMute ? "off" : "on"}`);
await microphone.toggle();
// or, alternatively
await microphone.enable();
await microphone.disable();
It's always best to await calls to enable(), disable(), and toggle(). However, the SDK does its best to resolve potential race conditions: the last call always wins, so it's safe to make these calls in an event handler.
The status is updated once the microphone is actually enabled or disabled. Use optimisticIsMute for the "optimistic" status that is updated immediately after toggling the microphone.
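A toggle button is the typical place where the optimistic value matters, so its label doesn't lag behind the tap. A minimal sketch (the MicToggleButton component name is our own, not part of the SDK):

```typescript
import React from "react";
import { Button } from "react-native";
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";

// Hypothetical mic toggle: the label follows optimisticIsMute, so it
// updates immediately, before the audio track is actually (un)published.
const MicToggleButton = () => {
  const { useMicrophoneState } = useCallStateHooks();
  const { microphone, optimisticIsMute } = useMicrophoneState();
  return (
    <Button
      title={optimisticIsMute ? "Unmute" : "Mute"}
      onPress={() => microphone.toggle()}
    />
  );
};
```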
Audio mute status
We can get the mute state of our audio stream by checking the status value returned from the useMicrophoneState hook:
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useMicrophoneState } = useCallStateHooks();
const { status } = useMicrophoneState(); // status is 'enabled', 'disabled', or undefined
Speaking while muted detection
Our SDK provides a mechanism that can detect whether the user started to speak while being muted. Through this mechanism, you can display a notification to the user, or apply any custom logic.
This feature is enabled by default unless the user lacks the permission to send audio, or the feature has been explicitly disabled.
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useMicrophoneState } = useCallStateHooks();
const { isSpeakingWhileMuted, microphone } = useMicrophoneState();
if (isSpeakingWhileMuted) {
// your custom logic comes here
console.log("You are speaking while muted!");
}
// to disable this feature completely:
await microphone.disableSpeakingWhileMutedNotification();
// to enable it back:
await microphone.enableSpeakingWhileMutedNotification();
Access to the Microphone's MediaStream
Our SDK exposes the current mediaStream instance that you can use for your own needs (for example, local recording):
import { useCallStateHooks } from "@stream-io/video-react-native-sdk";
const { useMicrophoneState } = useCallStateHooks();
const { mediaStream } = useMicrophoneState();
const [audioTrack] = mediaStream?.getAudioTracks() ?? [];
Speaker management
The SDK automatically applies the audio.default_device call type setting (either speaker or earpiece) to determine the default audio output device.
The following device priority is used:
- Bluetooth Headset or Wired Headset
- Speakerphone or Earpiece.
However, if you need to manually override the audio.default_device setting, you can use the start() method from the SDK's callManager module. The start() method must be called before join(); otherwise it is ignored, since the default audio settings must be applied before joining the call. Overriding is especially useful in livestream use cases, as explained in the next section.
import { callManager } from "@stream-io/video-react-native-sdk";
const call = client.call(callType, callId);
// To be called before joining a call
callManager.start({
audioRole: "communicator", // or "listener"
deviceEndpointType: "speaker", // or "earpiece"
});
await call.join();
- audioRole: communicator (default) or listener. Use listener for users that won't publish audio but only listen (typical for a livestream audience). Use communicator otherwise.
- deviceEndpointType: speaker or earpiece. Available only when audioRole is set to communicator. speaker enables the loudspeaker when no Bluetooth device or wired headset is connected. earpiece routes the audio through the ear speaker unless another external device is connected. earpiece should be passed only in an audio-call scenario similar to a mobile cellular phone call.
As platform-specific methods are necessary to handle audio output, we do not support the useSpeakerState() hook.
Livestream or listener-only audio management
By default, the SDK sets the audioRole to communicator. This prioritizes low latency and allows manual audio device switching through the SDK.
However, for calls where users won't send audio tracks, such as livestream calls, low-latency audio streaming is not necessary. If the audioRole is set to listener, the SDK configures the device's audio session to prioritize high-quality audio streaming. Additionally, if the call can contain stereo audio streams, such as those sent from OBS, we can also enable stereo audio output on the device.
import { callManager } from "@stream-io/video-react-native-sdk";
const call = client.call("livestream", callId);
// To be called before joining a call
callManager.start({
audioRole: "listener",
enableStereoAudioOutput: true, // or false (default is false)
});
await call.join();
Switching audio output device
The API for switching the audio output device differs significantly between iOS and Android.
On iOS, use the following method to open the AVRoutePickerView popover presented by the system.
The popover lists the available audio devices, and the user can tap one to choose the desired output.
import { callManager } from "@stream-io/video-react-native-sdk";
callManager.ios.showDeviceSelector();
Once this method is called, a popover similar to the one below will appear.

On Android, there is no system-wide audio picker view that we can use. Instead, Android exposes the device names and provides an API to switch to a specific audio device. The SDK manages this state natively to provide a seamless way of switching audio devices on Android.
- getAudioDeviceStatus(): Returns the current audio device status.
- selectAudioDevice(endpointName: string): Switches to the specified audio device.
- addAudioDeviceChangeListener(): Adds a listener for changes in the audio device status.
Here is a minimal snippet showing how these APIs can be used in a component:
import { useEffect, useState } from "react";
import {
AudioDeviceStatus,
callManager,
} from "@stream-io/video-react-native-sdk";
const [audioDeviceStatus, setAudioDeviceStatus] = useState<AudioDeviceStatus>();
useEffect(() => {
// set the initial value and listen for changes
callManager.android.getAudioDeviceStatus().then(setAudioDeviceStatus);
return callManager.android.addAudioDeviceChangeListener(setAudioDeviceStatus);
}, []);
const {
devices, // ["Pixel Buds Pro 2", "Wired Headset", "Speaker", "Earpiece"]
selectedDevice, // "Wired Headset"
currentEndpointType, // "speaker" or "earpiece"
} = audioDeviceStatus ?? {};
// switch to a specific audio device
if (devices?.length) {
  callManager.android.selectAudioDevice(devices[0]);
}
When callManager.android.selectAudioDevice() is used, the SDK will persist the selection even if a new external device is connected.
The selection is dropped automatically only when the selected device is disconnected.
Here is a preview of a custom modal-based audio device picker component built using the above APIs.

Force audio through the loudspeaker
A common use case is to let users switch audio between the loudspeaker and the ear speaker, e.g. via a toggle button. To support this behavior, you can use the following method on both iOS and Android:
import { callManager } from "@stream-io/video-react-native-sdk";
// route audio through the loudspeaker immediately (audio plays here until a new external device is connected)
callManager.speaker.setForceSpeakerphoneOn(true);
// revert back to the default behaviour
callManager.speaker.setForceSpeakerphoneOn(false);
Audio volume control
The SDK supports both system-wide audio volume control and individual track level audio volume control.
System-wide mute and unmute of audio volume
import { callManager } from "@stream-io/video-react-native-sdk";
// to mute audio
callManager.setMute(true);
// to unmute audio
callManager.setMute(false);
Participant volume control
We also support setting a participant's audio volume.
Here is a small snippet showing how to set a participant's volume to 50%:
import {
StreamVideoParticipant,
Call,
} from "@stream-io/video-react-native-sdk";
let participant: StreamVideoParticipant; // the intended participant
let call: Call; // the call instance
call.speaker.setParticipantVolume(participant.sessionId, 0.5);