React Native Livestreaming Tutorial
In this tutorial, we'll quickly build a low-latency in-app livestreaming experience. The livestream is broadcast using Stream's edge network of servers around the world.
We'll cover the following topics:
- Ultra low latency streaming
- Multiple streams & co-hosts
- RTMP-in and WebRTC input
- Exporting to HLS
- Reactions, custom events and chat
- Recording & Transcriptions
Let's get started. If you have any questions or feedback, be sure to let us know via the feedback button.
Step 1 - Setup a new React Native app
Create a new React Native app using the official template.
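A sketch of the scaffolding step; the app name below is a placeholder you can change:

```shell
npx @react-native-community/cli init StreamLivestreamApp
cd StreamLivestreamApp
```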
Step 2 - Install the SDK and declare permissions
In order to install the Stream Video React Native SDK, run the following command in your terminal of choice:
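The install command, using the SDK's package name (use `npm install` instead of `yarn add` if you prefer npm):

```shell
yarn add @stream-io/video-react-native-sdk
```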
The SDK requires installing some peer dependencies. You can run the following command to install them:
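A sketch of the peer-dependency install; the exact list can vary by SDK version, so check the installation guide for your release:

```shell
yarn add @stream-io/react-native-webrtc react-native-incall-manager \
  react-native-svg @react-native-community/netinfo

# iOS only: install the native pods
npx pod-install
```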
Declare permissions
- Android
- iOS
In `AndroidManifest.xml`, add the following permissions before the `application` section.
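A typical set of permissions for in-app video (these are standard Android permission names; confirm the exact list against the SDK's installation guide):

```xml
<uses-permission android:name="android.permission.INTERNET" />
<uses-permission android:name="android.permission.CAMERA" />
<uses-permission android:name="android.permission.RECORD_AUDIO" />
<uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS" />
```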
If you plan to also support Bluetooth devices, then also add the following:
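The Bluetooth permissions below follow Android's split between pre-API-31 and newer devices:

```xml
<uses-permission android:name="android.permission.BLUETOOTH" android:maxSdkVersion="30" />
<uses-permission android:name="android.permission.BLUETOOTH_CONNECT" />
```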
Android Specific installation
In `android/build.gradle`, add the following inside the `buildscript` section:
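As an illustration only (the exact values come from the SDK's installation guide for your version), WebRTC-based SDKs typically require raising the minimum Android SDK version:

```groovy
buildscript {
    ext {
        // Hypothetical value; check the SDK's install guide for the required minimum.
        minSdkVersion = 24
    }
}
```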
In `android/app/build.gradle`, add the following inside the `android` section:
In `android/gradle.properties`, add the following:
Run the app
To ensure the best possible experience, we highly recommend running the app on a physical device. This is due to the limitations in audio and video device support on emulators. You can refer to the React Native documentation for guidance on running the app on a physical device.
However, if you still prefer to use an emulator, execute the following command:
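The standard React Native run commands:

```shell
# Android emulator
npx react-native run-android

# iOS simulator
npx react-native run-ios
```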
Step 3 - Broadcast a livestream from your device
The following code shows how to publish from your device's camera.
Let's open `App.tsx` and replace its contents with the following code:
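A sketch of what that `App.tsx` might look like, assembled from the steps reviewed below (the credential placeholders and the `LivestreamUI` name are from this tutorial; double-check component names against the SDK reference):

```typescript
import React from 'react';
import { Text } from 'react-native';
import {
  StreamVideo,
  StreamVideoClient,
  StreamCall,
} from '@stream-io/video-react-native-sdk';

// Replace these with the credentials from the table below.
const apiKey = 'REPLACE_WITH_API_KEY';
const token = 'REPLACE_WITH_TOKEN';
const userId = 'REPLACE_WITH_USER_ID';
const callId = 'REPLACE_WITH_CALL_ID';

// Step 1: set up the user.
const user = { id: userId, name: 'Tutorial' };

// Step 2: initialize the client with the API key, user, and token.
const client = new StreamVideoClient({ apiKey, user, token });

// Step 3: create the call object and join, creating it on the server if needed.
const call = client.call('livestream', callId);
call.join({ create: true });

// Placeholder UI; we'll render the actual video in the next step.
const LivestreamUI = () => <Text>TODO: render video</Text>;

export default function App() {
  return (
    <StreamVideo client={client}>
      <StreamCall call={call}>
        <LivestreamUI />
      </StreamCall>
    </StreamVideo>
  );
}
```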
If you run the app now, you'll see that the client fails to connect. To fix that, we need to provide valid credentials. Replace the placeholder variables with the appropriate values from the table below:
Here are credentials to try out the app with:
| Property | Value |
|---|---|
| API Key | Waiting for an API key ... |
| Token | Token is generated ... |
| User ID | Loading ... |
| Call ID | Creating random call ID ... |
When you run the app now you'll see a text message saying: "TODO: render video". Before we get around to rendering the video let's review the code above.
In the first step, we set up the user:
If you don't have an authenticated user, you can also use a guest or anonymous user. For most apps, it's convenient to map your own users to Stream users so you can grant and remove permissions.
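As a sketch, the guest and anonymous variants look roughly like this (the `type` field values follow Stream's user model; verify against the authentication docs):

```typescript
// A guest user: identified, but not authenticated by your backend.
const guest = { id: 'guest-id', type: 'guest' as const };

// An anonymous viewer: no identity at all.
const anon = { type: 'anonymous' as const };
```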
Next, we initialize the client by passing the API Key, user and user token:
Your backend typically generates the user and token on sign-up or login.
The most important step to review is how we create the call. Stream uses the same call object for livestreaming, audio rooms and video calling. Have a look at the code snippet below:
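A sketch of that step, assuming the `client` and `callId` variables defined earlier:

```typescript
// Create a call object of type "livestream" with a unique id,
// then join it, creating it on the server if it doesn't exist yet.
const call = client.call('livestream', callId);
await call.join({ create: true });
```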
To create the first call object, specify the call type as `livestream` and provide a unique `callId`. The `livestream` call type comes with default settings that are usually suitable for livestreams, but you can customize features, permissions, and settings in the dashboard. Additionally, the dashboard allows you to create new call types as required.
Finally, using `call.join({ create: true })` will not only create the call object on our servers but also initiate the real-time transport for audio and video. This allows for seamless and immediate engagement in the livestream.
Note that you can also add members to a call and assign them different roles. For more information, see the Joining & Creating Calls docs.
Also, in production-grade apps, you'd typically store the `call` instance in a state variable and take care of correctly disposing of it.
Step 4 - Rendering the video
In this step, we're going to build a UI for showing your local video with a button to start the livestream.
In `App.tsx`, replace the `LivestreamUI` component with the following code:
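A sketch of such a component, using the state hooks reviewed below and `RTCView` for the local video (styling is illustrative; `call.stopLive()` is the counterpart of `goLive()`):

```typescript
import React from 'react';
import { Button, StyleSheet, Text, View } from 'react-native';
import { useCall, useCallStateHooks } from '@stream-io/video-react-native-sdk';
import { RTCView } from '@stream-io/react-native-webrtc';

const LivestreamUI = () => {
  const call = useCall();
  const { useParticipantCount, useLocalParticipant, useIsCallLive } =
    useCallStateHooks();
  const totalParticipants = useParticipantCount();
  const localParticipant = useLocalParticipant();
  const isCallLive = useIsCallLive();

  return (
    <View style={styles.flexed}>
      {/* Viewer count at the top */}
      <Text style={styles.text}>Live: {totalParticipants}</Text>
      {/* Local camera preview */}
      <View style={styles.flexed}>
        {localParticipant?.videoStream && (
          <RTCView
            streamURL={localParticipant.videoStream.toURL()}
            objectFit="cover"
            style={styles.flexed}
          />
        )}
      </View>
      {/* Toggle backstage mode */}
      {isCallLive ? (
        <Button title="Stop Livestream" onPress={() => call?.stopLive()} />
      ) : (
        <Button title="Start Livestream" onPress={() => call?.goLive()} />
      )}
    </View>
  );
};

const styles = StyleSheet.create({
  flexed: { flex: 1 },
  text: { textAlign: 'center', color: 'white', backgroundColor: 'blue' },
});
```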
Upon running your app, you will be greeted with an interface that looks like this:
Alternatively, you can use the `HostLivestream` component from our SDK, which offers a preconfigured user interface. You can apply your own customizations on top of this default UI.
To do that, you can follow this code snippet:
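A minimal sketch of swapping in the prebuilt component:

```typescript
import React from 'react';
import { HostLivestream } from '@stream-io/video-react-native-sdk';

// Replaces the hand-rolled LivestreamUI from above; must be rendered
// inside StreamVideo and StreamCall so it can read the call state.
const LivestreamUI = () => {
  return <HostLivestream />;
};
```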
Stream uses a technology called SFU cascading to replicate your livestream over different SFUs around the world. This makes it possible to reach a large audience in real-time.
Now let's press Start Livestream in your new app and click the Join Call button below to watch your livestream in another tab in your browser:
Let's take a moment to review the code above.
State & Participants
You can see we use a few hooks to get the call state. In this example, we use:
- `useParticipantCount`: the number of participants watching the livestream
- `useLocalParticipant`: the local participant object, used to get the video stream
- `useIsCallLive`: a boolean indicating whether the call is live (i.e., no longer in backstage mode)
There are many more possibilities; the call and participant state docs explain this in more detail.
Creating a UI to watch a livestream
The layout is built using standard React Native components.
The `RTCView` component is provided by the `@stream-io/react-native-webrtc` library. You can use it to render the local and remote video.
Backstage mode
In the example above you might have noticed the `call.goLive()` method and the `isCallLive` value.
The backstage functionality is enabled by default on the `livestream` call type. It makes it easy to build a flow where you and your co-hosts can set up your camera and equipment before going live. Only after you execute `call.goLive()` will regular users be allowed to join the livestream.
This is convenient for many livestreaming and audio-room use cases. If you want calls to start immediately when you join them that's also possible. Simply go to the Stream dashboard, click the livestream call type and disable the backstage mode.
The `call.goLive` method can also automatically start HLS livestreaming, recording, or transcribing. To do that, you need to pass the corresponding optional parameter:
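A sketch of those optional parameters (the snake_case names mirror Stream's API conventions; confirm them in the API reference):

```typescript
await call.goLive({
  start_hls: true,
  start_recording: true,
  start_transcription: true,
});
```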
Step 5 - (Optional) Publishing RTMP using OBS
The example above showed how to publish your device camera to the livestream. Almost all livestream software and hardware supports RTMPS. OBS is one of the most popular livestreaming software packages, so we'll use it to show how to publish via RTMPS. Feel free to skip this step if you don't need RTMPS input.
Log the URL & Stream Key
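A sketch of logging these values; the RTMP address is exposed on the call state, and in this tutorial setup the user token serves as the stream key (an assumption to verify against the RTMP docs):

```typescript
// The RTMP ingress address lives on the call state.
console.log('RTMP url:', call.state.ingress?.rtmp.address);
// In this tutorial, the user token doubles as the stream key.
console.log('Stream key:', token);
```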
Open OBS and go to Settings -> Stream:
- Select "Custom" service
- Server: equal to the `rtmpURL` from the log
- Stream key: equal to the `streamKey` from the log
Press start streaming in OBS. The RTMP stream will now show up in your call just like a regular video participant.
Now that we've learned to publish using WebRTC or RTMP let's talk about watching the livestream.
Step 6 - Viewing a livestream (WebRTC)
Watching a livestream is even easier than broadcasting.
Compared to the current code in `App.tsx`, you:
- Don't render the local video; instead, render the remote videos (`useRemoteParticipants` instead of `useLocalParticipant`)
- Typically include some small UI elements such as a viewer count, a mute button, etc.
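A sketch of a viewer component along those lines (the `ViewerUI` name is illustrative; it assumes the host is the first remote participant):

```typescript
import React from 'react';
import { StyleSheet, Text, View } from 'react-native';
import { useCallStateHooks } from '@stream-io/video-react-native-sdk';
import { RTCView } from '@stream-io/react-native-webrtc';

const ViewerUI = () => {
  const { useRemoteParticipants, useParticipantCount } = useCallStateHooks();
  const remoteParticipants = useRemoteParticipants();
  const participantCount = useParticipantCount();
  // Render the first remote participant's video (the host).
  const host = remoteParticipants[0];

  return (
    <View style={styles.flexed}>
      <Text>Watching: {participantCount}</Text>
      {host?.videoStream && (
        <RTCView
          streamURL={host.videoStream.toURL()}
          objectFit="cover"
          style={styles.flexed}
        />
      )}
    </View>
  );
};

const styles = StyleSheet.create({ flexed: { flex: 1 } });
```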
Additionally, you can use the `ViewerLivestream` component from our SDK, which offers stream viewing capabilities alongside the flexibility to apply your own customizations on top of this default UI.
To do that, you can follow this code snippet:
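A minimal sketch of using the prebuilt viewer component:

```typescript
import React from 'react';
import { ViewerLivestream } from '@stream-io/video-react-native-sdk';

// Must be rendered inside StreamVideo and StreamCall,
// just like the hand-rolled viewer UI.
const ViewerUI = () => {
  return <ViewerLivestream />;
};
```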
Step 7 - (Optional) Viewing a livestream with HLS
Another way to watch a livestream is using HLS. HLS tends to have a 10 to 20 second delay, while the above WebRTC approach is real-time. The benefit HLS offers is better buffering under poor network conditions. So HLS can be a good option when:
- A 10-20 second delay is acceptable
- Your users want to watch the stream in poor network conditions
One option to start HLS is to set the `start_hls` parameter to `true` in the `call.goLive()` method, as described above.
If you want to start it explicitly while the call is live, you can use the following method:
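A sketch of the explicit start (and its counterpart to stop; verify the method names against the client API reference):

```typescript
await call.startHLS();
// ...and later, to stop broadcasting to HLS:
await call.stopHLS();
```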
You can view the HLS video feed using any HLS-capable video player. The most popular one on React Native is provided by the `react-native-video` library.
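A sketch of playing the HLS feed; it assumes the playlist URL is exposed on the call's egress state (the `useCallEgress` hook and `playlist_url` field should be verified against the SDK reference):

```typescript
import React from 'react';
import Video from 'react-native-video';
import { useCallStateHooks } from '@stream-io/video-react-native-sdk';

const HLSViewer = () => {
  const { useCallEgress } = useCallStateHooks();
  const egress = useCallEgress();
  const playlistUrl = egress?.hls?.playlist_url;

  // Nothing to play until HLS broadcasting has started.
  if (!playlistUrl) return null;
  return (
    <Video
      source={{ uri: playlistUrl }}
      style={{ flex: 1 }}
      controls
      resizeMode="contain"
    />
  );
};
```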
Step 8 - Advanced Features
This tutorial covered broadcasting and watching a livestream. It also went into more details about HLS & RTMP-in.
There are several advanced features that can improve the livestreaming experience:
- Co-hosts: you can add members to your livestream with elevated permissions, so you can have co-hosts, moderators, etc.
- Custom events: you can use custom events on the call to share any additional data. Think about showing the score for a game, or any other real-time use case.
- Reactions & chat: users can react to the livestream, and you can add chat. This makes for a more engaging experience.
- Recording: the call recording functionality allows you to record the call with various options and layouts.
- Notifications: you can notify users via push notifications when the livestream starts.
Recap
It was fun to see just how quickly you can build in-app low-latency livestreaming. Please do let us know if you run into any issues. Our team is also happy to review your UI designs and offer recommendations on how to achieve them with Stream.
To recap what we've learned:
- WebRTC is optimal for latency; HLS is slower but buffers better for users with poor connections
- You set up a call: `const call = client.call("livestream", callId)`
- The call type `"livestream"` controls which features are enabled and how permissions are set up
- The livestream by default enables "backstage" mode, which allows you and your co-hosts to set up your mic and camera before allowing people in
- When you join a call, real-time communication is set up for audio & video: `await call.join()`
- Call state (`call.state`) and the helper state access hooks make it easy to build your own UI
We've used Stream's Livestream API, which means calls run on a global edge network of video servers. Being closer to your users improves the latency and reliability of calls. The React Native SDK enables you to build in-app video calling, audio rooms, and livestreaming in days.
We hope you've enjoyed this tutorial and please do feel free to reach out if you have any suggestions or questions.
Final Thoughts
In this video app tutorial we built a fully functioning React Native livestreaming app with our React Native video SDK component library. We also showed how easy it is to customize the behavior and the style of the components with minimal code changes.
Both the video SDK for React Native and the API have plenty more features available to support more advanced use-cases.
Give us Feedback!
Did you find this tutorial helpful in getting you up and running with React Native for adding video to your project? Either good or bad, we’re looking for your honest feedback so we can improve.