iOS Livestreaming Tutorial

Let's build a simplified version of an enterprise-grade iOS live streaming app, similar to Twitch or Facebook Live, complete with emoji reactions, using SwiftUI and Stream's iOS Video SDK.

Livestream Quickstart

In this tutorial we'll quickly build a low-latency in-app livestreaming experience. The livestream is broadcast using Stream's edge network of servers around the world. We'll cover the following topics:

  • Ultra low latency streaming
  • Multiple streams & co-hosts
  • RTMP in and WebRTC input
  • Exporting to HLS
  • Reactions, custom events and chat
  • Recording & Transcriptions

Let's get started. If you have any questions or feedback, be sure to let us know via the feedback button.

Step 1 - Create a new project in Xcode

  1. Make sure you have Xcode installed and that you are running Xcode 14.3 or later
  2. Open Xcode and select "Create a new Project"
  3. Select "iOS" as the platform and "App" as the type of Application
  4. Name your project LivestreamSample and select "SwiftUI" as the interface

Step 2 - Install the SDK & Setup the client

Next you need to add our SDK dependencies to your project using Swift Package Manager from Xcode.

  1. Click on "Add packages..." from the File menu
  2. Add https://github.com/GetStream/stream-video-swift in the search bar
  3. Select "StreamVideo" and "StreamVideoSwiftUI" and then click Add Package

App Permissions

Publishing a livestream requires camera and microphone access, so you need to request permission to use them in your app. To do this, add the following keys to the Info.plist file.

  • Privacy - Microphone Usage Description - "LivestreamSampleApp requires microphone access in order to capture and transmit audio"
  • Privacy - Camera Usage Description - "LivestreamSampleApp needs camera access for broadcasting"

(Screenshot: the permission entries in the Info.plist file)

Step 3 - Broadcast a livestream from your phone

The following code shows how to publish from your phone's camera. Let's open LivestreamSampleApp.swift and replace the LivestreamSampleApp struct with the following code:
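The original code listing isn't reproduced here, so below is a minimal sketch reconstructed from the pieces this tutorial references later (the StreamVideo client, streamVideo.call(callType:callId:), call.join(create: true), and the "TODO: render video" placeholder). Treat the exact type and initializer names as assumptions based on the StreamVideo and StreamVideoSwiftUI packages.

```swift
import SwiftUI
import StreamVideo
import StreamVideoSwiftUI

@main
struct LivestreamSampleApp: App {

    @State var call: Call
    @ObservedObject var state: CallState
    @State var streamVideo: StreamVideo

    init() {
        // Replace these placeholder values with the credentials below.
        let apiKey = "REPLACE_WITH_API_KEY"
        let userToken = "REPLACE_WITH_TOKEN"
        let userId = "REPLACE_WITH_USER_ID"
        let callId = "REPLACE_WITH_CALL_ID"

        // The user who will broadcast the livestream.
        let user = User(id: userId, name: "tutorial")

        // The client that connects to Stream's edge network.
        let streamVideo = StreamVideo(
            apiKey: apiKey,
            user: user,
            token: .init(stringLiteral: userToken)
        )

        // Stream uses the same call object for livestreams,
        // audio rooms and video calling.
        let call = streamVideo.call(callType: "livestream", callId: callId)
        self.call = call
        self.state = call.state
        self.streamVideo = streamVideo

        // Create the call on the server and start the real-time transport.
        Task {
            try await call.join(create: true)
        }
    }

    var body: some Scene {
        WindowGroup {
            LivestreamView(call: call, state: state)
        }
    }
}

struct LivestreamView: View {
    let call: Call
    @ObservedObject var state: CallState

    var body: some View {
        Text("TODO: render video")
    }
}
```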

You'll notice that the placeholder values at the top of the init need their values replaced.

Replace them now with the values shown below:

Here are credentials to try out the app with:

  Property    Value
  --------    -----
  API Key     Waiting for an API key ...
  Token       Token is generated ...
  User ID     Loading ...
  Call ID     Creating random call ID ...

When you run the app now you'll see a text message saying: "TODO: render video". Before we get around to rendering the video let's review the code above.

In the first step we set up the user:
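From the Step 3 sketch above (same assumed names):

```swift
let user = User(id: userId, name: "tutorial")
```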

If you don't have an authenticated user, you can also use a guest or anonymous user. For most apps it's convenient to match Stream's users to your own system of users, so you can grant and remove permissions.

Next we create the client:
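Again from the sketch above; the token initializer shown is an assumption:

```swift
let streamVideo = StreamVideo(
    apiKey: apiKey,
    user: user,
    token: .init(stringLiteral: userToken)
)
```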

You'll see the userToken variable. Your backend typically generates the user token on signup or login.

The most important step to review is how we create the call. Stream uses the same call object for livestreaming, audio rooms and video calling. Have a look at the code snippet below:
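This is the same line quoted in the recap at the end of the tutorial:

```swift
let call = streamVideo.call(callType: "livestream", callId: callId)
```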

To create the first call object, specify the call type as livestream and provide a unique callId. The livestream call type comes with default settings that are usually suitable for livestreams, but you can customize features, permissions, and settings in the dashboard. Additionally, the dashboard allows you to create new call types as required.

Finally, using call.join(create: true) will not only create the call object on our servers but also initiate the real-time transport for audio and video. This allows for seamless and immediate engagement in the livestream.

Note that you can also add members to a call and assign them different roles. For more information, see the call creation docs.

Step 4 - Rendering the video

In this step we're going to build a UI for showing your local video with a button to start the livestream. This example uses SwiftUI, but you could also use UIKit.

In LivestreamSampleApp.swift replace the LivestreamView with the following code:
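Since the original listing is not included here, the following is a sketch built from the state fields reviewed below (backstage, duration, participantCount, participants) and Stream's video renderer. The VideoRendererView initializer and the duration formatting are assumptions.

```swift
struct LivestreamView: View {
    let call: Call
    @ObservedObject var state: CallState

    var body: some View {
        VStack {
            HStack {
                // How long the call has been running (assumed to be seconds).
                Text(formattedDuration)
                Spacer()
                // The number of participants watching the livestream.
                Text("Live \(state.participantCount)")
            }
            .padding()

            GeometryReader { reader in
                if let first = state.participants.first {
                    // VideoRendererView is provided by StreamVideoSwiftUI;
                    // the exact initializer used here is an assumption.
                    VideoRendererView(id: first.id, size: reader.size) { renderer in
                        renderer.handleViewRendering(for: first) { _, _ in }
                    }
                } else {
                    // Fallback while no video track is available.
                    Color(UIColor.secondarySystemBackground)
                }
            }

            // While the call is in backstage mode, only hosts are in it.
            // goLive() opens the livestream up to regular viewers.
            if state.backstage {
                Button("Go Live") {
                    Task { try await call.goLive() }
                }
            } else {
                Button("Stop Livestream") {
                    Task { try await call.stopLive() }
                }
            }
        }
    }

    private var formattedDuration: String {
        let seconds = Int(state.duration)
        return String(format: "%02d:%02d", seconds / 60, seconds % 60)
    }
}
```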

Upon running your app, you will be greeted with an interface that looks like this:

(Screenshot: the livestream broadcasting UI)

Stream uses a technology called SFU cascading to replicate your livestream over different SFUs around the world. This makes it possible to reach a large audience in realtime.

Now let's press Go live in the iOS app and click the link below to watch the video in your browser.

For testing you can join the call on our web-app: Join Call

State & Participants

Let's take a moment to review the SwiftUI code above. Call.state exposes all the observable objects you need. The participant state docs show all the available fields.

In this example we use:

  • call.state.backstage: a boolean indicating whether the call is in backstage mode
  • call.state.duration: how long the call has been running
  • call.state.participantCount: the number of participants watching the livestream
  • call.state.participants: the list of participants

The call.state.participants can optionally contain more information about who's watching the stream. If you have multiple people broadcasting video, this also contains the video tracks.

There are many possibilities and the participant state docs explain this in more detail.

Creating a UI to watch a livestream

The livestream layout is built using standard SwiftUI. The VideoRenderer component is provided by Stream: it renders the video, with a fallback when no track is available. You can use it for rendering both the local and the remote video.

Backstage mode

In the example above you might have noticed the call.goLive() method and the call.state.backstage value. The backstage functionality is enabled by default on the livestream call type. It makes it easy to build a flow where you and your co-hosts can set up your camera and equipment before going live. Only after you call call.goLive() will regular users be allowed to join the livestream.

This is convenient for many livestreaming and audio-room use cases. If you want calls to start immediately when you join them, that's also possible. Simply go to the Stream dashboard, click the livestream call type, and disable backstage mode.

The goLive method can also automatically start HLS livestreaming, recording or transcribing. In order to do that, you need to pass the corresponding optional parameter:
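For example, a sketch with all three enabled (the parameter names are assumptions based on the SDK's goLive options):

```swift
try await call.goLive(
    startHls: true,
    startRecording: true,
    startTranscription: true
)
```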

Step 5 - (Optional) Publishing RTMP using OBS

The example above showed how to publish your phone's camera to the livestream. Almost all livestream software and hardware supports RTMP. OBS is one of the most popular livestreaming software packages, and we'll use it to explain how to ingest an RTMP stream. So let's see how to publish using RTMP. Feel free to skip this step if you don't need RTMP input.

A. Log the URL & Stream Key

For example, you can print the RTMP address right after the call.join call in the Task block:
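A sketch of that logging; the exact shape of the ingress state (call.state.ingress?.rtmp and its address property) is an assumption:

```swift
Task {
    try await call.join(create: true)
    // The RTMP address, with the stream key already embedded in the URL,
    // is assumed to be available on the call state after joining.
    if let rtmp = call.state.ingress?.rtmp {
        print("RTMP url: \(rtmp.address)")
    }
}
```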

B. Open OBS and go to Settings -> Stream

  • Service: select "Custom"
  • Server: equal to the server URL from the log
  • Stream key: leave empty, the key is already present in the URL

Press "Start Streaming" in OBS. The RTMP stream will now show up in your call, just like a regular video participant. Now that we've learned how to publish using WebRTC or RTMP, let's talk about watching the livestream.

Step 6 - Viewing a livestream (WebRTC)

Watching a livestream is even easier than broadcasting.

Compared to the current code in LivestreamSampleApp.swift, you (see the sketch after this list):

  • Don't need to request permissions or enable the camera
  • Don't render the local video, but instead render the remote video
  • Typically include some small UI elements like viewer count, a button to mute etc.
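Here is a minimal viewer-side sketch along those lines, reusing the assumed names from the broadcaster view above (the hasVideo property on the participant is also an assumption):

```swift
struct LivestreamViewerView: View {
    let call: Call
    @ObservedObject var state: CallState

    var body: some View {
        VStack {
            // Small overlay with the current viewer count.
            Text("Watching: \(state.participantCount)")

            GeometryReader { reader in
                // Render the remote host's video instead of the local camera.
                if let host = state.participants.first(where: { $0.hasVideo }) {
                    VideoRendererView(id: host.id, size: reader.size) { renderer in
                        renderer.handleViewRendering(for: host) { _, _ in }
                    }
                } else {
                    Color(UIColor.secondarySystemBackground)
                }
            }
        }
    }
}
```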

Additionally, you can use our default LivestreamPlayer for viewing WebRTC livestreams, as described here.
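Usage is roughly a one-liner; the initializer shown here is an assumption:

```swift
LivestreamPlayer(type: "livestream", id: "REPLACE_WITH_CALL_ID")
```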

Step 7 - (Optional) Viewing a livestream with HLS

Another way to watch a livestream is using HLS. HLS tends to have a 10 to 20 second delay, while the WebRTC approach above is real-time. The benefit HLS offers is better buffering under poor network conditions. So HLS can be a good option when:

  • A 10-20 second delay is acceptable
  • Your users want to watch the stream in poor network conditions

One option to start HLS is to set the startHls parameter to true in the goLive method, as described above.

If you want to explicitly start it, when you are live, you can use the following method:
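A sketch, assuming the method is named startHLS and the playlist URL is exposed on the call's egress state:

```swift
try await call.startHLS()
// Assumed location of the playlist URL for any HLS-capable player:
// call.state.egress?.hls?.playlistUrl
```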

You can view the HLS video feed using any HLS capable video player.

Step 8 - Advanced Features

This tutorial covered broadcasting and watching a livestream. It also went into more detail about HLS and RTMP-in.

There are several advanced features that can improve the livestreaming experience:

  • Co-hosts: You can add members to your livestream with elevated permissions, so you can have co-hosts, moderators, etc.
  • Custom events: You can use custom events on the call to share any additional data. Think about showing the score for a game, or any other realtime use case.
  • Reactions & Chat: Users can react to the livestream, and you can add chat. This makes for a more engaging experience.
  • Notifications: You can notify users via push notifications when the livestream starts.
  • Recording: The call recording functionality allows you to record the call with various options and layouts.

Recap

It was fun to see just how quickly you can build low-latency in-app livestreaming. Please do let us know if you ran into any issues. Our team is also happy to review your UI designs and offer recommendations on how to achieve them with Stream.

To recap what we've learned:

  • WebRTC is optimal for latency; HLS is slower but buffers better for users with poor connections
  • You set up a call: (let call = streamVideo.call(callType: "livestream", callId: callId))
  • The call type "livestream" controls which features are enabled and how permissions are set up
  • The livestream call type enables "backstage" mode by default. This allows you and your co-hosts to set up your mic and camera before letting people in
  • When you join a call, real-time communication is set up for audio & video: (call.join())
  • Observable objects in call.state and call.state.participants make it easy to build your own UI

Calls run on Stream's global edge network of video servers. Being closer to your users improves the latency and reliability of calls. The SDKs enable you to build livestreaming, audio rooms and video calling in days.

We hope you've enjoyed this tutorial and please do feel free to reach out if you have any suggestions or questions.

Final Thoughts

In this video app tutorial we built a fully functioning iOS livestreaming app with our iOS SDK component library. We also showed how easy it is to customize the behavior and the style of the iOS video app components with minimal code changes.

Both the video SDK for iOS and the API have plenty more features available to support more advanced use-cases.

Give us Feedback!

Did you find this tutorial helpful in getting you up and running with iOS for adding video to your project? Either good or bad, we’re looking for your honest feedback so we can improve.

Next Steps

Create your free Stream account to start building with our Video & Audio SDKs, or contact our team if you have additional questions.
