Call Transcriptions

You can transcribe calls to text using API calls, or configure your call types to be transcribed automatically. When automatic transcription is enabled, the transcription process starts when the first user joins the call and stops when all participants have left the call.

Transcriptions are structured as plain-text JSONL files and are automatically uploaded to Stream-managed storage or to your own configurable storage. WebSocket and webhook events are also sent when transcription starts, stops, and completes.

Stream supports transcribing calls in multiple languages as well as transcriptions for closed captions. You can find more information about both later in this document.

Note: we transcribe one dominant speaker and two other participants at a time.

Quick Start

// starts transcription
call.startTranscription();

// stops the transcription for the call
call.stopTranscription();

List call transcriptions

Note: transcriptions stored on Stream’s S3 bucket (the default) will be returned with a signed URL.
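As a sketch, picking the most recent transcription out of a ListTranscriptions response could look like the following. The `CallTranscription` shape here is an assumption that mirrors the `call_transcription` object of the transcription_ready event, not a confirmed SDK type:

```typescript
interface CallTranscription {
  filename: string;
  url: string;
  start_time: string;
  end_time: string;
}

// Pick the most recent transcription from a ListTranscriptions response.
// Shapes are assumptions mirroring the call.transcription_ready payload.
function newestTranscription(
  transcriptions: CallTranscription[],
): CallTranscription | undefined {
  return [...transcriptions].sort((a, b) =>
    b.start_time.localeCompare(a.start_time),
  )[0];
}

// Example response body (illustrative data).
const response = {
  transcriptions: [
    { filename: 'a.jsonl', url: '', start_time: '2024-03-18T08:00:00Z', end_time: '2024-03-18T08:10:00Z' },
    { filename: 'b.jsonl', url: '', start_time: '2024-03-18T09:00:00Z', end_time: '2024-03-18T09:10:00Z' },
  ],
};

const newest = newestTranscription(response.transcriptions);
```

The signed URL in the returned `url` field can then be used to download the file.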


Delete call transcription

This endpoint allows you to delete a call transcription.
Please note that transcriptions can only be deleted if they are stored on Stream's side (the default).

An error will be returned if the transcription doesn't exist.

call.deleteTranscription({ session: '<session ID>', filename: '<filename>' })


Events

These events are sent to users connected to the call and to your webhook/SQS:

  • call.transcription_started sent when the transcription of the call has started
  • call.transcription_stopped sent only when the transcription is explicitly stopped through an API call, not when the transcription process encounters an error
  • call.transcription_ready dispatched when the transcription is completed and available for download. An example payload of this event is detailed below.
  • call.transcription_failed sent if the transcription process encounters any issues.

transcription.ready event example

{
  "type": "call.transcription_ready",
  "created_at": "2024-03-18T08:24:14.769328551Z",
  "call_cid": "default:mkzN17EUrgvn",
  "call_transcription": {
    "filename": "transcript_default_mkzN17EUrgvn_1710750207642.jsonl",
    "url": "",
    "start_time": "2024-03-18T08:23:27.642688204Z",
    "end_time": "2024-03-18T08:24:14.754731786Z"
  },
  "received_at": "2024-03-18T08:24:14.790Z"
}
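As a sketch, a webhook consumer can pull the file to download out of this payload. The `TranscriptionReadyEvent` interface below is a simplified assumption covering only the fields shown above, not a full SDK type:

```typescript
interface TranscriptionReadyEvent {
  type: string;
  call_cid: string;
  call_transcription: {
    filename: string;
    url: string;
    start_time: string;
    end_time: string;
  };
}

// Extract the transcription file from a transcription_ready event;
// returns null for any other event type.
function readyTranscription(
  event: TranscriptionReadyEvent,
): { filename: string; url: string } | null {
  if (event.type !== 'call.transcription_ready') return null;
  const { filename, url } = event.call_transcription;
  return { filename, url };
}

// Example payload (values taken from the event example above).
const event: TranscriptionReadyEvent = {
  type: 'call.transcription_ready',
  call_cid: 'default:mkzN17EUrgvn',
  call_transcription: {
    filename: 'transcript_default_mkzN17EUrgvn_1710750207642.jsonl',
    url: '',
    start_time: '2024-03-18T08:23:27.642688204Z',
    end_time: '2024-03-18T08:24:14.754731786Z',
  },
};

const file = readyTranscription(event);
```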

Transcription JSONL file format

{"type":"speech", "start_time": "2024-02-28T08:18:18.061031795Z", "stop_time":"2024-02-28T08:18:22.401031795Z", "speaker_id": "Sacha_Arbonel", "text": "hello"}
{"type":"speech", "start_time": "2024-02-28T08:18:22.401031795Z", "stop_time":"2024-02-28T08:18:26.741031795Z", "speaker_id": "Sacha_Arbonel", "text": "how are you"}
{"type":"speech", "start_time": "2024-02-28T08:18:26.741031795Z", "stop_time":"2024-02-28T08:18:31.081031795Z", "speaker_id": "Tommaso_Barbugli", "text": "I'm good"}
{"type":"speech", "start_time": "2024-02-28T08:18:31.081031795Z", "stop_time":"2024-02-28T08:18:35.421031795Z", "speaker_id": "Tommaso_Barbugli", "text": "how about you"}
{"type":"speech", "start_time": "2024-02-28T08:18:35.421031795Z", "stop_time":"2024-02-28T08:18:39.761031795Z", "speaker_id": "Sacha_Arbonel", "text": "I'm good too"}
{"type":"speech", "start_time": "2024-02-28T08:18:39.761031795Z", "stop_time":"2024-02-28T08:18:44.101031795Z", "speaker_id": "Tommaso_Barbugli", "text": "that's great"}
{"type":"speech", "start_time": "2024-02-28T08:18:44.101031795Z", "stop_time":"2024-02-28T08:18:48.441031795Z", "speaker_id": "Tommaso_Barbugli", "text": "I'm glad to hear that"}
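Since each line of the file is a standalone JSON object, the transcript can be parsed line by line. A minimal TypeScript sketch (the `SpeechEvent` interface simply mirrors the fields shown above):

```typescript
interface SpeechEvent {
  type: string;
  start_time: string;
  stop_time: string;
  speaker_id: string;
  text: string;
}

// Parse a JSONL transcript into an array of speech events,
// skipping blank lines.
function parseTranscript(jsonl: string): SpeechEvent[] {
  return jsonl
    .split('\n')
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as SpeechEvent);
}

// Example input (first two lines of the sample transcript above).
const sample = [
  '{"type":"speech","start_time":"2024-02-28T08:18:18.061031795Z","stop_time":"2024-02-28T08:18:22.401031795Z","speaker_id":"Sacha_Arbonel","text":"hello"}',
  '{"type":"speech","start_time":"2024-02-28T08:18:22.401031795Z","stop_time":"2024-02-28T08:18:26.741031795Z","speaker_id":"Sacha_Arbonel","text":"how are you"}',
].join('\n');

const events = parseTranscript(sample);
```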

User Permissions

The following permissions are available to grant/restrict access to this functionality when used client-side.

  • StartTranscription required to start the transcription
  • StopTranscription required to stop the transcription
  • ListTranscriptions required to retrieve the list of transcriptions

Enabling / Disabling call transcription

Transcription can be configured from the Dashboard (see the call type screen) or directly via the API. It is also possible to change the transcription settings for a call and override the default settings coming from its call type.

// Disable on call level
call.update({
  settings_override: {
    transcription: {
      mode: VideoTranscriptionSettingsRequestModeEnum.DISABLED,
    },
  },
});

// Disable on call type level
client.video.updateCallType('<call type name>', {
  settings: {
    transcription: {
      mode: VideoTranscriptionSettingsModeEnum.DISABLED,
    },
  },
});

// Enable on call level
call.update({
  settings_override: {
    transcription: {
      mode: VideoTranscriptionSettingsRequestModeEnum.AVAILABLE,
    },
  },
});

// Other settings
call.update({
  settings_override: {
    transcription: {
      audio_only: false,
      quality: VideoTranscriptionSettingsRequestQualityEnum.AUTO_ON,
    },
  },
});

Multi language support

Out of the box, transcriptions are optimized for calls with English speakers. You can configure call transcription to optimize for a language other than English. You can also specify a secondary language if you expect two languages to be used simultaneously in the same call.

Please note: the call transcription feature does not perform any language translation. When you select a different language, the transcription process simply improves the speech-to-text detection for that language.

You can set the transcription languages in two ways: either as a call setting, or by providing them to the StartTranscription API call. Languages are specified using their international language code (ISO 639). Please note: we currently don't support changing language settings during the call.

Supported languages

  • English (en) - default
  • French (fr)
  • Spanish (es)
  • German (de)
  • Italian (it)
  • Dutch (nl)
  • Portuguese (pt)
  • Polish (pl)
  • Catalan (ca)
  • Czech (cs)
  • Danish (da)
  • Greek (el)
  • Finnish (fi)
  • Indonesian (id)
  • Japanese (ja)
  • Russian (ru)
  • Swedish (sv)
  • Tamil (ta)
  • Thai (th)
  • Turkish (tr)
  • Hungarian (hu)
  • Romanian (ro)
  • Chinese (zh)
  • Arabic (ar)
  • Tagalog (tl)
  • Hebrew (he)
  • Hindi (hi)
  • Croatian (hr)
  • Korean (ko)
  • Malay (ms)
  • Norwegian (no)
  • Ukrainian (uk)
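Because languages are passed as ISO 639 codes, it can be useful to check a code against the supported list before calling StartTranscription. The helper below is purely illustrative, not part of any SDK; it simply encodes the list above:

```typescript
// Supported transcription language codes (ISO 639-1), per the list above.
const SUPPORTED_LANGUAGES = new Set([
  'en', 'fr', 'es', 'de', 'it', 'nl', 'pt', 'pl', 'ca', 'cs', 'da',
  'el', 'fi', 'id', 'ja', 'ru', 'sv', 'ta', 'th', 'tr', 'hu', 'ro',
  'zh', 'ar', 'tl', 'he', 'hi', 'hr', 'ko', 'ms', 'no', 'uk',
]);

// Returns true if the given ISO 639-1 code can be used for transcription.
function isSupportedLanguage(code: string): boolean {
  return SUPPORTED_LANGUAGES.has(code);
}
```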
