Events Reference

This page lists the events emitted by the SDK and by the various plugin types.

This event system provides a unified, event-driven architecture that allows plugins to be composed and integrated into audio processing pipelines.

Events Emitted by Plugins

All plugins inherit from base classes (STT, TTS, VAD, STS) that define standard event patterns using AsyncIOEventEmitter. Events include rich metadata for debugging and monitoring.

Note: all audio events use PCM format.

| Plugin Category | Event Name | Description | Notes |
|---|---|---|---|
| All Plugins | error | Emitted when an error occurs during processing | Includes connection errors, API errors, processing failures, and synthesis errors. Provides consistent error handling across the ecosystem |
| STT (Speech-to-Text) | transcript | Emitted when a complete transcript is available | Final transcription result with confidence scores and processing metadata |
| | partial_transcript | Emitted when a partial transcript is available during real-time processing | Only emitted by streaming STT services like Deepgram; not by batch processors like Moonshine |
| TTS (Text-to-Speech) | audio | Emitted when an audio chunk is generated | Contains raw PCM audio data ready for playback |
| VAD (Voice Activity Detection) | audio | Emitted when speech is detected and accumulated | Contains complete speech segments after silence detection |
| | partial | Emitted periodically while speech is ongoing | Emitted every N frames during active speech detection (configurable) |
| STS (Speech-to-Speech) | connected | Emitted when the WebSocket connection is established | Indicates successful handshake with OpenAI Realtime API |
| | disconnected | Emitted when the connection is closed | Indicates normal closure or error disconnection |
| | All OpenAI events | All events from OpenAI Realtime API are forwarded verbatim | Includes conversation events, function calls, responses, errors, etc. |
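
Because the plugin base classes are built on AsyncIOEventEmitter, handlers can be registered with `.on()`, either directly or as a decorator. The sketch below is illustrative only: it assumes an already constructed STT plugin instance named `stt` (construction is omitted here), the handler signatures and payload shapes are assumptions, and only the event names are taken from the table above.

# Illustrative sketch: subscribing to STT plugin events.
# `stt` is assumed to be an already constructed STT plugin instance
# (for example, a streaming service such as Deepgram); handler
# signatures are assumptions, so check the payloads for your plugin.

@stt.on("transcript")
async def on_transcript(event):
    # Final transcription result with confidence scores and metadata
    print("final transcript:", event)

@stt.on("partial_transcript")
async def on_partial_transcript(event):
    # Only emitted by streaming STT services, not by batch processors
    print("partial transcript:", event)

@stt.on("error")
async def on_stt_error(error):
    # Connection errors, API errors, and processing failures surface here
    print("STT error:", error)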

Events Emitted by the Python AI SDK

Notes

  1. Event Architecture: The RTC module uses a hierarchical event system where:
    • Low-level components emit specific events
    • Higher-level managers aggregate and forward events
    • SFU events are automatically forwarded from the signaling layer
    Most SDK and SFU events can be listened to from the `ConnectionManager` class.
  2. SFU Events: All SFU (Selective Forwarding Unit) events are automatically forwarded through the signaling layer, providing real-time updates about call state, participants, media streams, and connection quality. Because the `ConnectionManager` class forwards them, SFU events can be listened to directly from the SDK (see the sketch after these notes). A full list of SFU events Stream emits is also available.
  3. Recording Events: The recording event system supports both per-user and composite recording, with detailed lifecycle tracking.
  4. Network Monitoring: Built-in network monitoring provides connectivity state changes for robust connection handling.
  5. WebRTC Integration: Events closely follow WebRTC standards for peer connection management, ICE handling, and media track lifecycle.
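
As a concrete illustration of notes 1 and 2, the sketch below handles two forwarded SFU events directly on the connection object. It is illustrative only: the event names come from the table below, while the handler signatures and payload shapes are assumptions; `call` and `user_id` are obtained the same way as in the Usage Example at the end of this page.

from getstream.video import rtc

# Illustrative sketch: SFU events forwarded through the signaling layer
# can be handled at the connection level. Handler signatures and payload
# shapes are assumptions; only the event names come from the table below.
async with await rtc.join(call, user_id) as connection:

    @connection.on("connection_quality_changed")
    async def on_quality_changed(event):
        # Forwarded SFU event: network quality metrics for participants
        print("connection quality changed:", event)

    @connection.on("dominant_speaker_changed")
    async def on_dominant_speaker_changed(event):
        # Forwarded SFU event: useful for spotlight/focus views
        print("dominant speaker changed:", event)

    await connection.wait()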

| Component | Event Name | Description | Notes |
|---|---|---|---|
| Connection Manager | connection.state_changed | Emitted when connection state changes | Contains old and new state values |
| | | Events emitted by the SDK are forwarded | Includes SFU events, so most events can be listened for from the connection manager |
| Network Monitor | network_changed | Emitted when network connectivity status changes | Contains online (boolean) and timestamp |
| | network_online | Emitted when network comes online | Contains timestamp |
| | network_offline | Emitted when network goes offline | Contains timestamp |
| Participants | participant_joined | Emitted when a participant joins the call | Contains participant data from SFU |
| | participant_left | Emitted when a participant leaves the call | Contains participant data from SFU |
| Peer Connection | audio | Emitted when audio PCM data is received from a track | Contains PCM data and user metadata |
| | track_added | Emitted when a new media track is added | Contains track and user metadata |
| Reconnection Manager | reconnection_failed | Emitted when reconnection attempts fail | Contains reason for failure (timeout, error, etc.) |
| | reconnection_success | Emitted when reconnection succeeds | Contains strategy used and duration of reconnection process |
| Recording Manager | recording_started | Emitted when recording begins | Contains recording types, user IDs, output directory |
| | recording_stopped | Emitted when recording ends | Contains recording types, user IDs, duration |
| | user_recording_started | Emitted when recording starts for a specific user | Contains user ID, track type, filename |
| | user_recording_stopped | Emitted when recording stops for a specific user | Contains user ID, track type, filename, duration |
| | composite_recording_started | Emitted when composite recording begins | Contains track type, filename |
| | composite_recording_stopped | Emitted when composite recording ends | Contains track type, filename, duration |
| | recording_error | Emitted when recording encounters an error | Contains error type, user ID, track type, message |
| SFU Events (via Signaling) | subscriber_offer | SDP offer for establishing subscriber PeerConnection | WebRTC signaling |
| | publisher_answer | SDP answer for publisher PeerConnection | WebRTC signaling |
| | connection_quality_changed | Connection quality updates for participants | Network quality metrics |
| | audio_level_changed | Audio level changes for participants | Voice activity indicators |
| | ice_trickle | ICE candidate for connection establishment | WebRTC connectivity |
| | change_publish_quality | Quality adjustment recommendations | Bandwidth optimization |
| | dominant_speaker_changed | Current dominant speaker notification | For spotlight/focus views |
| | join_response | Acknowledgment of successful call join | Initial connection response |
| | health_check_response | Response to health check with participant count | Connection monitoring |
| | track_published | New track published by participant | Media stream notifications |
| | track_unpublished | Track no longer published | Media stream notifications |
| | error | SFU error communication | Connection/processing errors |
| | call_grants_updated | Track publishing permissions changed | Permission management |
| | go_away | Migration instruction from SFU | Load balancing/failover |
| | ice_restart | ICE restart instruction | Connection recovery |
| | pins_updated | Pinned participants list changed | UI state management |
| | call_ended | Call termination notification | Session lifecycle |
| | participant_updated | Participant data updated | User state changes |
| | participant_migration_complete | Migration process completed | Failover completion |
| | change_publish_options | Publishing configuration changes | Codec/quality updates |
| | inbound_state_notification | Inbound video state changes | Stream status updates |
| Coordinator WebSocket | Various API events | All Stream API events forwarded | Real-time API notifications |
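
Recording and network events can be handled with the same pattern. The sketch below assumes, per note 1 above, that these manager-level events are forwarded to the connection object obtained from `rtc.join()`; the handler signatures and the payload fields mentioned in the comments are assumptions based on the table above.

# Illustrative sketch: recording lifecycle and network connectivity events.
# Assumes `connection` was obtained via rtc.join() as in the Usage Example
# below, and that manager-level events are forwarded to it (see note 1).

@connection.on("recording_started")
async def on_recording_started(event):
    # Per the table: recording types, user IDs, output directory
    print("recording started:", event)

@connection.on("recording_error")
async def on_recording_error(event):
    # Per the table: error type, user ID, track type, message
    print("recording error:", event)

@connection.on("network_offline")
async def on_network_offline(event):
    # Per the table: contains a timestamp
    print("network offline:", event)

@connection.on("network_online")
async def on_network_online(event):
    print("network online:", event)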

Usage Example

from getstream.video import rtc

# Connect to a call
async with await rtc.join(call, user_id) as connection:
    # Listen for participant events
    @connection.on("participant_joined")
    async def on_participant_joined(participant):
        print(f"Participant {participant.user_id} joined")

    # Listen for audio data
    @connection.on("audio")
    async def on_audio(pcm_data, user):
        print(f"Received audio from {user}")

    # Listen for connection state changes
    @connection.on("connection.state_changed")
    async def on_connection_state_changed(event):
        print(f"Connection state: {event['old']} -> {event['new']}")

    # Wait for the connection to end
    await connection.wait()