Changes to channel-level configuration settings take at least one minute to take effect.
Channels provide a mechanism for specifying media configuration used by client MCU and SFU connections. Channels belong to an application, and are listed at the end of the application's page.
Within the context of an application, clients can have many different use cases. Channels are the best way to meet the requirements of a given use case. For example, clients might join one channel to stream audio and video from their mic and camera, and join a different channel specifically to stream screen capture video. The first channel needs to configure both audio and video encoders, but the second doesn't need an audio configuration and may specify a different video encoding configuration. Channels support many different configuration properties.
The Channel Pattern and the Default Channel
Channels have a Channel Pattern instead of a name, and this pattern supports the wildcard character '*'. When a client requests to join a channel, the Gateway checks all of the available channels and selects the first one whose pattern matches. Matches are attempted in the order specified in your application's channel listing. To change the matching order, drag the channels into the order you want.
One special case is provided here - the Default Channel. The Default Channel has a Channel Pattern equal to the wildcard character '*'. As you might imagine, this matches anything and everything. The Default Channel is provided for you when you create an application. The Default Channel is always last in the list of an application's channels, which means that it matches last. It's a catch-all, which ensures your clients always have a Channel Configuration applied to their connections. You can configure your Default Channel to meet your most general requirements.
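First-match selection over wildcard patterns can be sketched as follows. This is an illustrative model of the Gateway's matching behavior, not its actual implementation:

```python
import re

def pattern_matches(pattern, channel_id):
    # '*' matches any run of characters; everything else is literal.
    regex = "^" + ".*".join(re.escape(part) for part in pattern.split("*")) + "$"
    return re.match(regex, channel_id) is not None

def select_channel(patterns, channel_id):
    # Patterns are tried in listing order; the first match wins.
    # The Default Channel pattern '*' is always last, so it acts as a catch-all.
    for pattern in patterns:
        if pattern_matches(pattern, channel_id):
            return pattern
    return None
```

Because the Default Channel's pattern is a bare '*', it matches any channel ID that no earlier pattern claimed.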
Specify the audio codecs that you want to use in your app. By default, most audio codecs are enabled. If you disable too many codecs, the connection may fail if there are no available codecs. The supported codecs are:
You should always enable Opus. Opus can efficiently handle most use cases, from low-bandwidth calls to high-quality music. Set the maximum allowed bitrate in Kbps for Opus.
The Max Bitrate is the downstream rate: if you specify 1024, that's the maximum rate sent on the downstream connection.
Specify the video codecs that you want to use in your app and, if necessary, the maximum bitrate in Kbps for each. By default, all video codecs are enabled. If you disable too many video codecs, the connection might fail if there are no available codecs. The supported codecs are:
The Max Bitrate is the downstream rate: if you specify 1024, that's the maximum rate sent on the downstream connection.
Audio/Video Mute Configuration
Configure whether clients have their audio or video output muted by default when they join a channel and whether clients can unmute themselves. The values set for mute and unmute configurations on the console are initial values for new sessions on the channel. Use the REST API or client SDK to change these values dynamically in a live session.
To configure audio and video mute and unmute settings, set the following:
- Audio Muted: Mute the audio of clients when they join the channel.
- Audio Unmute Disabled: Disable clients' ability to unmute their audio.
- Video Muted: Mute the video of clients when they join the channel.
- Video Unmute Disabled: Disable clients' ability to unmute their video.
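These four settings interact: a client that joins muted can only recover a track if the corresponding unmute setting isn't disabled. A minimal sketch of that interplay; the field names mirror the console labels above and are illustrative, not a documented schema:

```python
# Initial mute configuration for a new session; field names are
# illustrative, not a documented LiveSwitch schema.
mute_config = {
    "audioMuted": True,            # clients join with audio muted
    "audioUnmuteDisabled": False,  # clients may unmute their audio
    "videoMuted": False,
    "videoUnmuteDisabled": True,   # clients can't re-enable video themselves
}

def client_can_unmute(cfg, kind):
    # A client can recover a muted track only if unmuting isn't disabled.
    return not cfg[kind + "UnmuteDisabled"]
```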
Set configuration options for Selective Forwarding Unit (SFU) connections. For more information on SFU connections, refer to the SFU section in the Stream Using SFU and MCU topic.
Limit SFU Upstream Bitrate
Set the maximum upstream audio bitrate and video bitrate by entering your desired values in Kbps in the Audio Input Max Bitrate and Video Input Max Bitrate fields. If not set, the default values are used.
Configure SFU Client Simulcast
LiveSwitch supports client-side simulcast, which allows multiple encodings per stream. In the Client Simulcast Config section, specify the maximum number of encodings per audio and video input.
Configure SFU Server Simulcast
LiveSwitch supports server-side simulcast configuration for upstream video connections. When enabled, if the encoding count is greater than 1, the Media Server automatically generates lower-quality simulcast video streams using the degradation preference specified. The original video stream passes through the Media Server unimpeded for the highest quality.
The Video Encoding Count minus 1 is the number of lower-quality simulcast video streams that the Media Server generates. For example, if you enter 3, LiveSwitch forwards the source high-quality video encoding and generates two lower-quality video encodings.
The supported Degradation Preferences are:
Automatic: Automatically degrade based on the video type
- Degrade the frame rate for screen video type
- Degrade resolution for camera video type
- Degrade any other video type using a balanced approach
Balanced: Degrade both the frame rate and resolution together in smaller increments
Frame Rate: Degrade frame rate which is preferred for screen video type
Resolution: Degrade resolution which is preferred for camera video type
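The encoding-count arithmetic and the Automatic preference described above can be sketched as follows. This is an illustrative model of the documented behavior, not LiveSwitch code:

```python
def generated_stream_count(video_encoding_count):
    # The Media Server forwards the original stream unimpeded and
    # generates (count - 1) lower-quality encodings.
    return max(video_encoding_count - 1, 0)

def automatic_degradation(video_type):
    # "Automatic" picks the dimension to degrade based on the video type.
    if video_type == "screen":
        return "frame-rate"   # keep text and detail sharp for screen share
    if video_type == "camera":
        return "resolution"   # keep motion smooth for camera video
    return "balanced"         # any other video type
```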
MCU stands for Multipoint Control Unit. MCU Configuration allows you to set configuration options for MCU connections. For more information on MCU connections, refer to the MCU section in the Stream Using SFU and MCU topic.
Limit MCU Upstream Bitrate
Set the maximum upstream audio bitrate and video bitrate by entering your desired values in Kbps in the Audio Input Max Bitrate and Video Input Max Bitrate properties. If not set, the default values are used.
Define Output Frame Resolution and Rate
To define the MCU video output frame resolution and rate, enter the desired value in the Video Output Width, Video Output Height, and Video Output Frame Rate properties.
Use common resolutions, because not all width and height combinations are valid for all codecs. For example, 147x933 isn't a valid resolution. Also keep your frame rate to a reasonable value, less than 30 frames per second.
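As a rough sanity check for the guidance above: many encoders require even dimensions for 4:2:0 chroma subsampling, which is one reason a resolution like 147x933 fails. That even-dimension rule is a general codec constraint assumed here for illustration, not a LiveSwitch-documented validation:

```python
def is_reasonable_output(width, height, frame_rate):
    # 4:2:0 chroma subsampling needs even dimensions on most encoders
    # (an illustrative rule; individual codecs may be stricter).
    if width % 2 or height % 2:
        return False
    # The doc recommends keeping the frame rate under 30 fps.
    return 0 < frame_rate < 30
```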
Control MCU Layout
You can specify the margin between the frames in the output video in the Video Input Margin property.
If the input video frame is smaller than the output video frame, the output video fills the space with black bars. You can enable Crop Input Videos to enlarge the input video to fit the output video frame.
The Per Connection Video Output property removes a user's upstream video from the mixed downstream video that they receive. Instead of creating a single mixed video stream for all participants, the Media Server creates a unique mixed video stream for each participant that excludes that participant's own video.
Per Connection Video Output significantly increases the CPU load of the Media Server. We don't recommend enabling it except in very specific circumstances.
You can also customize the layout by providing a layout function. Your function can return a static layout or generate a dynamic layout based on the passed parameters.
LiveSwitch's MCU supports the Skia graphics rendering engine. You can configure Skia in the Skia Config section.
- Enable Antialias to smooth jagged edges in the video image.
- Enable Dither to reduce color banding in low-resolution images.
- Select the desired Filter Quality.
Configure MCU Simulcast
LiveSwitch version 1.9.0 and higher supports Simulcast, which allows multiple encodings per stream. In the Simulcast Config section, you can specify the maximum number of encodings per audio and video input. You can also set the bitrate, frame rate, and video scaling for each video encoding.
Bandwidth Adaptation Policy
The Bandwidth Adaptation Policy property is set to Enabled by default, which enables bandwidth adaptation for SFU and MCU connections. In the LiveSwitch client SDK, the BandwidthAdaptationPolicy on a VideoStream is also Enabled by default. It can be set to Disabled, but we don't recommend doing so.
- Bandwidth adaptation is enabled by default in WebRTC-capable browsers, such as Chrome, Firefox, Edge, and Safari.
- The LiveSwitch client SDK doesn't support bandwidth adaptation for AudioStreams.
Connection Statistics Interval
The Connection Statistics Interval property determines the granularity in milliseconds at which connection statistics are sampled.
It's highly recommended not to set this value too low. Setting it to less than 1000 ms could have severe CPU performance implications.
LiveSwitch supports Media Server clustering, which allows multiple media servers to work together to handle higher traffic volumes than they would be able to alone. Clustering is enabled by default, and all media servers that are connected to a gateway or pool of gateways automatically attempt to cluster.
A clustering allocation strategy set at the channel level overrides the clustering strategy set at the deployment level. For broadcast-style conferences, we recommend setting the channel-level allocation strategy to Round Robin; for a huddle, use Spill Over within the same cluster.
Allocation Strategy - You have the following options for the allocation strategy used to assign clients to a clustered Media Server:
- Don't Override - Equivalent to unset; inherits the value from the deployment level. This is the default value.
- Round Robin - Sequentially go through each Media Server to assign the next client.
- Spill Over - Send all clients to the same Media Server until a threshold is met and then switch to the next Media Server.
- Least Connections - Send the next client to the Media Server with the fewest active connections.
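The Round Robin and Spill Over strategies above can be sketched as simple assignment functions. This is an illustrative model, not LiveSwitch's scheduler; the threshold value is a placeholder:

```python
from itertools import cycle

def round_robin(servers, clients):
    # Sequentially assign each new client to the next Media Server.
    order = cycle(servers)
    return {client: next(order) for client in clients}

def spill_over(servers, clients, threshold):
    # Fill one Media Server up to the threshold, then move to the next.
    assignment = {}
    for i, client in enumerate(clients):
        assignment[client] = servers[min(i // threshold, len(servers) - 1)]
    return assignment
```

Round Robin spreads a large broadcast audience across the cluster, while Spill Over keeps a small huddle together on one Media Server, which matches the recommendations above.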
Configure whether to enable recording in the channel.
Recording is only available for SFU or MCU connections. You can't record media for a peer-to-peer connection because participants aren't connected to the Media Server in a peer-to-peer session.
Before you begin, ensure that you have configured the recording path in your Deployment Configuration. Once your audio and video stream recording path and strategy are set up in your Deployment Configuration, enable the Audio/Video Recording setting in the Channel Configuration. If the Audio/Video Recording setting is enabled, recording will automatically start as soon as the first connection is established on a channel. These recordings can still be stopped and subsequently started through the SDK or REST API.
In the client application, clients join a channel that matches the channel pattern you configured for recording. After the last client leaves the channel, the channel closes and the recording files are saved at the path you set up in your Deployment Configuration.
If you enable the Audio/Video Recording setting on a channel that's already active, the active session isn't affected. The change takes effect after the channel closes and restarts.
You can now start or stop a recording on a channel. If you want to record a single session, you can use the SDK or REST API for a specific user, device, or connection. See Start or Stop a Recording on a Channel.
Analyze Your Media Flow
Use the Hexdump control to debug or analyze your media flow using your recordings. If you enable it, in addition to the recorded media files, the Media Server writes unencrypted audio and video Real-time Transport Protocol (RTP) packets to an ASCII hexdump file. The hexdump format is compatible with Wireshark, a tool for analyzing network traffic. Wireshark includes the text2pcap tool, which converts hexdump files into PCAP files that Wireshark can read. Wireshark then presents a graphical interface for you to analyze the media flow.
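As a sketch of the conversion step, a text2pcap invocation might be driven as follows. The file names and the UDP port pair passed to -u are placeholders; substitute the ports your session actually used:

```python
import subprocess

def text2pcap_args(hexdump_path, pcap_path):
    # text2pcap ships with Wireshark; -u wraps each packet in a dummy
    # UDP header so Wireshark can dissect the RTP payload. The port
    # numbers here are placeholders for your own session's values.
    return ["text2pcap", "-u", "5000,5000", hexdump_path, pcap_path]

if __name__ == "__main__":
    # Requires Wireshark's text2pcap on PATH.
    subprocess.run(text2pcap_args("media.hexdump", "media.pcap"), check=True)
```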
SIP Outbound Caller ID Configuration
SIP Outbound Caller ID Configuration is available in the SIP Connector documentation.
Webhooks are user-defined HTTP callbacks that can send event-triggered notifications or updates to an external web server. You can use webhooks to hook your own processing into LiveSwitch events. LiveSwitch sends out webhook event updates as an HTTP POST request with the body formatted in JSON.
To add a new webhook, click NEW and set the following:
- Name: Name of the webhook; it doesn't have to be unique.
- URL: The server to POST event information to. Both HTTP and HTTPS are supported, but HTTPS is recommended.
- Event Category: Choose which level the webhook applies to:
- Connection (application-level)
- Client (client-level)
- Channel (channel-level)
- Event Lifecycle: The events available for the given category. When triggered, events send a POST to your configured webhook.
- Batch: Turn on the control to POST multiple events in a single call. Otherwise, each event results in a separate call.
- Disabled: Turn on the control to disable all events.
For information on sample JSON object for each event, see Webhooks in the Developer Guides.
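Because LiveSwitch POSTs JSON, and Batch mode packs multiple events into one call, a receiver typically normalizes both shapes before processing. A minimal sketch; the event fields used here are hypothetical, so consult the Webhooks developer guide for the real schema:

```python
import json

def parse_webhook_body(raw):
    # With Batch enabled, one POST can carry several events; normalize
    # both the single-event and batched shapes to a list.
    data = json.loads(raw)
    return data if isinstance(data, list) else [data]
```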
Configure the Dual-Tone Multi-Frequency (DTMF) sequences that clients in the channel can use. DTMF sequences are primarily used by clients on telephone devices connected through SIP.
To configure DTMF sequences, set the following:
- Enable DTMF Control: Set whether DTMF sequences are enabled for clients.
- Finish Sequence: Set the sequence that a client enters to indicate they finished their DTMF input.
- Mute Audio Sequence: Set the sequence that a client enters to mute their audio output.
- Unmute Audio Sequence: Set the sequence that a client enters to unmute their audio output.
Valid DTMF sequences consist of the digits 0 to 9, #, and *. A, B, C, and D aren't valid DTMF characters.
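A simple validity check for the rule above; an illustrative sketch, not LiveSwitch code:

```python
DTMF_ALLOWED = set("0123456789#*")

def is_valid_dtmf_sequence(sequence):
    # Only 0-9, '#', and '*' are accepted; A-D tones aren't supported here.
    return bool(sequence) and all(ch in DTMF_ALLOWED for ch in sequence)
```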
Recording Management Override
By default, a channel uses the application recording management settings. However, you can override them to have channel-specific recording management settings.
- The maximum number of upstream connections that can be part of a muxed session depends on the number of CPU cores on the server where the Recording Muxer service is running. Each core can handle a maximum of 4 upstream connections. For example, on an 8-core machine (AWS c5.2xlarge), the maximum number of upstream connections for a muxing session is 32.
- The minimum width x height for muxed recordings is 160 x 120 (120p).
- The maximum width x height for muxed recordings is 1920 x 1080 (1080p).
- The maximum frame rate is 60fps.
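The core-count rule above reduces to simple arithmetic; the constant mirrors the documented 4-upstreams-per-core limit:

```python
# Each CPU core on the Recording Muxer host handles up to 4 upstream
# connections per muxed session.
UPSTREAMS_PER_CORE = 4

def max_muxed_upstreams(cpu_cores):
    return cpu_cores * UPSTREAMS_PER_CORE
```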
Set the number of days after which a recording expires. Expired recordings are deleted. If this value isn't set, recordings never expire.
When Mux is enabled, the multiple files from a recording session are combined into a single muxed recording file. When enabled, specify the following; each property falls back to a default value if left unset:
- Width: Width of the output video.
- Height: Height of the output video.
- Frame Rate: Frames per second of the output video.
- Background Color: Hexadecimal code for the video background color.
LiveSwitch supports sending RTMP streams to other servers that allow RTMP ingest for your application or channel.
RTMP supports up to five upstream participants.
To enable RTMP ingest from the console, do the following:
Enter the connection URL of the service you want to stream to.
You must enter the RTMP protocol you are using in the connection URL. LiveSwitch supports RTMP and RTMPS.
An RTMP URL can only be used by one active channel at a time. For more details, see RTMP Simultaneous Stream.
Enter the stream key of the service you want to stream to.
To automatically start RTMP when the channel is activated, turn the toggle switch on. To manually start RTMP through an SDK, turn the toggle switch off.
After you submit your changes to the console, do one of the following to start the RTMP stream on your channel:
- If your channel is active, restart it.
- If your channel is active but you can't restart it, start the RTMP stream with the LiveSwitch SDK.
- If your channel isn't active, activate it.
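The connection URL entered in the steps above must spell out the protocol, and LiveSwitch supports RTMP and RTMPS. A quick scheme check might look like this; a sketch, not part of the LiveSwitch SDK, and the example host is a placeholder:

```python
from urllib.parse import urlparse

def is_supported_rtmp_url(url):
    # The scheme must be explicit: rtmp:// or rtmps://.
    return urlparse(url).scheme in ("rtmp", "rtmps")
```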
RTMP Simultaneous Stream
When configuring more than one channel, ensure that you use different RTMP URLs. If you use the same RTMP URL in multiple channels, a stream only starts for the first channel that attempts to activate it. The second stream to the same RTMP URL fails to establish a connection; however, users can still connect, and the first stream isn't affected. If you need RTMP to start on the second channel, deactivate the first channel and then manually reactivate the RTMP stream on the second channel.
If your session requires clustering, the initial Media Server with an RTMP connection takes priority. Subsequent connections and activated Media Servers defer to the active RTMP stream because LiveSwitch allows only one attempt to start RTMP on a Media Server. When a channel is activated and streaming starts, RTMP is marked as enabled regardless of any RTMP stream failures. When additional users join the channel on the same Media Server, no additional attempts are made to start RTMP.
When the channel activates on a new Media Server that's unaware of any active RTMP stream from another source, the new Media Server attempts to establish an RTMP stream. This new connection fails, but RTMP is still flagged as enabled; the RTMP stream active on the initial Media Server is maintained.