    Create Custom Sources and Sinks

    Note

    This API is not available for JavaScript. Instead, you can provide any HTML5 MediaStream to the LocalMedia constructor as the audio or video parameter to be used as a custom source. Custom sinks in JavaScript are not supported.

    LiveSwitch uses the concepts of sources and sinks:

    • A source captures data to be sent to another client.
    • A sink renders data received from another client.

    LiveSwitch provides several implementations of sources and sinks for common use cases. However, you can implement your own sources and sinks for specific use cases that LiveSwitch doesn't support by default, such as streaming received video to other devices.

    Audio Formats

    Each AudioSource and AudioSink instance has an associated AudioFormat instance.

    An audio format consists of a clock rate, in Hz, and the number of audio channels. It indicates the following:

    • For a source: the format of the audio raised by the source as output.
    • For a sink: the format of the audio processed by the sink as input.

    You need to specify AudioFormat instances for your sources and sinks. For your convenience, LiveSwitch provides a number of pre-defined formats that you can use directly.

    The following code example creates several AudioFormat instances:

    • CSharp
    • Android
    • iOS
    var opusFormat = new FM.LiveSwitch.Opus.Format();
    var pcmaFormat = new FM.LiveSwitch.Pcma.Format();
    var pcmuFormat = new FM.LiveSwitch.Pcmu.Format();
    
    fm.liveswitch.opus.Format opusFormat = new fm.liveswitch.opus.Format();
    fm.liveswitch.pcma.Format pcmaFormat = new fm.liveswitch.pcma.Format();
    fm.liveswitch.pcmu.Format pcmuFormat = new fm.liveswitch.pcmu.Format();
    
    var opusFormat = FMLiveSwitchOpusFormat()
    var pcmaFormat = FMLiveSwitchPcmaFormat()
    var pcmuFormat = FMLiveSwitchPcmuFormat()
    

    Another commonly used format is the Pcm.Format class. This format specifies a generic PCM format with a custom clock rate and audio channel count.

    The following code example creates a 48,000 Hz, 2-channel audio format instance:

    • CSharp
    • Android
    • iOS
    var pcmFormat = new FM.LiveSwitch.Pcm.Format(48000, 2);
    
    fm.liveswitch.pcm.Format pcmFormat = new fm.liveswitch.pcm.Format(48000, 2);
    
    var pcmFormat = FMLiveSwitchPcmFormat(clockRate: 48000, channelCount: 2)
    

    For the complete list of pre-defined audio formats, refer to the Client API Reference.

    Video Formats

    Similar to the audio formats, each VideoSource and VideoSink instance has an associated VideoFormat instance. A video format consists of a clock rate and information about the color space of the format. For your convenience, LiveSwitch provides a number of pre-defined formats that you can use directly.

    The following code example creates the two most common video formats: RGB and I420.

    • CSharp
    • Android
    • iOS
    var rgbFormat = FM.LiveSwitch.VideoFormat.Rgb;
    var i420Format = FM.LiveSwitch.VideoFormat.I420;
    
    fm.liveswitch.VideoFormat rgbFormat = fm.liveswitch.VideoFormat.getRgb();
    fm.liveswitch.VideoFormat i420Format = fm.liveswitch.VideoFormat.getI420();
    
    var rgbFormat = FMLiveSwitchVideoFormat.rgb()
    var i420Format = FMLiveSwitchVideoFormat.i420()
    

    Custom Sources

    To create a custom audio or video source, first create a class that extends either the AudioSource or VideoSource class. Neither of these classes has a default constructor; both require you to specify an AudioFormat or a VideoFormat instance. Most custom sources are designed for a specific output format, so it's common to create a default constructor that invokes the base constructor with a pre-defined format. The following code demonstrates this.

    • CSharp
    • Android
    • iOS
    public class CustomAudioSource : FM.LiveSwitch.AudioSource
    {
        public CustomAudioSource()
            : base(new FM.LiveSwitch.Pcm.Format(48000, 2))
        {
        }
    }
    
    public class CustomVideoSource : FM.LiveSwitch.VideoSource
    {
        public CustomVideoSource()
            : base(FM.LiveSwitch.VideoFormat.Rgb)
        {
        }
    }
    
    public class CustomAudioSource extends fm.liveswitch.AudioSource {
        public CustomAudioSource() {
            super(new fm.liveswitch.pcm.Format(48000, 2))
        }
    }
    
    public class CustomVideoSource extends fm.liveswitch.VideoSource {
        public CustomVideoSource() {
            super(fm.liveswitch.VideoFormat.getRgb())
        }
    }
    
    public class CustomAudioSource : FMLiveSwitchAudioSource {
        override init() {
            super.init(outputFormat: FMLiveSwitchPcmFormat(clockRate: 48000, channelCount: 2), frameDuration: 20)
        }
    }
    
    public class CustomVideoSource : FMLiveSwitchVideoSource {
        override init() {
            super.init(outputFormat: FMLiveSwitchVideoFormat.rgb())
        }
    }
    

    For camera and screen capture, we recommend extending the CameraSourceBase or ScreenSourceBase classes instead of the VideoSource class. Using these classes allows the pipeline to optimize its default configuration for those use cases and signals the media type to other clients.

    • Extending CameraSourceBase requires an additional constructor parameter of type VideoConfig to indicate the target configuration: size and frame-rate. The DoStart implementation must then set the Config property to the actual selected camera configuration.
    • Extending ScreenSourceBase requires an additional constructor parameter of type ScreenConfig to indicate the target configuration: origin, region, and frame-rate. The DoStart implementation must then set the Config property to the actual selected screen configuration. A sketch of this pattern follows this list.
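
    The following C# sketch illustrates the CameraSourceBase pattern. Treat it as a rough outline only: the (format, config) base-constructor order, the TargetConfig property name, and the CameraDevice class are assumptions for demonstration, not confirmed LiveSwitch API.

    public class CustomCameraSource : FM.LiveSwitch.CameraSourceBase
    {
        private CameraDevice _Camera; // fictional camera wrapper

        public CustomCameraSource(FM.LiveSwitch.VideoConfig targetConfig)
            : base(FM.LiveSwitch.VideoFormat.I420, targetConfig) // assumed base-constructor order
        {
        }

        public override string Label => "CustomCameraSource";

        protected override FM.LiveSwitch.Future<object> DoStart()
        {
            var promise = new FM.LiveSwitch.Promise<object>();

            // Open the camera as close to the target configuration as possible,
            // then report the configuration that was actually selected.
            _Camera = CameraDevice.Open(TargetConfig); // fictional helper; TargetConfig name assumed
            Config = _Camera.SelectedConfig;           // fictional property returning a VideoConfig

            promise.Resolve(null);
            return promise;
        }

        protected override FM.LiveSwitch.Future<object> DoStop()
        {
            var promise = new FM.LiveSwitch.Promise<object>();

            _Camera.Close(); // fictional
            _Camera = null;

            promise.Resolve(null);
            return promise;
        }
    }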

    Next, override the Label property. This is an accessor that returns a string that identifies the type of source. The value you provide here is only for diagnostic purposes and doesn't affect the output of an audio or video source.

    • CSharp
    • Android
    • iOS
    public class CustomAudioSource : FM.LiveSwitch.AudioSource
    {
        public override string Label => "CustomAudioSource";
    }
    
    public class CustomVideoSource : FM.LiveSwitch.VideoSource
    {
        public override string Label => "CustomVideoSource";
    }
    
    public class CustomAudioSource extends fm.liveswitch.AudioSource {
        @Override
        public String getLabel() {
            return "CustomAudioSource";
        }
    }
    
    public class CustomVideoSource extends fm.liveswitch.VideoSource {
        @Override
        public String getLabel() {
            return "CustomVideoSource";
        }
    }
    
    public class CustomAudioSource : FMLiveSwitchAudioSource {
        override func label() -> String {
            return "CustomAudioSource"
        }
    }
    
    public class CustomVideoSource : FMLiveSwitchVideoSource {
        override func label() -> String {
            return "CustomVideoSource"
        }
    }
    

    Finally, you must implement the DoStart and DoStop methods. Usually, these methods follow one of two patterns:

    • Manage an event handler on an interface that captures audio or video data.
    • Manage a separate background thread that generates audio or video data (a sketch of this pattern follows the note below).

    With both patterns, the source must invoke the RaiseFrame method when data is available. RaiseFrame is a protected method that signals to components in the media stack that new data is available.

    Note that the DoStart and DoStop methods are asynchronous and return an FM.LiveSwitch.Future. For the sake of simplicity, these examples are synchronous and resolve the promise immediately. In practice, your implementation is likely to be more complex.
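
    The capture examples in the next sections use the event-based pattern. For the thread-based pattern, the following C# sketch is a minimal illustration: it raises 20 ms frames of silence using the frame-wrapping steps described under Raise Audio Frame below. The timing loop and buffer size are illustrative choices, not requirements.

    public class GeneratedAudioSource : FM.LiveSwitch.AudioSource
    {
        private volatile bool _Running;
        private System.Threading.Thread _Thread;

        public GeneratedAudioSource()
            : base(new FM.LiveSwitch.Pcm.Format(48000, 2))
        {
        }

        public override string Label => "GeneratedAudioSource";

        protected override FM.LiveSwitch.Future<object> DoStart()
        {
            var promise = new FM.LiveSwitch.Promise<object>();

            _Running = true;
            _Thread = new System.Threading.Thread(() =>
            {
                // 20 ms of 16-bit stereo PCM at 48,000 Hz: 48 samples/ms * 20 ms * 2 channels * 2 bytes = 3,840 bytes.
                var data = new byte[3840]; // silence
                while (_Running)
                {
                    var dataBuffer = FM.LiveSwitch.DataBuffer.Wrap(data, true); // little-endian
                    var audioBuffer = new FM.LiveSwitch.AudioBuffer(dataBuffer, OutputFormat);
                    RaiseFrame(new FM.LiveSwitch.AudioFrame(20, audioBuffer));
                    System.Threading.Thread.Sleep(20);
                }
            });
            _Thread.Start();

            promise.Resolve(null);
            return promise;
        }

        protected override FM.LiveSwitch.Future<object> DoStop()
        {
            var promise = new FM.LiveSwitch.Promise<object>();

            _Running = false;
            _Thread.Join();
            _Thread = null;

            promise.Resolve(null);
            return promise;
        }
    }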

    Capture Audio

    The following code examples show how to capture audio using the event-based pattern. These code examples use a fictional AudioCaptureObject class created for demonstration purposes.

    Raise Audio Frame

    To capture audio, first create an instance of the AudioCaptureObject, and then add an event handler that is raised whenever new audio data is available. The event handler has the following three parameters:

    • data: A byte array that contains raw audio data over a period of time.

    • duration: The time in milliseconds that the data parameter represents. You must calculate the duration based on your implementation. If the audio source is raising uncompressed (PCM) audio data, you can infer the duration directly from the length of data and the clock rate and channel count of the output audio format. For example, 3,840 bytes of 16-bit, stereo, 48,000 Hz PCM represents 3840 / (48 samples/ms × 2 channels × 2 bytes) = 20 ms of audio. You can use SoundUtility, which includes a number of static helper methods, to perform this calculation. For example:

      var duration = SoundUtility.CalculateDuration(data.Length, OutputFormat.Config);
      
    • systemTimestamp: A timestamp measured in ticks; 10,000 ticks are equivalent to 1 millisecond. This timestamp must come from the same system clock used by the VideoSource object that the AudioSource object synchronizes with. To synchronize audio with a video source (for example, for lip sync), set the AudioSource object's OutputSynchronizable property to true in the constructor; a short sketch follows the note below.

    Note

    By default, the OutputSynchronizable property of a VideoSource object is set to true. VideoSource uses ManagedStopwatch.GetTimestamp() to set SystemTimestamp values automatically on raised VideoFrame instances.
    Depending on the platform, ManagedStopwatch gets timestamps from the following places:

    • C#: System.Diagnostics.Stopwatch.GetTimestamp(). Uses Stopwatch.Frequency to convert the timestamp to normalized ticks.
    • Android: System.nanoTime(). Converted to ticks where 1 tick is 100 nanoseconds.
    • iOS: mach_absolute_time(). Converted to ticks, where 1 tick is 100 nanoseconds.
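
    As a minimal C# sketch of this setup (the FM.LiveSwitch namespace for ManagedStopwatch and the private helper below are assumptions for illustration):

    public class CustomAudioSource : FM.LiveSwitch.AudioSource
    {
        public CustomAudioSource()
            : base(new FM.LiveSwitch.Pcm.Format(48000, 2))
        {
            // Allow the pipeline to synchronize this source with a video source (lip sync).
            OutputSynchronizable = true;
        }

        public override string Label => "CustomAudioSource";

        private long GetSystemTimestamp()
        {
            // Take timestamps from the same clock that VideoSource uses for its SystemTimestamp values.
            return FM.LiveSwitch.ManagedStopwatch.GetTimestamp();
        }
    }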

    You can raise an audio frame with these three parameters as follows:

    1. Wrap the raw audio data in an instance of FM.LiveSwitch.DataBuffer.

      Important

      LiveSwitch only supports signed 16-bit (short), little-endian samples for uncompressed (PCM) audio data. Other PCM formats, such as 32-bit floating point or big-endian data, must be converted to signed 16-bit, little-endian samples in the source.

    2. Wrap the data buffer in an instance of FM.LiveSwitch.AudioBuffer, which also requires you to specify the FM.LiveSwitch.AudioFormat of the audio data. You can use the OutputFormat property of your audio source to retrieve this.

    3. Wrap the audio buffer in an instance of FM.LiveSwitch.AudioFrame and provide the audio duration.

    4. Set the AudioFrame's SystemTimestamp.

    5. Invoke RaiseFrame on this new AudioFrame instance.

    • CSharp
    • Android
    • iOS
    public class CustomAudioSource : FM.LiveSwitch.AudioSource
    {
        private AudioCaptureObject _Capture;
    
        protected override FM.LiveSwitch.Future<object> DoStart()
        {
            var promise = new FM.LiveSwitch.Promise<object>();
    
            _Capture = new AudioCaptureObject();
            _Capture.AudioDataAvailable += (int duration, byte[] data, long systemTimestamp) =>
            {
                // This sets the `littleEndian` flag to true.
                var dataBuffer = FM.LiveSwitch.DataBuffer.Wrap(data, true);
                var audioBuffer = new FM.LiveSwitch.AudioBuffer(dataBuffer, this.OutputFormat);
                var audioFrame = new FM.LiveSwitch.AudioFrame(duration, audioBuffer);
                audioFrame.SystemTimestamp = systemTimestamp;
    
                this.RaiseFrame(audioFrame);
        };
    
            promise.Resolve(null);
            return promise;
        }
    }
    
    public class CustomAudioSource extends fm.liveswitch.AudioSource {
        private AudioCaptureObject _capture;
    
        @Override
        public fm.liveswitch.Future<Object> doStart() {
            fm.liveswitch.Promise<Object> promise = new fm.liveswitch.Promise<Object>();
    
            _capture = new AudioCaptureObject();
        _capture.addOnAudioDataAvailable((int duration, byte[] data, long systemTimestamp) -> {
                // This sets the `littleEndian` flag to true.
                fm.liveswitch.DataBuffer dataBuffer = fm.liveswitch.DataBuffer.wrap(data, true);
                fm.liveswitch.AudioBuffer audioBuffer = new fm.liveswitch.AudioBuffer(dataBuffer, this.getOutputFormat());
                fm.liveswitch.AudioFrame audioFrame = new fm.liveswitch.AudioFrame(duration, audioBuffer);
                audioFrame.setSystemTimestamp(systemTimestamp);
    
                this.raiseFrame(audioFrame);
            });
    
            promise.resolve(null);
            return promise;
        }
    }
    
    public class CustomAudioSource : FMLiveSwitchAudioSource {
        var _capture: AudioCaptureObject!

        public override func doStart() -> FMLiveSwitchFuture! {
            let promise = FMLiveSwitchPromise()

            _capture = AudioCaptureObject()
            _capture.addOnAudioDataAvailable { (duration: Int, data: NSData, systemTimestamp: Int64) in
                let dataBuffer: FMLiveSwitchDataBuffer = FMLiveSwitchDataBuffer.wrap(data: data)
                let audioBuffer: FMLiveSwitchAudioBuffer = FMLiveSwitchAudioBuffer(dataBuffer: dataBuffer, format: self.outputFormat())
                let audioFrame: FMLiveSwitchAudioFrame = FMLiveSwitchAudioFrame(duration: duration, buffer: audioBuffer)
                audioFrame.setSystemTimestamp(systemTimestamp)

                self.raiseFrame(audioFrame)
            }

            promise?.resolve(withResult: nil)
            return promise
        }
    }
    

    Stop Audio Source

    To stop an audio source instance, either destroy the capture interface you were using or remove any event handlers you attached. The examples below destroy the capture object; a sketch of the event-handler approach follows them.

    • CSharp
    • Android
    • iOS
    public class CustomAudioSource : FM.LiveSwitch.AudioSource
    {
        protected override FM.LiveSwitch.Future<object> DoStop()
        {
            var promise = new FM.LiveSwitch.Promise<object>();
    
            _Capture.Destroy();
            _Capture = null;
    
            promise.Resolve(null);
            return promise;
        }
    }
    
    public class CustomAudioSource extends fm.liveswitch.AudioSource {
        @Override
        public fm.liveswitch.Future<Object> doStop() {
            fm.liveswitch.Promise<Object> promise = new fm.liveswitch.Promise<Object>();
    
            _capture.destroy();
            _capture = null;
    
            promise.resolve(null);
            return promise;
        }
    }
    
    public class CustomAudioSource : FMLiveSwitchAudioSource {
        public override func doStop() -> FMLiveSwitchFuture! {
            let promise = FMLiveSwitchPromise()

            _capture.destroy()
            _capture = nil

            promise?.resolve(withResult: nil)
            return promise
        }
    }
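
    If your capture interface exposes events rather than a destroy method, you can detach the handler you attached in DoStart instead. The following C# sketch reuses the fictional AudioCaptureObject from above; the delegate shape is an assumption for demonstration.

    public class CustomAudioSource : FM.LiveSwitch.AudioSource
    {
        private AudioCaptureObject _Capture;
        private System.Action<int, byte[], long> _Handler; // assumed shape of the fictional event

        protected override FM.LiveSwitch.Future<object> DoStart()
        {
            var promise = new FM.LiveSwitch.Promise<object>();

            _Capture = new AudioCaptureObject();
            _Handler = (duration, data, systemTimestamp) =>
            {
                // Wrap the data and call RaiseFrame as shown in Raise Audio Frame above.
            };
            _Capture.AudioDataAvailable += _Handler;

            promise.Resolve(null);
            return promise;
        }

        protected override FM.LiveSwitch.Future<object> DoStop()
        {
            var promise = new FM.LiveSwitch.Promise<object>();

            // Detach the handler rather than destroying the capture object.
            _Capture.AudioDataAvailable -= _Handler;
            _Handler = null;

            promise.Resolve(null);
            return promise;
        }
    }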
    

    Capture Video

    The following examples demonstrate how to capture video using the event-based pattern and how to stop the source to release resources. A fictional VideoCaptureObject class is used for demonstration purposes.

    Raise Video Frame

    You need to specify the width and height of the video frames.

    To raise a video frame:

    1. Wrap the raw video data in an instance of FM.LiveSwitch.DataBuffer.
    2. Wrap the data buffer in an instance of FM.LiveSwitch.VideoBuffer. It requires you to specify the FM.LiveSwitch.VideoFormat of the data as well as the width and height of the video. You can use the OutputFormat property of your video source.
    3. Set the stride values describing the video data in the data buffer.
    4. Wrap the video buffer in an instance of FM.LiveSwitch.VideoFrame and invoke RaiseFrame.
    • CSharp
    • Android
    • iOS
    public class CustomVideoSource : FM.LiveSwitch.VideoSource
    {
        private VideoCaptureObject _Capture;
    
        protected override FM.LiveSwitch.Future<object> DoStart()
        {
            var promise = new FM.LiveSwitch.Promise<object>();
    
            _Capture = new VideoCaptureObject();
        _Capture.VideoDataAvailable += (int width, int height, byte[] data) =>
        {
            var dataBuffer = FM.LiveSwitch.DataBuffer.Wrap(data);
            var videoBuffer = new FM.LiveSwitch.VideoBuffer(width, height, dataBuffer, this.OutputFormat);
            // The stride values are placeholders supplied by your capture interface
            // (strides describe the row layout of each plane for planar formats such as I420).
            videoBuffer.Strides = new[] { yPlaneStride, uPlaneStride, vPlaneStride };
            var videoFrame = new FM.LiveSwitch.VideoFrame(videoBuffer);

            this.RaiseFrame(videoFrame);
        };
    
            promise.Resolve(null);
            return promise;
        }
    }
    
    public class CustomVideoSource extends fm.liveswitch.VideoSource {
        private VideoCaptureObject _capture;
    
        @Override
        public fm.liveswitch.Future<Object> doStart() {
            fm.liveswitch.Promise<Object> promise = new fm.liveswitch.Promise<Object>();
    
            _capture = new VideoCaptureObject();
            _capture.addOnVideoDataAvailable((int width, int height, byte[] data) -> {
                fm.liveswitch.DataBuffer dataBuffer = fm.liveswitch.DataBuffer.wrap(data);
                fm.liveswitch.VideoBuffer videoBuffer = new fm.liveswitch.VideoBuffer(width, height, dataBuffer, this.getOutputFormat());
                videoBuffer.setStrides(new int[] { yPlaneStride, uPlaneStride, vPlaneStride });
            fm.liveswitch.VideoFrame videoFrame = new fm.liveswitch.VideoFrame(videoBuffer);
    
                this.raiseFrame(videoFrame);
            });
    
            promise.resolve(null);
            return promise;
        }
    }
    
    public class CustomVideoSource : FMLiveSwitchVideoSource {
        var _capture: VideoCaptureObject!

        public override func doStart() -> FMLiveSwitchFuture! {
            let promise = FMLiveSwitchPromise()

            _capture = VideoCaptureObject()
            _capture.addOnVideoDataAvailable { (width: Int, height: Int, data: NSData) in
                let dataBuffer: FMLiveSwitchDataBuffer = FMLiveSwitchDataBuffer.wrap(data: data)
                let videoBuffer: FMLiveSwitchVideoBuffer = FMLiveSwitchVideoBuffer(width: width, height: height, dataBuffer: dataBuffer, format: self.outputFormat())
                // The stride values are placeholders supplied by your capture interface.
                videoBuffer.setStrides(NSMutableArray(array: [yPlaneStride, uPlaneStride, vPlaneStride]))
                let videoFrame: FMLiveSwitchVideoFrame = FMLiveSwitchVideoFrame(buffer: videoBuffer)

                self.raiseFrame(videoFrame)
            }

            promise?.resolve(withResult: nil)
            return promise
        }
    }
    

    Stop Video Source

    To stop a video source, simply release or destroy any resources you were using.

    • CSharp
    • Android
    • iOS
    public class CustomVideoSource : FM.LiveSwitch.VideoSource
    {
        protected override FM.LiveSwitch.Future<object> DoStop()
        {
            var promise = new FM.LiveSwitch.Promise<object>();
    
            _Capture.Destroy();
            _Capture = null;
    
            promise.Resolve(null);
            return promise;
        }
    }
    
    public class CustomVideoSource extends fm.liveswitch.VideoSource {
        @Override
        public fm.liveswitch.Future<Object> doStop() {
            fm.liveswitch.Promise<Object> promise = new fm.liveswitch.Promise<Object>();
    
            _capture.destroy();
            _capture = null;
    
            promise.resolve(null);
            return promise;
        }
    }
    
    public class CustomVideoSource : FMLiveSwitchVideoSource {
        public override func doStop() -> FMLiveSwitchFuture! {
            let promise = FMLiveSwitchPromise()

            _capture.destroy()
            _capture = nil

            promise?.resolve(withResult: nil)
            return promise
        }
    }
    

    Raise Frames

    You should raise audio frames as soon as they become available from the underlying device or API. LiveSwitch automatically handles any gaps in the audio stream.

    You should raise video frames as soon as they become available from the underlying device or API. LiveSwitch automatically handles video frames that are missed due to congestion or device load. If you implement your own queue of video frames, we recommend discarding frames rather than increasing the queue length: a frame-rate reduction is generally preferable to a delivery delay.
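
    As an illustration only (this is not LiveSwitch API), a bounded queue that drops frames instead of growing might look like the following C# sketch:

    public class DroppingFrameQueue
    {
        private readonly System.Collections.Generic.Queue<FM.LiveSwitch.VideoFrame> _Queue =
            new System.Collections.Generic.Queue<FM.LiveSwitch.VideoFrame>();
        private readonly int _Capacity;

        public DroppingFrameQueue(int capacity)
        {
            _Capacity = capacity;
        }

        // Returns false if the frame was dropped because the queue is already full.
        public bool TryEnqueue(FM.LiveSwitch.VideoFrame frame)
        {
            lock (_Queue)
            {
                if (_Queue.Count >= _Capacity)
                {
                    // Prefer reducing the frame rate over adding delivery delay.
                    return false;
                }
                _Queue.Enqueue(frame);
                return true;
            }
        }

        // Returns null if no frame is queued.
        public FM.LiveSwitch.VideoFrame TryDequeue()
        {
            lock (_Queue)
            {
                return _Queue.Count > 0 ? _Queue.Dequeue() : null;
            }
        }
    }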

    Custom Sinks

    As with custom sources, to create a custom audio or video sink, first create a class that extends either the AudioSink or VideoSink class. A sink takes on the output format of whatever source or pipe is attached to it, so you only need to specify an AudioFormat or VideoFormat if you need to restrict the input format.

    The following code is a simple example of how to create a custom sink.

    • CSharp
    • Android
    • iOS
    public class CustomAudioSink : FM.LiveSwitch.AudioSink
    {
        public CustomAudioSink()
            : base(new FM.LiveSwitch.Pcm.Format(48000, 2))
        {
        }
    }
    
    public class CustomVideoSink : FM.LiveSwitch.VideoSink
    {
        public CustomVideoSink()
        : base(FM.LiveSwitch.VideoFormat.Rgb)
        {
        }
    }
    
    public class CustomAudioSink extends fm.liveswitch.AudioSink {
        public CustomAudioSink() {
            super(new fm.liveswitch.pcm.Format(48000, 2))
        }
    }
    
    public class CustomVideoSink extends fm.liveswitch.VideoSink {
        public CustomVideoSink() {
            super(fm.liveswitch.VideoFormat.getRgb())
        }
    }
    
    public class CustomAudioSink : FMLiveSwitchAudioSink {
        override init() {
            super.init(outputFormat: FMLiveSwitchPcmFormat(clockRate: 48000, channelCount: 2))
        }
    }
    
    public class CustomVideoSink : FMLiveSwitchVideoSink {
        override init() {
            super.init(outputFormat: FMLiveSwitchVideoFormat.rgb())
        }
    }
    

    Sinks have a Label property that is used for diagnostic purposes. It doesn't affect the data that flows into your sink.

    • CSharp
    • Android
    • iOS
    public class CustomAudioSink : FM.LiveSwitch.AudioSink
    {
        public override string Label => "CustomAudioSink";
    }
    
    public class CustomVideoSink : FM.LiveSwitch.VideoSink
    {
        public override string Label => "CustomVideoSink";
    }
    
    public class CustomAudioSink extends fm.liveswitch.AudioSink {
        @Override
        public String getLabel() {
            return "CustomAudioSink";
        }
    }
    
    public class CustomVideoSink extends fm.liveswitch.VideoSink {
        @Override
        public String getLabel() {
            return "CustomVideoSink";
        }
    }
    
    public class CustomAudioSink : FMLiveSwitchAudioSink {
        override func label() -> String {
            return "CustomAudioSink"
        }
    }
    
    public class CustomVideoSink : FMLiveSwitchVideoSink {
        override func label() -> String {
            return "CustomVideoSink"
        }
    }
    

    Unlike sources, sinks have no DoStart or DoStop methods because they don't follow a start/stop pattern. Instead, whenever an audio or video frame is available, the media pipeline invokes the sink's DoProcessFrame method. When a sink is instantiated, it is assumed to be ready to receive frames.

    Tip

    Sinks can lazy-initialize, for example by initializing themselves in the first DoProcessFrame invocation rather than in the constructor.
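
    As a minimal C# sketch of this pattern (AudioRenderObject is the fictional renderer used in the Render Audio section below):

    public class CustomAudioSink : FM.LiveSwitch.AudioSink
    {
        private AudioRenderObject _Render; // created lazily on the first frame

        public override string Label => "CustomAudioSink";

        public override void DoProcessFrame(FM.LiveSwitch.AudioFrame frame, FM.LiveSwitch.AudioBuffer buffer)
        {
            if (_Render == null)
            {
                // First frame: initialize the renderer here instead of in the constructor.
                _Render = new AudioRenderObject();
            }
            _Render.PlayAudio(frame.Duration, buffer.DataBuffer.Data);
        }

        public override void DoDestroy()
        {
            if (_Render != null)
            {
                _Render.Destroy();
                _Render = null;
            }
        }
    }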

    The last method that sinks must implement is DoDestroy, which cleans up any resources that are still in use. The DoProcessFrame and DoDestroy methods for a sink are synchronous, and don't return an FM.LiveSwitch.Promise.

    Note

    LiveSwitch guarantees that DoProcessFrame is only called once at a time and is thread-safe. LiveSwitch also guarantees that DoDestroy is never called concurrently with DoProcessFrame.

    Render Audio

    To demonstrate how to play received audio data, the example code below uses a fictional AudioRenderObject class and abstracts away many of the details of audio playback. In your implementation, you must handle any resampling (upsampling or downsampling) of the audio yourself.

    There are many properties that are accessible from the AudioFrame and AudioBuffer classes. This example focuses on retrieving the duration and data properties. Assume the AudioRenderObject has a PlayAudio method that takes a duration parameter and a data parameter. You must retrieve these values from either the audio buffer or the audio frame:

    1. Retrieve the duration of the audio frame by accessing the Duration property of the AudioFrame parameter.
    2. Retrieve the DataBuffer property of the AudioBuffer parameter, and then retrieve the raw audio data through the Data property of this DataBuffer instance.
    3. Pass these values into the AudioRenderObject (or whatever interface you are using for this sink).
    • CSharp
    • Android
    • iOS
    public class CustomAudioSink : FM.LiveSwitch.AudioSink
    {
        private AudioRenderObject _Render = new AudioRenderObject();
    
        public override void DoProcessFrame(FM.LiveSwitch.AudioFrame frame, FM.LiveSwitch.AudioBuffer buffer)
        {
            var duration = frame.Duration;
    
            var dataBuffer = buffer.DataBuffer;
            var data = dataBuffer.Data;
    
            _Render.PlayAudio(duration, data);
        }
    }
    
    public class CustomAudioSink extends fm.liveswitch.AudioSink {
        private AudioRenderObject _render = new AudioRenderObject();
    
        @Override
        public void doProcessFrame(fm.liveswitch.AudioFrame frame, fm.liveswitch.AudioBuffer buffer) {
        double duration = frame.getDuration();
    
            fm.liveswitch.DataBuffer dataBuffer = buffer.getDataBuffer();
            byte[] data = dataBuffer.getData();
    
            _render.playAudio(duration, data);
        }
    }
    
    public class CustomAudioSink : FMLiveSwitchAudioSink {
        var _render: AudioRenderObject! = AudioRenderObject()

        func doProcessFrame(frame: FMLiveSwitchAudioFrame, buffer: FMLiveSwitchAudioBuffer) {
            let duration = frame.duration()

            let dataBuffer: FMLiveSwitchDataBuffer = buffer.dataBuffer()
            let data: NSData = dataBuffer.data()

            _render.playAudio(duration: duration, data: data)
        }
    }
    

    Render Video

    Rendering video is similar to rendering audio. A fictional VideoRenderObject class is used for demonstration purposes.

    The example below demonstrates how to retrieve the width, height, and raw data of the video in the DoProcessFrame method, with the following steps:

    1. Retrieve the Width and Height properties of the video buffer parameter.
    2. Retrieve the DataBuffer property of the same parameter, and then retrieve the raw video data through the Data property of the DataBuffer instance.
    3. Pass these values into the VideoRenderObject (or whatever interface you are using for this sink).
    4. Finally, implement DoDestroy to release any resources you used.
    • CSharp
    • Android
    • iOS
    public class CustomVideoSink : FM.LiveSwitch.VideoSink
    {
        private VideoRenderObject _Render = new VideoRenderObject();
    
        public override void DoProcessFrame(FM.LiveSwitch.VideoFrame frame, FM.LiveSwitch.VideoBuffer buffer)
        {
            var width = buffer.Width;
            var height = buffer.Height;
    
            var dataBuffer = buffer.DataBuffer;
            var data = dataBuffer.Data;
    
            _Render.PlayVideo(width, height, data);
        }
    
        public override void DoDestroy()
        {
            _Render.Destroy();
            _Render = null;
        }
    }
    
    public class CustomVideoSink extends fm.liveswitch.VideoSink {
        private VideoRenderObject _render = new VideoRenderObject();
    
        @Override
        public void doProcessFrame(fm.liveswitch.VideoFrame frame, fm.liveswitch.VideoBuffer buffer) {
            int width = buffer.getWidth();
        int height = buffer.getHeight();
    
            fm.liveswitch.DataBuffer dataBuffer = buffer.getDataBuffer();
            byte[] data = dataBuffer.getData();
    
            _render.playVideo(width, height, data);
        }
    
        @Override
        public void doDestroy() {
            _render.destroy();
            _render = null;
        }
    }
    
    public class CustomVideoSink : FMLiveSwitchVideoSink {
        var _render: VideoRenderObject! = VideoRenderObject()

        func doProcessFrame(frame: FMLiveSwitchVideoFrame, buffer: FMLiveSwitchVideoBuffer) {
            let width: Int = buffer.width()
            let height: Int = buffer.height()

            let dataBuffer: FMLiveSwitchDataBuffer = buffer.dataBuffer()
            let data: NSData = dataBuffer.data()

            _render.playVideo(width: width, height: height, data: data)
        }

        func doDestroy() {
            _render.destroy()
            _render = nil
        }
    }
    