Regarding the YuvSinkListener Protocol (iOS)

I’m using the YuvSinkListener protocol from the iOS GroundSDK to fetch live video frames from a Parrot Anafi 4K. My goal is to stream the drone’s live video feed to a cloud dashboard, which requires a CVPixelBuffer or a CVImageBuffer. Unfortunately, YuvSinkListener does not deliver either of those objects; it delivers an SdkCoreFrame instead. Using its member pointer “data” (`@property (nonatomic, assign, readonly) const uint8_t * _Nullable data`), I was able to generate a CGImage and, from that, a CVPixelBuffer. But the CGImage I’m generating comes out clipped and in black and white.

This is how I did it:

  1. I converted the “data” pointer to an UnsafeMutablePointer<UInt8>.
  2. Next I wrapped it in an UnsafeMutableBufferPointer<UInt8> of length frame.len.
  3. Next I generated a [UInt8] array from it, containing 1,388,224 values.
  4. Using this array I created a CGImage and, from that, a CVPixelBuffer.
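
Steps 1–3 can be sketched in isolation like this (with a stand-in byte array instead of the SDK frame, since frame.data and frame.len only exist inside the listener callback):

```swift
import Foundation

// Stand-in for the bytes behind frame.data; in the real code this pointer
// comes from the SdkCoreFrame delivered to frameReady.
var rawBytes: [UInt8] = [10, 20, 30, 40]

rawBytes.withUnsafeMutableBufferPointer { buf in
    // Step 1: a mutable pointer to the first byte (stands in for frame.data)
    let some: UnsafeMutablePointer<UInt8> = buf.baseAddress!
    // Step 2: a buffer pointer covering the frame length (stands in for frame.len)
    let after = UnsafeMutableBufferPointer<UInt8>(start: some, count: buf.count)
    // Step 3: copy the bytes into a Swift array
    let pixels = Array(after)
    print(pixels) // [10, 20, 30, 40]
}
```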

Image: parrot.png - Google Drive

My code:

class StreamListener: NSObject, YuvSinkListener {
var opentokConfig: OTConfiguration?
var dele: transfer!
var buffer: CVPixelBuffer!
var videoFrame = OTVideoFrame(format: OTVideoFormat(argbWithWidth: 0, height: 0))
var shouldInitiateVstream: Bool = true
var session: OTSession?
var publisher: OTPublisher?
var subscriber: OTSubscriber?
var videoCaptureConsumer: OTVideoCaptureConsumer?
var capturer: ScreenCapturer?

func frameReady(sink: StreamSink, frame: SdkCoreFrame) {
    guard let data = frame.data else { return }
    let some = UnsafeMutablePointer<UInt8>(mutating: data)
    let after = UnsafeMutableBufferPointer<UInt8>(start: some, count: frame.len)
    if let temp = imageFromPixelValues(pixelValues: Array(after), width: 1280, height: 720, ptr: some) {
        self.createBuffer(cgImage: temp)
        // dele.sendImg(img: UIImage(cgImage: temp))
    }
}

func didStart(sink: StreamSink) {
    print("Frame Started")
}

func didStop(sink: StreamSink) {
    print("Frame Stopped")
}

func imageFromPixelValues(pixelValues: [UInt8]?, width: Int, height: Int, ptr: UnsafeMutablePointer<UInt8>) -> CGImage? {
    var imageRef: CGImage?
    if pixelValues != nil {
        let colorSpaceRef = CGColorSpaceCreateDeviceGray()
        let bitsPerComponent = 8
        let bytesPerPixel = 1
        let bitsPerPixel = bytesPerPixel * bitsPerComponent
        let bytesPerRow =  bytesPerPixel * width
        let totalBytes = height * bytesPerRow

        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
        let releaseMaskImagePixelData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
            // No-op: 'CGDataProviderRelease' is unavailable — Core Foundation objects are automatically memory managed
        }
        let providerRef = CGDataProvider(dataInfo: nil, data: ptr, size: totalBytes, releaseData: releaseMaskImagePixelData)
        imageRef = CGImage(width: width,
                           height: height,
                           bitsPerComponent: bitsPerComponent,
                           bitsPerPixel: bitsPerPixel,
                           bytesPerRow: bytesPerRow,
                           space: colorSpaceRef,
                           bitmapInfo: bitmapInfo,
                           provider: providerRef!,
                           decode: nil,
                           shouldInterpolate: true,
                           intent: CGColorRenderingIntent.defaultIntent)
    }
    return imageRef
}

func createBuffer(cgImage: CGImage) {
    if buffer == nil {
        guard let frameFormat = self.videoFrame.format else { return }
        frameFormat.bytesPerRow.addObjects(from: [cgImage.width * 4])
        frameFormat.imageWidth = UInt32(cgImage.width)
        frameFormat.imageHeight = UInt32(cgImage.height)
        let frameSize = CGSize(width: cgImage.width, height: cgImage.height)
        let options: [String: Bool] = [
            kCVPixelBufferCGImageCompatibilityKey as String: false,
            kCVPixelBufferCGBitmapContextCompatibilityKey as String: false
        ]
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(frameSize.width),
                                         Int(frameSize.height),
                                         kCVPixelFormatType_32ARGB,
                                         options as CFDictionary,
                                         &self.buffer)
        assert(status == kCVReturnSuccess && self.buffer != nil)
    } else {
        let frameSize = CGSize(width: cgImage.width, height: cgImage.height)
        CVPixelBufferLockBaseAddress(self.buffer!, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
        let pxdata = CVPixelBufferGetBaseAddress(self.buffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata,
                                width: Int(frameSize.width),
                                height: Int(frameSize.height),
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(self.buffer!),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
        context?.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        CVPixelBufferUnlockBaseAddress(self.buffer!, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
    }
    if shouldInitiateVstream {
        self.shouldInitiateVstream = false
    } else {
        // ...
    }
}
}

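For reference, the grayscale CGImage step in imageFromPixelValues boils down to this standalone sketch (made-up 2×2 test bytes in place of the drone frame):

```swift
import CoreGraphics
import Foundation

// Made-up 2x2 grayscale test pattern: black, dark gray, light gray, white.
var pixels: [UInt8] = [0, 85, 170, 255]
let width = 2, height = 2

pixels.withUnsafeMutableBytes { raw in
    // The provider wraps the bytes without copying; the release callback is a no-op.
    guard let provider = CGDataProvider(dataInfo: nil,
                                        data: raw.baseAddress!,
                                        size: raw.count,
                                        releaseData: { _, _, _ in }) else { return }
    let image = CGImage(width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bitsPerPixel: 8,
                        bytesPerRow: width,   // 1 byte per pixel, no row padding
                        space: CGColorSpaceCreateDeviceGray(),
                        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                        provider: provider,
                        decode: nil,
                        shouldInterpolate: false,
                        intent: .defaultIntent)
    print(image?.width ?? 0, image?.height ?? 0)
}
```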
Can you provide a tutorial on how to use the YuvSinkListener protocol, or tell me where I’m going wrong in my code?
Shubham Kamdi | iOS Developer