Construct cv::Mat from the frame Bebop sends to iOS (ARCONTROLLER_Frame_t)

Product: [Bebop2]
Product version: [X.X.X] (not sure)
SDK version: [3.11.0]
Use of libARController: [YES]
SDK platform: [iOS]
Reproducible with the official app: [Not Tried]


I am trying to run an OpenCV algorithm on the Bebop 2 video stream. I am currently building on top of the prebuilt iOS SDK, and I want to convert every frame into the OpenCV format cv::Mat. Where should I do the conversion? Should I convert the ARCONTROLLER_Frame_t (inside BebopVC.m) to cv::Mat, or the sampleBufferRef (inside BebopVideoView.m) to cv::Mat? How can I tell which one holds the raw image data that I can convert?

@Djavan I remember you helped me last time. Thank you very much and I hope you can help me solving this problem. :slight_smile:

Best regards,

please read this thread: BebopVideoView to Mat


Sorry, I’m really not familiar with Android and I don’t have an Android environment to try this out. I have successfully gotten the image data inside the callback. Do you have a suggestion on how to get the data out of the callback block? I want to pass the UIImage out of the callback and assign it to a property of BebopVideoView called _uiImage.

void openCVImageDecompressionCallback(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status, VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer, CMTime presentationTimeStamp, CMTime presentationDuration) {
    // imageBuffer to UIImage
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    //NSLog(@"ciimage %f", ciImage.extent.size.height);
    //NSLog(@"ciimage %f", ciImage.extent.size.width);
    CIContext *temporaryContext = [CIContext contextWithOptions:nil];
    CGImageRef videoImage = [temporaryContext createCGImage:ciImage fromRect:CGRectMake(0, 0, CVPixelBufferGetWidth(imageBuffer), CVPixelBufferGetHeight(imageBuffer))];
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:videoImage];
    CGImageRelease(videoImage); // createCGImage follows the Create rule, so release it here to avoid leaking every frame
    //NSLog(@"uiimage %f", uiImage.size.height);
    //NSLog(@"uiimage %f", uiImage.size.width);
    // get the UIImage out and pass it to the OpenCV function here
}

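Since the end goal is a cv::Mat, the UIImage/CIImage detour can also be skipped entirely and the decoded CVImageBufferRef wrapped directly. Below is a minimal Objective-C++ sketch (the file must be compiled as .mm with opencv2 imported); the helper name pixelBufferToMat is my own, and it assumes the decompression session was configured to output kCVPixelFormatType_32BGRA in its destination image buffer attributes. If the session outputs biplanar YUV instead (a common default), you would need cv::cvtColor with a YUV-to-BGR conversion rather than a plain wrap.

```objc
// Hypothetical helper (name is mine): wraps a decoded BGRA CVPixelBufferRef in a cv::Mat.
// Assumes the decoder outputs kCVPixelFormatType_32BGRA.
static cv::Mat pixelBufferToMat(CVImageBufferRef imageBuffer) {
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    void *base     = CVPixelBufferGetBaseAddress(imageBuffer);
    size_t width   = CVPixelBufferGetWidth(imageBuffer);
    size_t height  = CVPixelBufferGetHeight(imageBuffer);
    size_t stride  = CVPixelBufferGetBytesPerRow(imageBuffer);
    // CV_8UC4 matches 32BGRA; passing the stride handles any row padding.
    cv::Mat wrapped((int)height, (int)width, CV_8UC4, base, stride);
    cv::Mat copy = wrapped.clone(); // deep copy so the pixel buffer can be unlocked safely
    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    return copy;
}
```

The clone() costs one memcpy per frame but lets you release the CVPixelBuffer immediately, which matters because VideoToolbox recycles its buffer pool.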
The NSLog lines are working correctly, but the callback doesn’t have a userInfo or a self to pass in. Is there a way to get around that?

@Shironalove if I understand your question, you need to get the “self” pointer of your decompression/videoView class?

I have done the following:

add this in the decompression callback:

VideoView *videoView = (__bridge VideoView *)decompressionOutputRefCon; // use [videoView someMethod…]

and in the configureDecoder callback:

VTDecompressionOutputCallbackRecord callBackRecord;
callBackRecord.decompressionOutputCallback = decompressionSessionDecodeFrameCallback;
callBackRecord.decompressionOutputRefCon = (__bridge void *)self;
osstatus = VTDecompressionSessionCreate(NULL, _formatDesc, NULL, NULL, &callBackRecord, &_decompressionSessionRef);
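Putting the two pieces together, the callback from the earlier post could look like the sketch below. This is a sketch under assumptions from the question, not SDK code: the class name BebopVideoView and the uiImage property (backing _uiImage) are taken from @Shironalove’s description, and the dispatch to the main queue is there because UIKit state should only be touched on the main thread.

```objc
void openCVImageDecompressionCallback(void *decompressionOutputRefCon, void *sourceFrameRefCon,
                                      OSStatus status, VTDecodeInfoFlags infoFlags,
                                      CVImageBufferRef imageBuffer,
                                      CMTime presentationTimeStamp, CMTime presentationDuration) {
    // Recover the object passed as decompressionOutputRefCon at session creation.
    BebopVideoView *videoView = (__bridge BebopVideoView *)decompressionOutputRefCon;

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:imageBuffer];
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgImage = [context createCGImage:ciImage fromRect:ciImage.extent];
    UIImage *uiImage = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage); // balance the Create call

    // Hand the frame back to the view on the main thread.
    dispatch_async(dispatch_get_main_queue(), ^{
        videoView.uiImage = uiImage; // assumed property backing _uiImage
    });
}
```

Note that creating a CIContext per frame is wasteful; in practice you would cache one context (for example as a property of BebopVideoView) and reuse it.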