Face Detection in Bebop


I am trying to do face detection on the Bebop 1.
I am working with Android and OpenCV.
Does anyone have an idea how to do face detection on the video stream from the drone?
Thank you.


If you have the video frames, then you just need to use OpenCV to do the face detection; it doesn't really have anything to do with the Bebop anymore. On iOS I got face detection working with OpenCV, but the CPU got maxed out due to all the processing it takes to detect faces. The way it has to be done is to detect a face once, then use OpenCV's object tracking to keep following it. I never got that working reliably.
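The detect-then-track flow described above can be sketched roughly like this. This is a minimal pure-Java sketch of the control flow only; `FaceDetector` and `FaceTracker` are hypothetical stand-ins for OpenCV's `CascadeClassifier.detectMultiScale` and an OpenCV tracker (e.g. KCF), not real OpenCV APIs:

```java
import java.util.Optional;

// Hypothetical stand-ins for OpenCV's CascadeClassifier and a tracker (e.g. KCF):
interface FaceDetector { Optional<int[]> detect(byte[] frame); }   // full-frame search (slow)
interface FaceTracker {
    void init(byte[] frame, int[] box);       // lock onto a detected face
    boolean update(byte[] frame, int[] box);  // follow it frame-to-frame (cheap)
}

// Detect once, then hand off to the tracker; fall back to detection when the face is lost.
class DetectThenTrack {
    private final FaceDetector detector;
    private final FaceTracker tracker;
    private int[] box = null; // currently tracked face, null when none

    DetectThenTrack(FaceDetector detector, FaceTracker tracker) {
        this.detector = detector;
        this.tracker = tracker;
    }

    /** Processes one frame and returns the current face box, or null if no face is held. */
    int[] onFrame(byte[] frame) {
        if (box == null) {
            Optional<int[]> found = detector.detect(frame); // expensive path
            if (found.isPresent()) {
                box = found.get();
                tracker.init(frame, box);
            }
        } else if (!tracker.update(frame, box)) {
            box = null; // tracker lost the face; detect again on the next frame
        }
        return box;
    }
}
```

The point of the split is that the expensive full-frame detection only runs while no face is held; most frames take only the cheap tracker update.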


How can I get the video frames? I don't have any idea :frowning:

Or, if you can, could you share your source code?


I'm on iOS, so I'm not sure how it would be done on Android. OpenCV is a monster, and it can be hard to find working examples of how things work. Here is my code for finding faces in images on iOS; it's pretty simple, but it took me a long time to find some sample code that I could get working.

    NSString *path = [[NSBundle mainBundle] pathForResource:@"haarcascade_frontalface_alt" ofType:@"xml"];
    const char *filePath = [path cStringUsingEncoding:NSUTF8StringEncoding];
    CascadeClassifier faceDetector = cv::CascadeClassifier(filePath);
    Mat gray = [DCOpenCVHelper cvMatGrayFromUIImage:img];
    std::vector<cv::Rect> faces;
    faceDetector.detectMultiScale(gray, faces, 1.15, 2, 0 | CV_HAAR_SCALE_IMAGE, cv::Size(10, 10));
    for (int i = 0; i < faces.size(); i++) {
        // Do stuff
    }


Hi, you can make use of the sample code (BebopDroneDecodeStream) that fetches and decodes frames. However, keep in mind that the decoded frames are in YUV instead of RGB, and OpenCV processing takes time, so you might want to process frames at an interval (a lower frame rate). I had an issue when working at 30 fps, so I'm processing every fifth frame.
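Since the decoded frames are YUV, they need a colour-space conversion before most OpenCV face-detection code will accept them. Here is a minimal pure-Java sketch of the standard integer BT.601 math for NV21 (a full-resolution Y plane followed by interleaved V/U samples at half resolution); in a real app you would normally use OpenCV's `Imgproc.cvtColor(..., Imgproc.COLOR_YUV2RGB_NV21)` or Android's `YuvImage` instead:

```java
// Minimal NV21 -> ARGB conversion sketch (integer BT.601 math).
// In practice, prefer Imgproc.cvtColor(..., COLOR_YUV2RGB_NV21).
class Nv21Converter {

    static int clamp(int v) {
        return v < 0 ? 0 : (v > 255 ? 255 : v);
    }

    /** Converts an NV21 byte array (Y plane, then interleaved V/U) to ARGB pixels. */
    static int[] toArgb(byte[] nv21, int width, int height) {
        int[] argb = new int[width * height];
        int frameSize = width * height;
        for (int row = 0; row < height; row++) {
            for (int col = 0; col < width; col++) {
                int y = nv21[row * width + col] & 0xFF;
                // One V/U pair is shared by a 2x2 block of Y samples.
                int uvIndex = frameSize + (row / 2) * width + (col & ~1);
                int v = (nv21[uvIndex] & 0xFF) - 128;
                int u = (nv21[uvIndex + 1] & 0xFF) - 128;
                int c = Math.max(0, y - 16);
                int r = clamp((298 * c + 409 * v + 128) >> 8);
                int g = clamp((298 * c - 100 * u - 208 * v + 128) >> 8);
                int b = clamp((298 * c + 516 * u + 128) >> 8);
                argb[row * width + col] = 0xFF000000 | (r << 16) | (g << 8) | b;
            }
        }
        return argb;
    }
}
```

Doing this per pixel in Java is still slow at 30 fps, which is another reason to process frames at an interval; the native `cvtColor` path is much faster.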


I really appreciate your help. I understood a little of what you said, so I think I need to learn more about it...
If you don't mind, can you share the whole project?
I am working with Android, but I think an iOS project would also be really helpful for me.
Thank you so much for all the help.


Yeah, I saw that BebopDroneDecodeStream, but it was a bit different from the Android side...
so I keep looking at how to change the byteBuffer to a bitmap...
It is quite complicated work, I think.


@HDaoud hey, having the same problem here.
I just compressed the YUV byte array to JPEG and then to a bitmap, but I found it really inefficient. After displaying 3 or 4 frames the program gets stuck. How did you change the fps, btw? Did it solve the issue for you?


You can change the processing frame rate by using the modulo operator (% in C++): check whether frameID % VALUE == 0 (let's say VALUE = 5), and then every 5 frames you process the frame for face detection or any other heavy work you want to do with it.
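The modulo idea above can be wrapped in a tiny helper; a minimal sketch in Java (class and method names are my own, not from any SDK):

```java
// Sketch of the frame-skipping idea: run heavy work only on every Nth frame.
class FrameSkipper {
    private final int interval;
    private long frameId = 0;

    FrameSkipper(int interval) {
        this.interval = interval;
    }

    /** Returns true when the current frame should get the heavy processing. */
    boolean shouldProcess() {
        return (frameId++ % interval) == 0;
    }
}
```

With interval 5 on a 30 fps stream, detection runs at roughly 6 fps while the preview can still render every frame.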


Hey, thanks @HDaoud.

I did try decreasing the fps, but the problem is still there. It's getting stuck after 2 or 3 frames. I think the conversion function is not efficient enough for real-time processing. This is what I did:

    public void onImageReceived(ARFrame frame) {
        if (!mIsEnabled) {
            return;
        }
        byte[] data = frame.getByteData();
        Log.d(TAG, "onImageReceived: " + data.length);

        // Convert NV21 -> JPEG (this round-trip is the expensive part)
        YuvImage yuvimage = new YuvImage(data, ImageFormat.NV21, 1196, 720, null);
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        yuvimage.compressToJpeg(new android.graphics.Rect(0, 0, 1196, 720), 80, baos);
        byte[] jdata = baos.toByteArray();

        // Convert JPEG -> Bitmap
        bmp = BitmapFactory.decodeByteArray(jdata, 0, jdata.length);
        if (bmp == null) {
            Log.d(TAG, "onImageReceived: can't decode.");
            return;
        }

        // Bitmap -> Mat, then run the cascade
        Mat image = new Mat();
        Utils.bitmapToMat(bmp, image);
        MatOfRect faces = new MatOfRect();
        mClassifier.detectMultiScale(image, faces);
    }
Do you have any other alternative for doing this conversion part? Or any other clue?