Bebop and OpenCV


#1

Can OpenCV be used to navigate the Bebop autonomously?

Is there anything I should be aware of at the start, that would save me time in getting this working?


#2

Hi, I’m currently using OpenCV to autonomously control the Bebop drone. You will definitely need the video stream from the Bebop for any real-time application; in my case I integrated OpenCV into the BebopDecodeStream sample and it worked fine. The only thing that bothers me is the low frame quality (480p), which limits the image processing.
If you have any questions, feel free to ask!


#3

Hello @darknet!
If you don’t mind, could you explain how you converted the decoded frame to an image (e.g. JPG) so that the frame can be used with OpenCV?
Thank you!


#4

Why is the stream 480p? I thought the Bebop could do 1080p. Would it be possible to stream the video to another device instead and have it do the OpenCV processing, so that we could go above 480p?


#5

The stream is 480p to leave some room in the bandwidth for all the commands. The Bebop records video in 1080p, which you can download and watch after the flight.
Right now, you connect your device to the Bebop by Wifi, and only one device can be linked to the Bebop at a time. So, as far as I know, you can’t send the stream anywhere else. I think it is planned to add a 4G chip to the Bebop, which would allow you to connect the Bebop to your device over Wifi and to the cloud over 4G at the same time. Someone from Parrot can back up or correct this information :slight_smile:


#6

I couldn’t find how to reply to your email, so I am replying here. I used the BLE signal to calculate the distance between the drone and an attached mobile phone (I made my mobile phone act as a beacon). If you need any other details, feel free to ask anything.


#7

Hello @dkkim930122!
Sorry for my late reply! In the BebopDecodeStream sample every frame is already converted to the YUV420p format; I just do one more step, converting from YUV420p to RGB with OpenCV’s cvtColor(). The converted image is saved to a global Matrix so that frames can be loaded frequently. There should also be a guard function to make sure that you do not load the picture while it is being converted.
If you need the detailed code, contact me by email: thaison91.hust@gmail.com and I will send the source code to you!
Best regards,
Le Thai Son.


#8

Could you share code showing how you are converting from YUV420p to RGB?


#9

Hey @darknet , I think everyone would be happy if you share your code here or on Github :smile:


#10

Has anyone cross-compiled the ARSDKBuildUtils and OpenCV to execute locally on the Bebop drone?


#11

Is there seriously nobody who wants to share the code to convert from YUV420p?


#12

It’s been around for a while as part of the ROS driver for Parrot Bebop.


#13

Hi @Kwon,

Can you share some details about how you managed to do this? Would be very interested in replicating. If you’re willing to share source code that would be amazing.


#14

Hello @darknet, can you please share detailed code with me? Right now I’m trying to develop a face detection app with the Bebop 2. I’m a bit confused about how the ARFrame can be converted to Mat format. It would be a great help if you could share your code, since I’m pretty new to Android development. Thanks in advance :slight_smile:


#15

Hi erangi93!
I have the same task as you, and I’m new to Android too.
So if you have any solution (i.e. code), could you share it with me?


#16

This is C++ code to convert the stream video to a cv::Mat:

extern "C"{
#include <curses.h>
#include <libswscale/swscale.h>
#include <libavcodec/avcodec.h>
#include <libavformat/avformat.h>
#include <libavutil/avutil.h>
#include <libARSAL/ARSAL.h>
#include <libARController/ARController.h>
#include <libARDiscovery/ARDiscovery.h>
}

// OpenCV headers are C++, so they go outside the extern "C" block
#include <opencv2/opencv.hpp>


AVCodec * codec;
AVCodecContext* codec_context;
AVFormatContext *format_context;
AVPacket packet;

eARCONTROLLER_ERROR didReceiveFrameCallback (ARCONTROLLER_Frame_t *frame, void *customData)
{
    AVFrame *picture;
    AVFrame *rgb_picture;
    picture = avcodec_alloc_frame();
    rgb_picture = avcodec_alloc_frame();
    AVPixelFormat  pFormat = AV_PIX_FMT_BGR24;
    int numBytes;
    if (!picture)
    {
        fprintf(stderr, "Could not allocate video frame\n");
        exit(1);
    }


    codec_context->width=856;
    codec_context->height=480;
    numBytes = avpicture_get_size(pFormat,codec_context->width,codec_context->height);

    packet.data =frame->data;//your frame data
    packet.size = frame->used;//your frame data size
    int got_frame = 0;
    uint8_t *buffer;
    buffer = (uint8_t *) av_malloc(numBytes*sizeof(uint8_t));
    avpicture_fill((AVPicture *) rgb_picture,buffer,pFormat,codec_context->width,codec_context->height);

    int len = avcodec_decode_video2(codec_context, picture, &got_frame, &packet);
    if (len >= 0 && got_frame)
    {
        struct SwsContext * img_convert_ctx;
        img_convert_ctx = sws_getContext(codec_context->width, codec_context->height, codec_context->pix_fmt,   codec_context->width, codec_context->height, AV_PIX_FMT_BGR24, SWS_BICUBIC, NULL, NULL,NULL);
        sws_scale(img_convert_ctx, (picture)->data, (picture)->linesize, 0, codec_context->height, (rgb_picture)->data, (rgb_picture)->linesize);

        cv::Mat img(picture->height, picture->width, CV_8UC3, rgb_picture->data[0]);
        cv::imshow("display",img);
        cv::waitKey(1);


        av_free(picture);
        av_free(rgb_picture);
        sws_freeContext(img_convert_ctx);
    }
    return ARCONTROLLER_OK;
}

int main (int argc, char **argv)
{

    /*
     * video convert
     */
    av_register_all();
    avcodec_register_all();
    avformat_network_init();
    format_context=avformat_alloc_context();
    av_init_packet(&packet);
    codec = avcodec_find_decoder(AV_CODEC_ID_H264);
    if (!codec)
    {
        fprintf(stderr, "Codec not found\n");
        exit(1);
    }

    codec_context=avcodec_alloc_context3(codec);
    if (!codec_context)
    {
        fprintf(stderr, "Could not allocate video codec context\n");
        exit(1);
    }

    avcodec_get_context_defaults3(codec_context, codec);
    codec_context->flags |= CODEC_FLAG_LOW_DELAY;
    codec_context->flags2 |= CODEC_FLAG2_CHUNKS;
    codec_context->thread_count = 4;
    codec_context->thread_type = FF_THREAD_SLICE;
    codec_context->strict_std_compliance = FF_COMPLIANCE_EXPERIMENTAL;
    codec_context->pix_fmt =  AV_PIX_FMT_YUV420P;
    codec_context->skip_frame = AVDISCARD_DEFAULT;
    codec_context->error_concealment = FF_EC_GUESS_MVS | FF_EC_DEBLOCK;
    codec_context->skip_loop_filter = AVDISCARD_DEFAULT;
    codec_context->workaround_bugs = FF_BUG_AUTODETECT;
    codec_context->codec_type = AVMEDIA_TYPE_VIDEO;
    codec_context->codec_id = AV_CODEC_ID_H264;
    codec_context->skip_idct = AVDISCARD_DEFAULT;


    if (avcodec_open2(codec_context, codec, nullptr) < 0)
    {
        fprintf(stderr, "Could not open codec\n");
        exit(1);
    }

    packet.pts = AV_NOPTS_VALUE;
    packet.dts = AV_NOPTS_VALUE;

    .
    .
    .
    .
    /* the remainder of the code is like the SDK sample */
}

Edited by Djavan: Better formatted code snippet


#17

Hello,

Thanks for your code @hosh0425, but could you tell me what you put inside decoderConfigCallback?
Also, your code has a huge memory leak: inside didReceiveFrameCallback you should add the line “av_free(buffer);” next to “av_free(picture);”.


#18

@hosh0425, thanks for the code but I get an error:

error: unknown type name ‘AVPixelFormat’ AVPixelFormat pFormat = AV_PIX_FMT_BGR24;

Any idea why that could be?

Thanks

Andy