Inquiry about Bebop/Bebop 2 for project purposes


I’m currently a senior studying computer engineering, and our final-year project consists of face detection and recognition using a drone. I would like to use the drone’s processing unit to process the image and then send it to another station (a computer).
Is it possible to modify the Bebop or Bebop 2’s source code so it can process the images directly before sending them, and are we able to install drivers and Python libraries?
If not, what other drones do you recommend?


oh hi asg14, I don’t know the answer to your question, but I am curious what you mean by “process the image” … I think that some of the higher-class drones, like the Bebop 2, can produce a steady stream of images, perhaps 16 or more per second. Do I understand you correctly that you’d like to insert some other process into the drone’s operating system that would read the image and take some action?

while I don’t know, I do think it’s an interesting question. I think most people write some form of a controller using the SDK and “process” the image off-line - that is - off of the device - and then send the appropriate commands to the device, whose processors, we’d assume, are already well utilized for all of its varied instructions …
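To make the off-board pattern concrete, here is a minimal sketch: the drone streams frames, a ground station runs the vision step on each one, and a command goes back. Everything here is a hypothetical stand-in (the detector, the command names, the frame representation); the actual SDK calls for streaming and piloting would replace the stubs.

```python
def detect_face(frame):
    """Placeholder detector: pretend a face is present when the frame's
    mean brightness exceeds a threshold. In practice you would swap in a
    real detector (e.g. a CNN or an OpenCV cascade)."""
    return sum(frame) / len(frame) > 100

def choose_command(face_found):
    """Map a detection result to a (hypothetical) flight command."""
    return "hover_and_track" if face_found else "continue_patrol"

def control_loop(frames):
    """Process each incoming frame off-board and emit one command per frame."""
    return [choose_command(detect_face(f)) for f in frames]

if __name__ == "__main__":
    bright = [150] * 10   # fake frame: 10 bright pixels
    dark = [20] * 10      # fake frame: 10 dark pixels
    print(control_loop([bright, dark]))
```

The point is only the shape of the loop: the heavy vision work lives on the ground station, and only small commands travel back to the drone.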

but, generally speaking, it might be nice to be able to insert a module into the device’s command loop, and such a concept should be possible …


For face recognition we are using a deep convolutional neural network that extracts features from a given image. We want to implement the whole model on the drone because we want the best real-time performance we can get, and thus need to process the image directly on the drone to extract these features.
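For readers unfamiliar with the recognition step being described: once the network has turned a face into a feature vector (an embedding), recognition is usually just a nearest-neighbor comparison against a gallery of known embeddings. A hedged sketch of that comparison step, with made-up three-dimensional embeddings standing in for real CNN features:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(embedding, gallery, threshold=0.8):
    """Return the best-matching identity, or None if nothing in the
    gallery is similar enough. `gallery` maps name -> reference embedding."""
    best_name, best_sim = None, threshold
    for name, ref in gallery.items():
        sim = cosine_similarity(embedding, ref)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Toy gallery; real embeddings would have hundreds of dimensions.
gallery = {"alice": [1.0, 0.0, 0.0], "bob": [0.0, 1.0, 0.0]}
print(recognize([0.9, 0.1, 0.0], gallery))  # → alice
print(recognize([0.0, 0.0, 1.0], gallery))  # → None
```

The CNN forward pass is the expensive part; this matching step is cheap, which is one reason the split of "where does the CNN run" dominates the latency question.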


well - not to be funny - but why do you think that implementing the entire process on the device would bring the “best real-time performance”? I mean, perhaps you think it’s obvious, but it doesn’t seem so obvious to me … I don’t even know what sort of processor, RAM, or other hardware those little devices have … what if, for instance, reading the image feed and then processing it off-board were more efficient? what if having stationary cameras in a few key places in a particular zone or area proves best?

perhaps, though, part of your project is to push the limits of the hardware of the device, rather than to get the job done, so to speak …

but, notwithstanding, if there isn’t something already, I do think there ought to be some sort of exposure to the internal process in a plug-in-like fashion in later versions of the device, and I’m sure that if we push harder for things like this, it will be added where possible … I do know that you can load a modified OS onto the device, but I’m not sure how that is handled in terms of any warranty on the device …



You cannot modify the Bebop firmware, so all data processing would have to be done in a separate application using the SDK. This is already done by some third-party apps (like Bebop PRO), which seem to have reasonable results.

If you really need to have your software on board, your best bet on a Bebop/Bebop 2 would be to replace the official firmware with an alternative open-source autopilot.

Running your algorithms on-board has a real downside: the autopilot is a time-critical piece of software, so adding a lot of CPU load can have a big effect on the flight, regardless of which autopilot is used.
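One way to reason about whether a vision pipeline "fits" alongside a time-critical autopilot is to compare per-frame processing time against the frame period. The numbers below are hypothetical, purely to illustrate the budget arithmetic:

```python
def frame_budget_ms(fps):
    """Time available per frame at a given frame rate, in milliseconds."""
    return 1000.0 / fps

def keeps_up(processing_ms, fps):
    """True if per-frame processing fits within the frame period,
    leaving CPU headroom for the time-critical control loop."""
    return processing_ms <= frame_budget_ms(fps)

# Hypothetical numbers: a 30 fps stream gives ~33 ms per frame.
print(round(frame_budget_ms(30), 1))  # → 33.3
print(keeps_up(25, 30))               # True: 25 ms fits in the 33 ms budget
print(keeps_up(80, 30))               # False: the pipeline falls behind
```

A CNN that needs 80 ms per frame on the drone's CPU would not only drop frames, it would starve the flight-control loop, which is the concern raised above.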



Hi, we have been working on the same project, but in our case we are using an Android device’s CPU/GPU to do the processing. Here you can see the results we have achieved so far.

Test Video 1

Test Video 2