Code release: Python interface for the Mambo


I have been working on a Python interface for the Mambo and it is now publicly available (alpha release). Enjoy!


This is amazing!
Thanks for sharing


Hello Captain Saavik,

Is it possible with the Python interface to pilot two Mambos flying autonomously at the same time, or only one?

I need to fly two drones autonomously at the same time. Please tell me.

Thank you :kiss:


I confirmed tonight that it will fly two (and potentially three, though my third one is getting a firmware update right now). The only issue I’m seeing is that the BLE is disconnecting a lot more often. I have been intending to add more try/except reliability handling to the BLE code, so this will give me a chance to do that. I’ll release an update once that is fixed.
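
The kind of reliability handling I have in mind looks roughly like this sketch (the helper name is illustrative only and not part of the released code):

from pyparrot.Mambo import Mambo

# Illustrative sketch of try/except reliability around BLE: retry the
# connection instead of dying on a bluepy disconnect.
def connect_with_retries(mambo, max_attempts=5):
    for attempt in range(max_attempts):
        try:
            if mambo.connect(num_retries=3):
                return True
        except Exception as err:  # bluepy raises BTLEException on BLE drops
            print("BLE error on attempt %d: %s" % (attempt + 1, err))
    return False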


I’ve mostly got it debugged and I’ll get a code release out shortly. All three ran out of batteries at the same time, so I ran out of testing time (but it worked in the last few tests; I was just adding some extra reliability).


Code updated to handle this. The only issue is that you have to control each drone from a separate Python file, since bluepy has some internal limitations on connecting to more than one device at once.
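
For reference, each per-drone script ends up looking something like this minimal sketch (the BLE address is a placeholder, and the exact module path and arguments may differ in the alpha release):

from pyparrot.Mambo import Mambo

# One copy of this script runs per Mambo, each as its own python process.
# The BLE address below is a placeholder -- substitute your drone's address.
mambo = Mambo("e0:14:00:00:00:00", use_wifi=False)

if mambo.connect(num_retries=3):
    mambo.smart_sleep(2)       # let the sensor data start flowing
    mambo.safe_takeoff(5)
    mambo.smart_sleep(3)
    mambo.safe_land(5)
    mambo.disconnect()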


Hello Captain Saavik,

Will you consider building a GUI on top of your current interface in the future? Thank you.


No, there will not be a GUI. We are using it to teach coding skills, and we are teaching the kids to use Python. That means writing text-based code. If you want a GUI, use the app that Parrot already has for flying :slight_smile:



I recently got a Parrot Mambo and downloaded @CaptainSaavik’s pyparrot SDK. I am using a Raspberry Pi 3 with Raspbian, and I compiled OpenCV on the Raspberry Pi. I could successfully run the basic examples. However, when I try to run the vision example I am unable to see the video feed. Instead, I get the following error message:

(env) pi@raspberrypi:~/Parrot/pyparrot-master/examples $ python

trying to connect to mambo now
Service Mambo_577269._arsdk-090b._udp.local. added, service info: ServiceInfo(type='_arsdk-090b._udp.local.', name='Mambo_577269._arsdk-090b._udp.local.', address=b'\xc0\xa8c\x03', port=44444, weight=0, priority=0, server='Delos.local.', properties={b'{"device_id":"PI040408AA7H577269","version":"3.0.17","rc":0}': False})
{"controller_name": "pyparrot", "d2c_port": 43210, "controller_type": "computer"}
{'status': 0, 'c2d_port': 6000, 'c2d_user_port': 21, 'c2d_update_port': 51}
c2d_port is 6000
starting listening at
Success in setting up the wifi network to the drone!
connected: True
Could not find sensor in list - ignoring for now. Packet info below.
(2, 25, 3)
Could not find sensor in list - ignoring for now. Packet info below.
(2, 25, 3)
Could not find sensor in list - ignoring for now. Packet info below.
(2, 25, 5)
Preparing to open vision
opening the camera

[h264 @ 0x249dda0] non-existing PPS 0 referenced
[h264 @ 0x249dda0] non-existing PPS 0 referenced
[h264 @ 0x249dda0] decode_slice_header error
(same three lines many many times)

[h264 @ 0x249dda0] no frame!
[h264 @ 0x249dda0] Too many slices, increase MAX_SLICES and recompile
[h264 @ 0x249dda0] Too many slices, increase MAX_SLICES and recompile


The “too many slices” error will not go away on the Pi. My directions do say that vision doesn’t work well on the Pi, as it simply can’t keep up with the 30 FPS. If there were a way to downscale the FPS it would be okay, but the camera sends at 30 FPS, OpenCV receives it at 30 FPS, and you can’t drop frames or set the camera rate. I had to switch to using my laptop for vision.


Have you tried just skipping every other frame? That should get you into the 15 FPS range.


@synman OpenCV doesn’t work that way. It puts all the frames into a buffer and you have to grab them in order. The Pi isn’t fast enough in any of my tests to handle it. My laptop works fine.


Simple idea, but actually this won’t work:

H.264 (the video codec used by the Mambo camera) uses differences between frames to reduce the stream size on the network. If you skip frames in the stream, your decoder won’t be able to properly decode the remaining frames. You will either get a decoder error or a “gray-ish” frame. In either case, it won’t be suitable for image processing :wink:

Reducing the frame rate can only be done on the encoder side (i.e. dropping frames before encoding).


It definitely works that way if you have a means of controlling the intake of frame buffers. But as Nicolas pointed out, you can’t get away with this on the Mambo RTSP stream.

I have done it successfully with the Bebop stream.

Something else you may want to consider is backing OpenCV with a simple bitmap in a control loop that samples every x milliseconds:


while (!interrupted) {
    mat = surface.saveAsBitmap()   // grab the current frame as a bitmap
    // do your opencv processing on mat here
}
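
In Python/OpenCV terms, the same sampling idea would look roughly like this (a sketch only, assuming the decoder can keep up with reading every frame in order; the capture source is a placeholder):

import time
import cv2

cap = cv2.VideoCapture("bebop.sdp")   # placeholder source, not a real URL
sample_interval = 0.2                 # run processing at most ~5 times/sec
last_sample = 0.0

while cap.isOpened():
    ok, frame = cap.read()            # frames must be read in order
    if not ok:
        break
    now = time.time()
    if now - last_sample >= sample_interval:
        last_sample = now
        # ... do your opencv processing on frame here ...

cap.release()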


My solution was just to do it on a higher-powered machine instead of the Pi. That’s all the OP was asking about anyway. I found no way to make the “Too many slices, increase MAX_SLICES and recompile” error disappear. I recompiled FFmpeg (which is where the error comes from) and gave it a LOT more slices (it starts at 16; I doubled and doubled and eventually quit at 8192). The Pi simply can’t keep up. Oh well!


We used to do this stuff on old Androids and iPhone 3s with the original AR.Drone video stream and ffmpeg … perhaps it’s the resolution that is killing it.


Does your code work with the Bebop 2.0, and can it take control from a PC? Thanks.


Yes, it works on a Bebop 2. What do you mean by “take control from a PC”? If you mean you run the code from the PC, then yes, exactly.
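
Something like this minimal sketch (connect your PC to the Bebop’s wifi network first; treat it as illustrative since details may vary by version):

from pyparrot.Bebop import Bebop

# Run this from the PC after joining the Bebop 2's wifi network.
bebop = Bebop()

if bebop.connect(10):        # retry the wifi handshake up to 10 times
    bebop.smart_sleep(2)     # wait for sensor updates to start arriving
    bebop.safe_takeoff(10)
    bebop.smart_sleep(5)
    bebop.safe_land(10)
    bebop.disconnect()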