Python interface for Mambo and Bebop


#1

I have done a complete redesign of my initial python package for the Mambo (which used BLE only). I now have a python interface for both BLE and wifi AND it works on a Bebop 2 also. Because it is more general than just the Mambo, it has been renamed and is now available on GitHub here:


#2

Hi,

I’m excited to try out your python library for the Bebop… but I had a frustrating experience with Katarina (https://github.com/robotika/katarina). It took me a long time to figure out that, when updating the sensor data, the results were about 2-3 seconds behind reality. For example, I would update as quickly as possible and print to the screen, then move the bebop rapidly forward (by hand), but the speed would not jump up for a few seconds and then it would drop down again.
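
To be concrete, something like this rough sketch is what I have in mind (the import path, connect() and smart_sleep() calls, and the exact sensor key are guesses at your API, not confirmed):

import time
from pyparrot.Bebop import Bebop   # assumed import path

# Poll the sensors as fast as possible and print speedX with a timestamp, then
# move the drone forward by hand and watch how long the value takes to react.
bebop = Bebop()
if bebop.connect(10):                  # assumed connect call
    while True:
        bebop.smart_sleep(0.05)        # assumed short, connection-friendly sleep
        speed_x = bebop.sensors.sensors_dict.get('SpeedChanged_speedX')
        print(time.time(), speed_x)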

Does your implementation update accurately without significant lag?

Thanks

Andy


#3

The bebop stuff is very much in alpha mode, so I’m not sure about the lag. I’m processing the sensor data as quickly as it is broadcast out from the bebop. I’m doing the same for the mambo and the lag seems minimal, though I was seeing lags in the vision there (hopefully fixed once I move off the RPi for it). I think the easiest way to decide if there is a lag will be in the vision code, and I don’t yet have vision processing on the bebop (I intend to do that over the break, as I’m buried in grading and end-of-semester work right now).


#4

I hope that bebop vision works soon. I want to do it with OpenCV.


#5

That’s my plan. Mambo vision is working through opencv. So far as I can tell, the bebop doesn’t stream using RTSP so I have to grab the data differently.


#6


Hello CaptainSaavik, do you know what this message is?
The error code is OSError (WinError 10038).


#7

The Bebop appears to be constantly evolving and it’s difficult to understand where it is at any one time. My understanding is that its video has moved through three phases (note that this may be inaccurate… there appears to be no concise record or authority on it):

  1. Video frames delivered within the navdata messages (this is how the katarina project I mentioned did it). This no longer works.
  2. RTP stream (aka “stream v2”). I discovered this as I was trying to use katarina on Windows and avoid the ARSDK. This is how it is done here: https://github.com/Parrot-Developers/application_notes/tree/master/BebopStreamVLC. You need to ‘turn on’ the stream before you can access it. The article suggests using one of the ARSDK sample programmes… but I’m not sure if this still works. When it did, it was easy to link it into opencv (http://cvdrone.de/stream-bebop-video-with-python-opencv.html - ignore the nodeJS stuff… I don’t think that works anymore at all); see the sketch after this list. The only problem with this method is that the lag could be a few seconds. I also found it would occasionally lock up, but simply restarting the capture got it going again. I did find that the bebopstartstream program would sometimes not work, but this may be because I was running it in a virtual machine.
  3. It seems the latest SDK has gone back to delivering video in the navdata. I discovered this when I gave up on Windows and python and used the ARSDK compiled for Linux and the BebopSample program. The difficulty I have encountered with this method (albeit only having worked on it for a day) is that the frames are sent to mplayer, where they are decoded… getting them into opencv is undocumented. It seems they need to be decoded from H264 and converted to an image matrix… but that’s far from trivial and there appears to be no guidance on it that I can find.
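
As a rough sketch of method 2 (untested against current firmware, and assuming the stream has already been ‘turned on’ by an SDK client and that the bebop.sdp file from the application note is in the working directory), OpenCV’s FFmpeg backend can open the RTP stream directly:

import cv2

# Open the RTP stream described by the SDP file from the application note.
# This only produces frames once an SDK client has started the stream.
cap = cv2.VideoCapture("bebop.sdp")
while True:
    ok, frame = cap.read()              # frame is a BGR image matrix when ok is True
    if ok:
        cv2.imshow("bebop", frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()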

As I say… this may not be accurate, but it represents my learning curve.

I do have to say that the SDK documentation is awful… the fact that the contents of custom structs like ARCONTROLLER_Frame_t are not explained is extremely frustrating, and the sample files are not particularly comprehensive. I would like to believe that this is just a facet of the modern coding culture where immature code is released to the community to develop with, and that one day it will be fully documented… but I somehow doubt it, so all those articles explaining the deprecated mechanisms of video capture will continue to mislead people for years to come.

I will take a look at porting the pyparrot code into my project that uses method 2 above and see where I get to… watch this space.


#8

ROS and rosbridge sound easier.


#9

Hello Awmt102,
I have this error:


Could you give me some tips? I am using the Bebop 2.


#10

It may well be - I’m looking into that as well (https://github.com/AutonomyLab/bebop_autonomy). Although my first assessment is that it also has its quirks and its documentation is not particularly clear. The sample I tried to run crashed, and I can find no GitHub projects of anyone actually using it. But again, I have only been looking at it for a few hours so far.


#11

Hi freaad,

Is that from the katarina project? It looks very similar to line 71 here… but yours is on line 132.


#12

Yes, it is the katarina project. The question is: I am using the 4.4 firmware, could that be the problem?


#13

I’m afraid I don’t know the details of how katarina works. It also looks like you are using a different version from mine, as my line 132 is different from yours.

It’s possible the firmware is the issue - katarina is several years old and, as I alluded to earlier, the bebop is continuously evolving.

Sorry I can’t help more.


#14

It does say DONE before it says that, so it seems like it finished running and then the threading had issues. And no, I have no idea about Windows errors. I did all my testing on my Mac laptop.


#15

Yeah, and I also didn’t write katarina. I looked at it when writing my code, but not very much since it wasn’t using the latest SDK. I read the Bybop code more than katarina’s. So I’m happy to help with pyparrot, but I have no clue about katarina since it isn’t mine.


#16

@awmt102 Thank you for that list for the video! That is about what I had figured out from the forums also but hadn’t had a chance to track it down. The mambo is using RTSP and it is very easily imported into opencv. I hope the bebop RTSP stream can be grabbed somehow. If you get that working, let me know. I simply don’t have time until break to look at it (3 weeks or so).


#17

Hi,

Here are some clarifications about the video streaming in the Bebop (& other SDK products):

  1. The first versions used the “ARStream” protocol, described in the protocol documentation. This protocol is no longer supported on the latest products, but is still the only one available on the Jumping Sumos (& variants) and the PowerUP FPV.

  2. The Bebop (& Bebop2, Disco…) does use an RTP-compatible streaming protocol (named “ARStream2” in parts of the SDK). It does NOT use RTSP as a session establishment protocol, but rather ARCommands sent through our SDK. This means that you need an SDK client to actually start (& maintain) the video streaming, but once started, you can receive it in any RTP-compatible client (this was the point of the BebopStreamVLC application note). While the AN might be a little bit outdated, this is still the current state of the video streaming on the Bebop, and you should be able to do it with a python client instead of the BebopStreamVLC sample.

  3. As said in point 2: no, we did not change the streaming. What you see in the sample is just an integrated RTP client within the SDK. The video callbacks are indeed providing raw h.264 frames, and you’ll need to decode them before doing any processing.

  4. The Mambo does use a complete RTSP/RTP stack, which means that the video streaming can be started without any SDK connection (just connect an RTSP client to rtsp://192.168.99.1/media/stream2 and you should get the video stream); see the sketch below. The difference with the Bebop mainly comes from hardware differences (the SDK client & the camera are on two different processors).
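
As a minimal illustration of point 4 (plain OpenCV on the client side; OpenCV itself is of course not part of our SDK):

import cv2

# The Mambo needs no SDK handshake: open its RTSP URL directly.
cap = cv2.VideoCapture("rtsp://192.168.99.1/media/stream2")
ok, frame = cap.read()   # frame is a BGR image matrix once the stream is up
cap.release()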

As a side note, I ran a quick test of point 2 with Bybop (my own python implementation of the SDK) by doing the following things:

  • I added the 'arstream2_client_[stream/control]_port' keys to the handshake, so the drone knows the ports to use, and created start_stream()/stop_stream() functions for the BebopDrone object. This commit is pushed on the Bybop master branch.
  • I copied the bebop.sdp file from the Application Note.
  • While connected to the Bebop wifi network, I started the Bybop interactive.py sample and, once connected, called drone.start_streaming()
  • Started vlc bebop.sdp in another shell.

I properly got the stream in VLC (there is a noticeable latency, but this is due to VLC buffering network streams by default). You could probably adapt this to get the stream into any RTP-compatible client :wink:

I hope that this clarified some misconceptions about the video streaming on the Bebop!

Regards,
Nicolas.


#18

@Nicolas Thank you!!! I will use this to get the video going in pyparrot too for the Bebop! I have it already for the mambo since it is literally a call to RTSP.


#19

@CaptainSaavik I have run a test of pyparrot and a Bebop2 on Windows. I had to change line 293 in wifiConnection.py from:

tcp_sock.send(bytes(json_string, 'utf-8'))

to

tcp_sock.send(bytes(str(json_string).encode("utf-8")))

But after that it worked fine. I managed to get the speedX out of the data with this:

print(bebop.sensors.sensors_dict['SpeedChanged_speedX'])

But doing the same for Y and Z does not seem to work… any thoughts on why that might be?
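
For what it’s worth, this little check (continuing from the bebop object above, and assuming the keys follow the SpeedChanged_speedX naming) shows which speed entries actually arrive in the dictionary:

# List whichever SpeedChanged_* entries actually made it into the dictionary.
for key in sorted(bebop.sensors.sensors_dict):
    if key.startswith("SpeedChanged_"):
        print(key, bebop.sensors.sensors_dict[key])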


#20

Figured out part of the speedY issue… there is an extra indent on a return statement that was returning from a for loop after the first iteration, so the extra parameters never got stored in the dictionary. Line 96 (the return) in DroneSensorParser.py is the one. The only problem is that removing this return now runs the loop a few times (3 in the case of speed) but only returns the last one (speedZ). I’ll keep looking at it. If I get a chance I’ll do this properly and create a pull request if I figure it out.
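
To illustrate the pattern (this is not the actual DroneSensorParser.py code, just the shape of the problem): with the return indented inside the for loop, only the first entry comes back; the proper fix is to accumulate every entry and return once after the loop (simply dedenting the return would still hand back only the last one):

def parse_buggy(entries):
    for name, value in entries:
        return name, value           # exits after the first entry (speedX only)

def parse_accumulating(entries):
    parsed = {}
    for name, value in entries:
        parsed[name] = value         # keep every entry (speedX, speedY, speedZ)
    return parsed                    # return once, after the whole loop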