Getting video stream to Olympe (Python)

Hi again :slight_smile:

I am trying with:
drone.set_streaming_output_files(h264_data_file='h264_data_file.h264', h264_meta_file='h264_meta_file')

But I get this error:

@ndessart what am I doing wrong? :frowning:
@Tareq If I find a solution, I will share it here.

Thanks!

@nest @Tareq look at this https://developer.parrot.com/docs/pdraw/

@cartovarc, I believe you don’t need to use pdraw to get the video stream into Olympe. It’s already integrated. You can see that in the event messages.
So far, I managed to get the stream data into the callback method but I am having trouble getting the frame out of this data. I understand that the frame size is 1280x720 from the event messages.
I can access the data by indexing like this:

import numpy

def video_data_callback(frame_bytes):
    buf = numpy.zeros(1280 * 720, numpy.uint8)
    for i in range(1280 * 270):
        buf[i] = frame_bytes[i]

This is single channel only. However, when I reshape it into an image I am not getting the correct frame the camera is capturing. If I try to access an index beyond 1280 * 270, I get this error:

Fatal Python error: Segmentation fault

Thread 0x00007f773eb63700 (most recent call first):
File "/home/petronas-tareq/code/parrot-groundsdk/packages/olympe/src/olympe/_private/pomp_loop_thread.py", line 242 in _wait_and_process
File "/home/petronas-tareq/code/parrot-groundsdk/packages/olympe/src/olympe/_private/pomp_loop_thread.py", line 230 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap

I am also not sure how to get the RGB image.

Can anyone help?

Hello everyone,

I’ll try to answer all your questions here but, in a nutshell, the Olympe streaming API is currently broken.
Some late changes in Olympe and in the libpdraw API introduced some bugs in the Olympe streaming feature that went unnoticed until now.

We are in the process of pushing some corrections on Github. Hopefully, this shouldn’t take too long.

The first issue is that the ffmpeg software decoder is not activated in the build configuration.
But even when you apply the following fix:

The olympe.Drone.set_streaming_output_files and olympe.Drone.set_streaming_callbacks functions are mostly broken.

I’ll get back to you ASAP.

This is weird, it should have gotten you a little farther. The fix works for Tareq.
Anyway, you would be stuck on the aforementioned issues so there is no point in retrying for now.

Once the streaming issues are fixed, I will update the documentation with a numpy/opencv example that should clarify a few things.
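In the meantime, here is a rough sketch of the kind of conversion such an example might involve, assuming the decoded stream arrives as a planar I420 (YUV 4:2:0) frame at 1280x720 — both the layout and the function name here are my assumptions, not the documented Olympe API:

```python
import numpy as np

WIDTH, HEIGHT = 1280, 720  # resolution reported in the event messages

def i420_to_rgb(frame_bytes):
    """Convert one I420 (YUV 4:2:0 planar) frame to an RGB ndarray.

    Assumes the buffer is WIDTH*HEIGHT luma bytes followed by the
    quarter-size U and V planes (total = WIDTH*HEIGHT*3/2 bytes).
    """
    buf = np.frombuffer(frame_bytes, dtype=np.uint8)
    y = buf[:WIDTH * HEIGHT].reshape(HEIGHT, WIDTH).astype(np.float32)
    u = buf[WIDTH * HEIGHT:WIDTH * HEIGHT * 5 // 4].reshape(HEIGHT // 2, WIDTH // 2)
    v = buf[WIDTH * HEIGHT * 5 // 4:WIDTH * HEIGHT * 3 // 2].reshape(HEIGHT // 2, WIDTH // 2)
    # Upsample the chroma planes to full resolution (nearest neighbour)
    u = u.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    v = v.repeat(2, axis=0).repeat(2, axis=1).astype(np.float32) - 128.0
    # BT.601 full-range YUV -> RGB
    r = y + 1.402 * v
    g = y - 0.344136 * u - 0.714136 * v
    b = y + 1.772 * u
    return np.clip(np.dstack((r, g, b)), 0, 255).astype(np.uint8)
```

With the chroma planes upsampled, the BT.601 matrix turns each YUV triple into RGB; OpenCV’s cv2.cvtColor can do the same in a single call.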

Yes, libpdraw is used internally by Olympe so there is no need to use the PDrAW application alongside Olympe.

Regards,

Nicolas


Thank you for checking this out :slight_smile: Do you already know if this is an internal Olympe issue that could be “quickly” fixed? Or a firmware issue that would require a new firmware release?

Besides the build configuration issue, this will entirely be fixed in Olympe (in the olympe.arsdkng.pdraw module). No need for a new firmware release or something like that.

Olympe currently doesn’t correctly use the libpdraw API. For the decoded YUV stream, it currently interprets a pointer to a complex opaque structure as a frame buffer of 8 bytes… :upside_down_face:
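For anyone curious, that class of bug can be illustrated with a contrived ctypes snippet — the struct below is invented for illustration and is not pdraw’s actual layout:

```python
import ctypes

# A stand-in for an opaque media-info struct; the real pdraw layout differs.
class OpaqueInfo(ctypes.Structure):
    _fields_ = [("payload", ctypes.c_char * 64)]

info = OpaqueInfo(b"\x01" * 64)
ptr = ctypes.pointer(info)

# Bug pattern: reading the bytes of the *pointer itself* only ever yields
# a pointer-sized chunk (8 bytes on 64-bit platforms), not the frame data.
wrong = ctypes.string_at(ctypes.addressof(ptr), ctypes.sizeof(ptr))

# The actual payload lives behind the pointer:
right = ptr.contents.payload  # the full 64-byte buffer
```

Treating the pointer as the buffer is why the callback only ever sees 8 bytes of garbage instead of a frame.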

We are currently in the process of validating the correction.

I’ll get back to you as soon as I have something.

Any update on this please?

Yes, could you provide a date for the fix? It’s blocking our developments :slight_smile:

OK, this took longer than I thought, sorry. We have actually hit other issues and the fix is no longer entirely in the Python side of Olympe.

We are almost there. I’ll get back to you as soon as I have more information on a release date.
Thank you for your patience.

Thank you for your answer @ndessart. The issue appears to exist with a simulated drone as well. By the way, I could not find the script:

configure_drone_as_wifi_adapter

on the documentation page of Sphinx here. Can you help me find it?
Thank you!

Once the problem is fixed, you will be able to use Olympe to connect to any drone (simulated or “real”) and get the video stream.

This is a packaging issue: the script is missing from the 1.2.1 release of Sphinx. This will be fixed in the next Sphinx release (1.2.2 or 1.3), which should come soon. Thank you!

Hey there guys,

The issue you are having seems to be important to me as well, but I don’t have the level of understanding about video processing you seem to have.

As of now, I am using Olympe as a Python module, running the following script on a real-life Anafi:

from time import sleep

import olympe
from olympe.messages.ardrone3.Piloting import TakeOff, Landing

video_path = "/home/myuser/Documents/simulation-output/video-streaming/olympe-290519-1723.h264"

with olympe.Drone("192.168.42.1") as drone:
    drone.connection()
    
    drone.set_streaming_output_files(h264_data_file=video_path)
    drone.start_video_streaming()
    
    drone(TakeOff()).wait()
    sleep(4)
    
    drone(Landing()).wait()
    
    drone.stop_video_streaming()
    
    drone.disconnection()

However, when doing this, I got an error from the video streaming recording config:

-----------------------------------------------------------------
AttributeError                  Traceback (most recent call last)
_ctypes/callbacks.c in 'calling callback function'()

~/Documents/olympe-groundsdk/packages/olympe/src/olympe/arsdkng/pdraw.py in <lambda>(*args)
    403         self.thread_loop.add_fd_to_loop(
    404             self.streams[id_]['video_queue_fd'],
--> 405             lambda *args: self._video_sink_queue_event(*args),
    406             id_
    407         )

~/Documents/olympe-groundsdk/packages/olympe/src/olympe/arsdkng/pdraw.py in _video_sink_queue_event(self, fd, revents, userdata)
    505 
    506         # process all available buffers in the queue
--> 507         while self._process_buffer(id_):
    508             pass
    509 

~/Documents/olympe-groundsdk/packages/olympe/src/olympe/arsdkng/pdraw.py in _process_buffer(self, id_)
    564         self._process_outputs(frame,
    565                               metadict,
--> 566                               self.streams[id_]['type'])
    567 
    568         # Once we're done with this frame, dispose the associated frame buffer

~/Documents/olympe-groundsdk/packages/olympe/src/olympe/arsdkng/pdraw.py in _process_outputs(self, frame, metadata, mediatype)
    479                     # h264 files need a header to be readable
    480                     self._write_h264_header(f, self._get_media_info(
--> 481                         od.PDRAW_VIDEO_MEDIA_FORMAT_H264))
    482             f.write(frame)
    483 

~/Documents/olympe-groundsdk/packages/olympe/src/olympe/arsdkng/pdraw.py in _write_h264_header(self, fobj, media_info)
    622 
    623         start = bytearray([0, 0, 0, 1])
--> 624         info = media_info.video.union.h264
    625 
    626         self.logging.logD("sps: %s, pps: %s" % (info.spslen, info.ppslen))

AttributeError: 'struct_pdraw_media_info' object has no attribute 'video'

What can I do to deal with this AttributeError?

Thanks in advance!

espetro

Hi @ndessart. Do you have an update for us on the bug fix?


Hi espetro,

The video streaming in Olympe is currently broken. This is just one of the symptoms.
I am sorry but there is nothing you can do about it right now.

We are in the process of publishing a fix for the video streaming in Olympe (among other things).

@edvard
This took longer than I thought, sorry.
We are now targeting a 1.0.1 GSDK release by the end of the month.

Nicolas

@ndessart, any update on the release of 1.0.1 GSDK?

Hi everyone,

I am pleased to inform you that the GSDK 1.0.1 update has been released!

To update your workspace to the latest release, execute the following commands from your parrot-groundsdk workspace directory:

$ repo sync
$ ./build.sh -p olympe-linux -A all final -j

The Olympe streaming documentation has been updated and is available here.

Thank you for your patience

Nicolas


Any chance you’ll be implementing the ability to change the resolution of the stream?

No, I am afraid that this won’t be possible.

For the Anafi, the video stream resolution is tied to the current camera mode.

In the video recording mode, the streaming resolution is 1280x720. In the photo mode, the video streaming resolution is 1024x768.

Is it possible to fetch the video stream in batches in order to process it / analyze it in real time? Is there any example on how to e.g. apply a Kalman filter? It’d be of great help :smile:
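To make the question concrete, the per-frame processing I have in mind looks roughly like this minimal 1-D Kalman filter (the noise parameters are made up, and the measurement z would be whatever scalar you extract from each frame):

```python
class Kalman1D:
    """Minimal 1-D constant-value Kalman filter (illustrative parameters)."""

    def __init__(self, q=1e-3, r=1e-1):
        self.q, self.r = q, r       # process / measurement noise variances
        self.x, self.p = 0.0, 1.0   # state estimate and its variance

    def update(self, z):
        self.p += self.q                # predict: variance grows
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)      # correct with measurement z
        self.p *= 1.0 - k
        return self.x
```

Each decoded frame would then contribute one call to update() with the measurement derived from that frame.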

Thanks in advance!