@cartovarc, I believe you don’t need to use pdraw to get the video stream into Olympe. It’s already integrated. You can see that in the event messages.
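For reference, this is roughly how the callback gets registered (a minimal sketch based on the Olympe streaming example; the IP 10.202.0.1 is the Sphinx simulator default, and the exact method names may differ between Olympe versions):

import olympe

drone = olympe.Drone("10.202.0.1")
drone.connection()
# Hand each decoded YUV frame to our callback as raw data
drone.set_streaming_callbacks(raw_cb=video_data_callback)
drone.start_video_streaming()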
So far, I have managed to get the stream data into the callback method, but I am having trouble getting the frame out of this data. From the event messages, I understand that the frame size is 1280x720.
I can access the data by indexing like this:
import numpy

def video_data_callback(frame_bytes):
    # Copy the incoming data into a flat single-channel buffer
    buf = numpy.zeros(1280 * 720, numpy.uint8)
    for i in range(1280 * 270):
        buf[i] = frame_bytes[i]
This is a single channel only. However, when I reshape the buffer into an image (see the sketch after the traceback below), I am not getting the frame the camera is actually capturing. If I try to access any index beyond 1280*270, I get this error:
Fatal Python error: Segmentation fault
Thread 0x00007f773eb63700 (most recent call first):
File “/home/petronas-tareq/code/parrot-groundsdk/packages/olympe/src/olympe/_private/pomp_loop_thread.py”, line 242 in _wait_and_process
File “/home/petronas-tareq/code/parrot-groundsdk/packages/olympe/src/olympe/_private/pomp_loop_thread.py”, line 230 in run
File “/usr/lib/python3.6/threading.py”, line 916 in _bootstrap_inner
File “/usr/lib/python3.6/threading.py”, line 884 in _bootstrap
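The reshape step looks roughly like this (a minimal sketch; it assumes the copied bytes are the 1280x720 luminance plane, which is exactly what does not seem to hold here):

import cv2

# Interpret the flat buffer as a 720x1280 single-channel (grayscale) image
y_plane = buf.reshape((720, 1280))
cv2.imshow("y_plane", y_plane)
cv2.waitKey(1)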
I’ll try to answer all your questions here but, in a nutshell, the Olympe streaming API is currently broken.
Some late changes in Olympe and in the libpdraw API introduced some bugs in the Olympe streaming feature that went unnoticed until now.
We are in the process of pushing some corrections to GitHub. Hopefully, this shouldn’t take too long.
The first issue is that the ffmpeg software decoder is not activated in the build configuration.
But even when you apply the following fix:
This is weird; it should have gotten you a little farther. The fix does work for Tareq.
Anyway, you would be stuck on the aforementioned issues, so there is no point in retrying for now.
Once the streaming issues are fixed, I will update the documentation with a numpy/OpenCV example that should clarify a few things.
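In the meantime, here is roughly the kind of conversion such an example usually shows (a sketch only, assuming the raw callback delivers an I420 frame as a flat numpy buffer; the final API may expose frames differently):

import cv2
import numpy

def yuv_to_bgr(yuv_flat, width=1280, height=720):
    # An I420 frame is height * 3/2 rows of width bytes:
    # the Y plane followed by the subsampled U and V planes
    yuv = yuv_flat.reshape((height * 3 // 2, width))
    # Let OpenCV do the colorspace conversion
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR_I420)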
Yes, libpdraw is used internally by Olympe, so there is no need to use the PDrAW application alongside Olympe.
Thank you for checking this out! Do you already know if this is an internal Olympe issue that could be “quickly” fixed? Or a firmware issue that would require a new firmware release?
Besides the build configuration issue, this will be fixed entirely in Olympe (in the olympe.arsdkng.pdraw module). No need for a new firmware release or anything like that.
Olympe currently doesn’t use the libpdraw API correctly. For the decoded YUV stream, it interprets a pointer to a complex opaque structure as if it were the frame buffer itself, hence a “frame” of 8 bytes (the size of a pointer on a 64-bit platform)…
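To illustrate the class of bug (a generic ctypes sketch; these are not the actual libpdraw structures, which are more involved):

import ctypes

# Hypothetical frame descriptor: the pixels live behind a pointer,
# they are not stored inline in the structure itself
class FrameDescriptor(ctypes.Structure):
    _fields_ = [("data", ctypes.POINTER(ctypes.c_ubyte)),  # pixel data
                ("size", ctypes.c_size_t)]                 # pixel byte count

def read_frame(desc):
    # Wrong: reading the descriptor itself yields only sizeof(pointer)
    # meaningless bytes. Right: follow desc.data for desc.size bytes.
    plane = ctypes.cast(desc.data, ctypes.POINTER(ctypes.c_ubyte * desc.size))
    return bytes(plane.contents)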
We are currently in the process of validating the correction.
Once the problem is fixed, you will be able to use Olympe to connect to any drone (simulated or “real”) and get the video stream.
This is a packaging issue: the script is missing from the 1.2.1 release of Sphinx. This will be fixed in the next Sphinx release (1.2.2 or 1.3, which should come soon). Thank you!
Is it possible to fetch the video stream in batches in order to process and analyze it in real time? Is there any example of how to, e.g., apply a Kalman filter? It’d be of great help!
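Until an official example exists, here is a generic sketch of per-frame processing with OpenCV’s Kalman filter (a constant-velocity tracker for one 2D point; detect_target is a hypothetical stand-in for whatever measurement you extract from each frame):

import cv2
import numpy as np

# State (x, y, vx, vy), measurement (x, y): constant-velocity model
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-3

def on_frame(bgr_frame):
    predicted = kf.predict()                       # a-priori estimate
    x, y = detect_target(bgr_frame)                # hypothetical detector
    kf.correct(np.array([[x], [y]], np.float32))   # a-posteriori update
    return predicted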