Thank you for releasing the ANAFI SDK, it has been great so far. However, I have an issue getting the video stream into Olympe. I would like to use the video for some real-time image processing and control.
I used Drone.set_streaming_callbacks to register my callback methods, then called Drone.start_video_streaming() to start streaming.
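For reference, my setup looks roughly like this (simplified, with the callback bodies trimmed; the callback keyword names match my script and 10.202.0.1 is the default Sphinx drone IP):

import olympe

drone = olympe.Drone("10.202.0.1")
drone.connection()

def raw_data_cb(frame):
    pass  # never called

def h264_data_cb(frame):
    pass  # called with an olympe_deps.LP_c_ubyte

def h264_meta_cb(meta):
    pass  # called with the metadata dict shown below

drone.set_streaming_callbacks(
    raw_data_cb=raw_data_cb,
    h264_data_cb=h264_data_cb,
    h264_meta_cb=h264_meta_cb,
)
drone.start_video_streaming()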
When I set a callback for raw data (raw_data_cb), the method isn't called at all.
When I set a callback for encoded data (h264_data_cb), the method is called and receives an olympe_deps.LP_c_ubyte object. This object has a .contents attribute which seems empty: c_ubyte(0). The h264_meta_cb callback also receives metadata like this:
{'format': 'h264', 'h264': {'format': 2, 'is_complete': 1, 'is_sync': 0, 'is_ref': 1}, 'has_errors': 1, 'is_silent': 0, 'ntp_timestamp': 20862132889, 'ntp_unskewed_timestamp': 20862132441, 'ntp_raw_timestamp': 20862154778, 'ntp_raw_unskewed_timestamp': 20862154330, 'play_timestamp': 20862132224, 'capture_timestamp': 55483817, 'local_timestamp': 29305600554, 'metadata': {'drone_quat': {'w': 0.71746826171875, 'x': -0.00152587890625, 'y': 0.00115966796875, 'z': 0.696533203125}, …
I am trying this in simulation (Sphinx) with the default anafi4k.drone settings, which have the front cam enabled:
with_front_cam="true"
simple_front_cam="true"
with_gimbal="true"
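Concretely, I launch the simulation with the stock drone file, something like this (assuming the default Sphinx install path):
$ sphinx /opt/parrot-sphinx/usr/share/sphinx/drones/anafi4k.drone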
A few event messages that may be useful for debugging:
13/05/2019 15:55:44.114295 _ready_to_play _ready_to_play(1) called
I rtsp_client: send RTSP request PLAY: cseq=4 session=515dc65b8dcf5156
I rtsp_client: response to RTSP request PLAY: status=200(OK) cseq=4 session=515dc65b8dcf5156 req_status=OK
I vstrm: receiver: init_source: ssrc=0x0702a52a seq=0
I vstrm: receiver: init_seq: seq=0
I pdraw_dmxstrm: new output media
I pdraw_source: 'StreamDemuxerNet': add port for media id=1 type=VIDEO
E vdec_ffmpeg: vdec_ffmpeg_new:760: codec not found err=2(No such file or directory)
E pdraw_decavc: AvcDecoder:116: vdec_new err=2(No such file or directory)
I pdraw_sink: 'AvcDecoder': link media id=1 type=VIDEO
E pdraw_decavc: decoder is not created
E pdraw_session: addDecoderForMedia:1629: decoder->start err=71(Protocol error)
E pdraw_session: onOutputMediaAdded:2033: addDecoderForMedia err=71(Protocol error)
13/05/2019 15:55:44.276527 _media_added _media_added id : 1
I pdraw_element: 'VideoSink': element state change to CREATED
I pdraw_sink: 'VideoSink': link media id=1 type=VIDEO
I pdraw_element: 'VideoSink': element state change to STARTING
I pdraw_element: 'VideoSink': element state change to STARTED
I pdraw_source: 'StreamDemuxerNet': link media id=1 type=VIDEO (channel key=0x7fe16406b780)
What am I missing here? And what is the best way to get the video stream from the camera for real-time image processing and control?
Sorry, this is a build configuration issue that will be fixed in a future release.
The software decoder is simply not enabled in the alchemy build configuration file.
The configuration file ./products/olympe/linux/config/ffmpeg-libav.config should contain the following line:
CONFIG_FFMPEG_AVC_DECODING=y
instead of:
# CONFIG_FFMPEG_AVC_DECODING is not set
Could you retry getting the video stream after applying this fix?
$ cd ~/code/parrot-groundsdk
$ sed -ie 's/# \(.*\)_AVC_DECODING.*/\1_AVC_DECODING=y/' ./products/olympe/linux/config/ffmpeg-libav.config
$ ./build.sh -p olympe-linux -A all final -j
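You can check that the substitution took effect before rebuilding:
$ grep AVC_DECODING ./products/olympe/linux/config/ffmpeg-libav.config
CONFIG_FFMPEG_AVC_DECODING=y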
Thank you @ndessart. Now my callback method gets called. However, as pointed out by @cartovarc, I am not sure how to extract the frame from the olympe_deps.LP_c_ubyte object. I couldn't find any info about this. I looked into the olympe_deps module but had no luck finding how to get the frame out.
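For what it's worth, the generic ctypes approach I expected to work is something like this (to_ndarray and frame_size are hypothetical; it is only valid if the pointer really addresses a contiguous buffer of frame_size bytes):

import ctypes
import numpy

def to_ndarray(ptr, frame_size):
    # reinterpret the LP_c_ubyte pointer as a fixed-size byte array
    buf = ctypes.cast(ptr, ctypes.POINTER(ctypes.c_ubyte * frame_size))
    # ctypes arrays expose the buffer protocol, so numpy can wrap them
    return numpy.frombuffer(buf.contents, dtype=numpy.uint8)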
@cartovarc, I believe you don't need to use pdraw to get the video stream into Olympe. It's already integrated; you can see that in the event messages.
So far, I have managed to get the stream data into the callback method, but I am having trouble getting the frame out of this data. I understand from the event messages that the frame size is 1280x720.
I can access the data by indexing like this:
import numpy

def video_data_callback(frame):
    buf = numpy.zeros(1280 * 720, numpy.uint8)
    # copying byte by byte; any index beyond 1280 * 270 crashes (see below)
    for i in range(1280 * 270):
        buf[i] = frame[i]
This is a single channel only. However, when I reshape it into an image, I do not get the frame the camera is actually capturing. If I try to access an index beyond 1280 * 270, I get this error:
Fatal Python error: Segmentation fault
Thread 0x00007f773eb63700 (most recent call first):
File "/home/petronas-tareq/code/parrot-groundsdk/packages/olympe/src/olympe/_private/pomp_loop_thread.py", line 242 in _wait_and_process
File "/home/petronas-tareq/code/parrot-groundsdk/packages/olympe/src/olympe/_private/pomp_loop_thread.py", line 230 in run
File "/usr/lib/python3.6/threading.py", line 916 in _bootstrap_inner
File "/usr/lib/python3.6/threading.py", line 884 in _bootstrap
I'll try to answer all your questions here but, in a nutshell, the Olympe streaming API is currently broken.
Some late changes in Olympe and in the libpdraw API introduced some bugs in the Olympe streaming feature that went unnoticed until now.
We are in the process of pushing some corrections on Github. Hopefully, this shouldn’t take too long.
The first issue is that the ffmpeg software decoder is not activated in the build configuration.
But even when you apply that fix, you still run into the streaming bugs described above.
This is weird; the fix should have gotten you a little farther, since it does work for Tareq.
Anyway, you would still be stuck on the aforementioned issues, so there is no point in retrying for now.
Once the streaming issues are fixed, I will update the documentation with a numpy/opencv example that should clarify a few things.
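To give you an idea, the conversion involved will look roughly like this (a sketch only: it assumes the fixed callback eventually hands you a full 1280x720 I420 frame as a flat byte buffer, and the final callback signature may differ):

import cv2
import numpy as np

def yuv_frame_cb(yuv_data, width=1280, height=720):
    # an I420 frame is a full-resolution Y plane followed by
    # quarter-resolution U and V planes: width * height * 3 / 2 bytes
    frame = np.frombuffer(yuv_data, dtype=np.uint8, count=width * height * 3 // 2)
    frame = frame.reshape((height * 3 // 2, width))
    # convert to a BGR image usable with the rest of OpenCV
    return cv2.cvtColor(frame, cv2.COLOR_YUV2BGR_I420)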
Yes, libpdraw is used internally by Olympe so there is no need to use the PDrAW application alongside Olympe.
Thank you for checking this out. Do you already know if this is an internal Olympe issue that could be "quickly" fixed, or a firmware issue that would require a new firmware release?
Besides the build configuration issue, this will entirely be fixed in Olympe (in the olympe.arsdkng.pdraw module). No need for a new firmware release or something like that.
Olympe currently doesn’t correctly use the libpdraw API. For the decoded YUV stream, it currently interprets a pointer to a complex opaque structure as a frame buffer of 8 bytes…
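To illustrate that class of bug with a generic ctypes sketch (OpaqueFrame is a made-up stand-in, not the actual libpdraw structure):

import ctypes

class OpaqueFrame(ctypes.Structure):
    # stand-in for the real, opaque libpdraw frame structure
    _fields_ = [("planes", ctypes.c_void_p)]  # 8 bytes on a 64-bit system

frame = OpaqueFrame()
# Wrongly treating the structure itself as pixel data only yields
# sizeof(OpaqueFrame) == 8 valid bytes instead of a 1280x720 frame;
# reading past them is undefined behavior, hence the segfaults.
size = ctypes.sizeof(OpaqueFrame)
as_bytes = ctypes.cast(ctypes.pointer(frame), ctypes.POINTER(ctypes.c_ubyte * size))
print(len(as_bytes.contents))  # 8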
We are currently in the process of validating the correction.
Once the problem is fixed, you will be able to use Olympe to connect to any drone (simulated or “real”) and get the video stream.
This is a packaging issue; the script is missing from the 1.2.1 release of Sphinx. It will be fixed in the next Sphinx release (1.2.2 or 1.3, which should come soon). Thank you!