Drone: Parrot Anafi USA
JetPack Version: 5.1.1
OS: Linux4Tegra Ubuntu 20.04
Hello, I am developing software with GroundSDK-Tools to obtain the RTSP video stream from my Anafi USA and re-stream it using GStreamer and OpenCV. I am using the C/C++ PDrAW libraries in a similar way to what is described in this post.
I am using pdraw-backend to obtain the raw frames and then streaming them out with OpenCV. The problem is that PDrAW software-decodes the incoming video stream, which uses up too much of the Xavier's CPU. Is it possible for PDrAW to decode the Anafi USA's stream in hardware, specifically on the NVIDIA Jetson Orin/Xavier boards?
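For context, the re-streaming side currently looks roughly like the sketch below (simplified; the resolution, framerate, destination address, and the on_raw_frame() callback name are placeholders standing in for the pdraw-backend raw video sink hookup):

```cpp
// Simplified sketch of the OpenCV re-streaming side.
// The encoder here is software x264; the decoding that feeds on_raw_frame()
// is the part PDrAW is currently doing in software on the CPU.
#include <opencv2/opencv.hpp>
#include <cstdint>
#include <string>

static cv::VideoWriter writer;

void init_writer(int width, int height, double fps)
{
    // GStreamer pipeline fed through OpenCV; host/port are placeholders.
    std::string pipeline =
        "appsrc ! videoconvert ! x264enc tune=zerolatency ! "
        "rtph264pay ! udpsink host=127.0.0.1 port=5600";
    writer.open(pipeline, cv::CAP_GSTREAMER, 0, fps, cv::Size(width, height), true);
}

// Called for every decoded frame delivered by pdraw-backend
// (already converted to packed BGR in this sketch).
void on_raw_frame(const uint8_t *data, int width, int height)
{
    cv::Mat frame(height, width, CV_8UC3, const_cast<uint8_t *>(data));
    writer.write(frame);
}
```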
I tried getting NVDEC/CUVID working by building PDrAW with nv-codec-headers installed. However, the Jetson does not appear to expose NVDEC through CUVID and does not ship libnvcuvid.so, so PDrAW simply crashes. It appears that the Jetson platform exposes NVDEC through V4L2, which is not supported by the version of FFmpeg included with the PDrAW libraries. I also tried NVIDIA's fork of FFmpeg, which adds NVDEC support through V4L2, but it appears I would need to rewrite a large portion of the PDrAW dependencies (ffmpeg, libvideo-decode, …) to make that work.
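A small probe like the one below can confirm which H.264 decoders the linked FFmpeg actually provides (the hardware decoder names here, h264_cuvid for the desktop CUVID build and h264_nvv4l2dec for NVIDIA's V4L2-based Jetson fork, are my assumptions):

```cpp
// Probe which H.264 decoders the FFmpeg build that PDrAW links against exposes.
// "h264" is the software decoder; the other two names are assumptions for the
// desktop CUVID decoder and NVIDIA's V4L2-based Jetson decoder respectively.
extern "C" {
#include <libavcodec/avcodec.h>
}
#include <cstdio>

int main()
{
    const char *names[] = { "h264", "h264_cuvid", "h264_nvv4l2dec" };
    for (const char *name : names) {
        const AVCodec *codec = avcodec_find_decoder_by_name(name);
        std::printf("%-16s %s\n", name, codec ? "available" : "not available");
    }
    return 0;
}
```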
Before I commit a large amount of time to a project like this, I would like to know whether there are any other methods for getting hardware-accelerated decoding working with PDrAW on the Orin NX.
Any help would be appreciated. Thank you.