Video-stream rendering quality

Hey there!

I’ve been playing around for a while with the Anafi’s video streaming in Olympe, and I find it great how easy it is to process the stream in real time or after the run. However, I can’t figure out why the video rendering is sometimes so bad. For example:

  1. In this example the video recording is good, ignoring the first 2-3 seconds, which is usual.
  2. However, in this recorded run the quality is extremely bad.

Does this happen for a particular reason? How can I mitigate it?

NB: Both videos were recorded from the H.264 stream while applying a Canny edge-detector filter to the YUV stream.
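For context, the recording and processing setup is roughly the following (a simplified sketch based on the Olympe streaming example; the drone IP, file names and Canny thresholds are placeholders, and the exact streaming API names may differ between Olympe versions):

```python
import cv2
import olympe

DRONE_IP = "10.202.0.1"  # default Sphinx drone IP (placeholder)

def yuv_frame_cb(yuv_frame):
    # Called by Olympe for every decoded YUV frame of the live stream.
    info = yuv_frame.info()
    # Map the PDRAW pixel format to the matching OpenCV conversion flag.
    cv2_cvt_color_flag = {
        olympe.PDRAW_YUV_FORMAT_I420: cv2.COLOR_YUV2BGR_I420,
        olympe.PDRAW_YUV_FORMAT_NV12: cv2.COLOR_YUV2BGR_NV12,
    }[info["yuv"]["format"]]
    bgr = cv2.cvtColor(yuv_frame.as_ndarray(), cv2_cvt_color_flag)
    # Canny thresholds here are placeholders, not tuned values.
    edges = cv2.Canny(bgr, 100, 200)
    # In practice the frames/edges are pushed to a queue and consumed
    # in the main thread, as in the official streaming example.

drone = olympe.Drone(DRONE_IP)
drone.connect()
# Record the raw H.264 stream to disk while the YUV callback runs.
drone.set_streaming_output_files(
    h264_data_file="h264_data.264",
    h264_meta_file="h264_metadata.json",
)
drone.set_streaming_callbacks(raw_cb=yuv_frame_cb)
drone.start_video_streaming()
```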

Hi,

Do you stream the video over a wifi interface?

Also, you should verify that there is nothing on your workstation (other than Sphinx) that might be using too many CPU and/or GPU resources. The video streaming is sensitive to the real-time factor (the ratio between the simulated clock and the real clock). Ideally, this factor should be nearly equal to 1.0 and steady over (real) time. The current real-time factor is displayed below the main 3D view in the Sphinx GUI.

Hey @ndessart,

No, it was streamed on the same PC on which I was running the Sphinx simulation. I will check the real-time factor. Is there any way to ensure that it stays close to 1.0? What if it usually ranges from roughly 0.4 to 0.7? Should I then drop the camera sensor support for now? Would the same happen if I instead recorded the YUV stream, did the computer vision on it, and then converted the .yuv to .mp4 with ffmpeg?
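For reference, by "ffmpeg the .yuv to .mp4" I mean something along these lines (a rough sketch; a raw .yuv dump carries no header, so the pixel format, resolution and frame rate below are assumptions that would have to match the actual recording):

```python
import subprocess

# Assumed stream parameters; they must match the recorded raw stream exactly.
WIDTH, HEIGHT, FPS = 1280, 720, 30

subprocess.run([
    "ffmpeg",
    "-f", "rawvideo",             # headerless raw video input
    "-pixel_format", "yuv420p",   # assumed pixel format
    "-video_size", f"{WIDTH}x{HEIGHT}",
    "-framerate", str(FPS),
    "-i", "recording.yuv",        # placeholder input file
    "-c:v", "libx264",
    "recording.mp4",              # placeholder output file
], check=True)
```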

Thanks in advance!

Yes, you need a machine with a high-end NVIDIA GPU dedicated to Sphinx, i.e. don’t expect good performance if you run any computationally intensive CPU or GPU task on the same hardware.

The variation of the real-time factor might be caused by other workloads running on the same hardware. Can you reproduce this issue without those workloads?

A 0.7 real-time factor is a decent value for most use cases, but you should keep aiming for 0.90-0.95 when the simple front cam parameter is activated and a high-end GPU is used (on dedicated hardware).

The firmware can tolerate some small variations of the real-time factor. However, if the real-time factor varies significantly over time (say, +/-10-30% or more within one second), the simulated firmware might not appreciate it (it might encounter errors and/or crash). You should really try to minimize the variation of the real-time factor. Ideally, any other workload should be offloaded to another computer.

That being said, the simulated firmware can still run properly with a real-time factor well below 0.4, as long as you can keep it steady there. The problem with low real-time factors is that Olympe runs on the real-time clock and might time out waiting for a video frame or an SDK event from a slowly running simulated drone firmware…
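To illustrate that timeout side: Olympe expectations accept a `_timeout` parameter, so with a low but steady real-time factor you can give the simulated firmware more time to react, along these lines (a minimal sketch; the IP and the 30-second value are arbitrary):

```python
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff
from olympe.messages.ardrone3.PilotingState import FlyingStateChanged

drone = olympe.Drone("10.202.0.1")  # default Sphinx drone IP (placeholder)
drone.connect()

# With a slow (but steady) simulation, the firmware reacts later than it
# would in real time, so a generous _timeout avoids spurious Olympe timeouts.
assert drone(
    TakeOff() >> FlyingStateChanged(state="hovering", _timeout=30)
).wait().success()

drone.disconnect()
```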

I don’t know what you mean. Drop the camera sensor support from your application? What’s your application?

As I said, I wouldn’t be surprised if your video quality issues were due to another heavy computational task running on the same hardware as Sphinx. If you want to do real-time computer vision, you should probably offload that task to another computer. If post-processing is an option, then you would just have to process the recorded .mp4 (recording the .yuv stream is usually a bad idea unless you have access to a huge storage disk array on your workstation…).
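For example, post-processing the recorded file offline could look roughly like this (a minimal OpenCV sketch; the file name and Canny thresholds are placeholders):

```python
import cv2

# Placeholder path to the recorded video.
cap = cv2.VideoCapture("recording.mp4")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of file (or read error)
    # Same Canny edge detection as in the live pipeline; thresholds are placeholders.
    edges = cv2.Canny(frame, 100, 200)
    # ... further processing or saving of `edges` goes here ...

cap.release()
```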
