Yes, you need a high-end NVIDIA GPU machine dedicated to Sphinx, i.e. don't expect good performance if you run any computationally intensive CPU or GPU task on the same hardware.
The variation of the real-time factor might be caused by other workloads running on the same hardware. Can you reproduce this issue without those workloads?
A 0.7 real-time factor is a decent value for most use cases, but you should keep aiming for 0.90-0.95 when the simple front cam parameter is activated with a high-end GPU (on dedicated hardware).
The firmware can tolerate small variations of the real-time factor. However, if the real-time factor varies significantly over time (like +/-10-30% or more within one second), the simulated firmware might not cope well (it might encounter errors and/or crash). You should really try to minimize the variation of the real-time factor. Ideally, any other workload should be offloaded to another computer.
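If you want to quantify that variation, here is a rough sketch. It assumes a Gazebo-based Sphinx install where the `gz stats` tool is available and prints its usual `Factor[...]` output; check what your version actually exposes:

```python
# Rough sketch: sample the real-time factor from `gz stats` and report
# its spread over a sliding window so sudden swings become visible.
# Assumes a Gazebo-based Sphinx and the standard `gz stats` output format.
import re
import subprocess

proc = subprocess.Popen(["gz", "stats"], stdout=subprocess.PIPE, text=True)
factors = []
for line in proc.stdout:
    m = re.search(r"Factor\[([\d.]+)\]", line)
    if not m:
        continue
    factors.append(float(m.group(1)))
    window = factors[-10:]  # spread over the last ~10 samples
    if len(window) == 10:
        print(f"RTF last={window[-1]:.2f} "
              f"min={min(window):.2f} max={max(window):.2f}")
```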
That being said, the simulated firmware can still run properly even if the real-time factor goes well below 0.4, as long as you can keep it steady there. The problem with low real-time factors is that Olympe runs on the real-time (wall) clock and might time out waiting for a video frame or an SDK event from a slowly running simulated drone firmware…
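If you have to live with a low-but-steady real-time factor, one workaround on the Olympe side is to loosen the per-expectation timeouts. A minimal sketch, assuming the usual Sphinx virtual drone IP (10.202.0.1) and a timeout value you would tune to your actual real-time factor:

```python
# Minimal sketch: widen Olympe expectation timeouts so a slow (but
# steady) simulation doesn't trip them. Adjust the IP for your setup.
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff
from olympe.messages.ardrone3.PilotingState import FlyingStateChanged

drone = olympe.Drone("10.202.0.1")
drone.connect()
# With a real-time factor around 0.4, SDK events arrive ~2.5x slower
# than wall-clock time, so scale the _timeout accordingly.
drone(
    TakeOff()
    >> FlyingStateChanged(state="hovering", _timeout=30)
).wait()
drone.disconnect()
```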
I don’t know what you mean. Drop the camera sensor support from your application? What’s your application?
As I said, I wouldn't be surprised if your video quality issues are due to another heavy computational task running on the same hardware as Sphinx. If you want to do real-time computer vision, you should probably offload that task to another computer. If post-processing is an option, then you would just have to process the recorded .mp4 (recording the .yuv stream is usually a bad idea unless you have access to a huge storage disk array on your workstation…).
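For the post-processing route, something as simple as the following is enough to iterate over the recorded .mp4 offline. A minimal sketch, assuming OpenCV is installed; `flight_recording.mp4` is a hypothetical path to your own recording:

```python
# Minimal sketch: decode a recorded .mp4 frame by frame for offline
# computer vision, instead of processing the live stream in real time.
import cv2

cap = cv2.VideoCapture("flight_recording.mp4")  # hypothetical path
while True:
    ok, frame = cap.read()
    if not ok:
        break  # end of recording
    # Run your offline vision processing here, e.g. a simple gray pass:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
cap.release()
```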