Which video feed should we use for computer vision services


I’m developing computer vision services, and when the drone is flying, the do_step function sometimes isn’t called for a few seconds (which is critical for our autonomous flight).

I’m trying to debug this, but it’s hard because it only happens when flying with a real drone; with the simulated drone, everything works perfectly. Since we can’t access the drone’s logs and don’t know how to write our own, I’m wondering which video feed I should use for a computer vision service based on the frontal camera.

I started from airsdk-sample and changed the VIPC STREAM to use fcam_streaming. Is this the best way to create computer vision services for autonomous flight? I can see a lot of mutex mechanisms in the service, and I’m wondering whether this feed is the right one for what I want, since fcam_streaming is also used by the streaming process.

Otherwise, is there any other way to do front-camera-based computer vision?


Hi guys,

Can anyone help me?
I tried “fcam_streaming”, but my process has a lot of trouble locking the mutex (maybe because the drone is already using the image for streaming). I also tried “fcam_recording”, but I’m not sure it’s the right way to build a computer vision service.

Any help?



fcam_streaming (Video feeds - 7.7) is appropriate for computer vision services.
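On the mutex contention you mention: rather than blocking in do_step while the streaming process holds the frame, you can try the lock non-blockingly and simply skip that frame when it is busy, so your loop keeps running. A minimal sketch with std::mutex — the mutex name and the callback are illustrative placeholders, not AirSDK API:

```cpp
#include <mutex>
#include <utility>

// Illustrative shared mutex guarding the latest frame (placeholder, not AirSDK API).
std::mutex g_frame_mutex;

// Try to process the current frame without blocking.
// Returns false (frame dropped) when the lock is already held, e.g. by the
// streaming process, instead of stalling the do_step loop.
template <typename Fn>
bool try_process_frame(Fn&& process_frame) {
    std::unique_lock<std::mutex> lock(g_frame_mutex, std::try_to_lock);
    if (!lock.owns_lock())
        return false;  // frame busy: drop it and return immediately
    std::forward<Fn>(process_frame)();
    return true;
}
```

Dropping a contended frame is usually acceptable for tracking, since a fresh frame arrives 33 ms later anyway.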

Do you have an idea of the computation time of your algorithm/service? Can you try decimating the input images from 30 Hz down to 10 or 5 Hz?
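A simple way to decimate, if frames arrive through a callback, is to keep a counter and only process every Nth frame (30 Hz → 10 Hz with N = 3, → 5 Hz with N = 6). A minimal sketch; the class and names are illustrative, not part of the SDK:

```cpp
#include <cstdint>

// Keeps 1 frame out of every `ratio` frames, e.g. ratio = 3 turns a
// 30 Hz feed into an effective 10 Hz processing rate.
class FrameDecimator {
public:
    explicit FrameDecimator(unsigned ratio) : ratio_(ratio), count_(0) {}

    // Call once per incoming frame; returns true when this frame
    // should actually be processed.
    bool accept() { return (count_++ % ratio_) == 0; }

private:
    unsigned ratio_;
    std::uint64_t count_;
};
```

In the frame callback you would guard the expensive processing with `if (decimator.accept()) { /* run detection */ }` and drop the frame otherwise.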


Hi @Ronan,

We run cv::detectMarkers on the resized image and it takes a lot of time (~500 ms on 720p images). When we run it on fcam_streaming, our algorithm isn’t executed for a few seconds, every few seconds… It’s a mess for the tracking algorithm.

How do we do the decimation from 30 Hz to 10/5 Hz?


This topic was automatically closed after 30 days. New replies are no longer allowed.