TARGET_CPU of ComputerVision services


I’m trying to create a computer-vision-based mission using AirSDK.
I can see that the target CPU in the classic folder is hi3559. Can you confirm that our computer vision code will run on this SoC? Is there a more powerful CPU onboard that I can deploy my code on?



The SoC in ANAFI Ai is indeed hi3559. AirSDK mission code runs on the ARM Cortex A73 and A53 cores, alongside Linux and all the other processes (flight control, computer vision algorithms, obstacle avoidance, media recording and streaming…).

There is no way to answer your question, as it depends totally on what algorithm you are running, on which source images (resolution and framerate) and whether you are using hardware optimizations (such as NEON) or not.

In any case, if you are running your algorithm at the same time as all the drone features (OA, 4K video recording, etc.), there isn’t much CPU time left.


Hi @Akaaba,

Thank you for your answer.

We bought the ANAFI Ai to run computer vision algorithms onboard and we’re surprised by the processing times we get in a custom AirSDK mission. To give you an example, the OpenCV ArUco detection function alone takes around 50 ms on a 180p grayscale image, when it takes 1 ms on a laptop, and it goes up to 200 ms when the drone is flying.

I know we have to work on optimization to run embedded computer vision algorithms, but we are surprised that you manage to run sophisticated computer vision algorithms in real time on the same board where a simple ArUco detection takes 200 ms.

Besides, you say that AirSDK mission code runs alongside the other processes (flight control, computer vision algorithms, obstacle avoidance, media recording and streaming…). Is there a way to disable unused computer vision algorithms to get more CPU time?

Finally, you mention hardware optimizations. Are the OpenCV libraries compiled with all hardware optimization flags?


Where are you getting a 180p grayscale image from? Do you resize it or convert it to another format?

There is no way to disable running features on the drone, apart from not using the cameraman feature, not using obstacle avoidance if not needed, and not recording video at full specs (4K60, 4K30 HDR10, or 1080p120) while your algorithms are running.

To my knowledge, OpenCV is built with all available optimizations, as we use it for our own CV algorithms.

I’m downsampling the 720p image from fcam_streaming to 180p to get down to 200 ms of processing time while flying. Otherwise, it’s more like 1 second to detect an AprilTag in the 720p image.
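As a side note, if the resize itself turns out to be expensive, plain decimation (keeping every Nth pixel) is about the cheapest possible downsample. A minimal NumPy sketch, assuming a 720p (1280×720) grayscale frame held in a NumPy array:

```python
import numpy as np

def decimate(gray, factor=4):
    """Naive downsample by keeping every `factor`-th pixel.

    Much cheaper than an interpolating resize (it is just a view,
    no pixels are computed), at the cost of aliasing; often fine
    for coarse marker-detection experiments.
    """
    return gray[::factor, ::factor]

# Hypothetical 720p grayscale frame (720 rows x 1280 cols)
frame = np.zeros((720, 1280), dtype=np.uint8)
small = decimate(frame)
print(small.shape)  # (180, 320)
```

Whether the aliasing hurts ArUco detection depends on the marker size in the image, so it is worth comparing against `cv2.resize` on real frames.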

Unfortunately, we already don’t use the cameraman feature, nor do we record video.

fcam_streaming is 1080p by default and the resolution varies down to 144p when connected to a FF7 streaming client.

If you are downsampling the video using the CPU, it likely takes a lot of time, perhaps more than the ArUco detection itself. This is something you should benchmark.
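A minimal way to separate the two costs is to time each stage independently. A sketch using only the standard library, where `downsample` and `detect_markers` are placeholders standing in for the actual resize and ArUco calls (they are not AirSDK or OpenCV API):

```python
import time

def benchmark(fn, *args, repeats=50):
    """Return the average wall-clock time of fn(*args) in milliseconds."""
    start = time.perf_counter()
    for _ in range(repeats):
        fn(*args)
    return (time.perf_counter() - start) / repeats * 1000.0

# Placeholders for the real processing stages:
downsample = lambda img: img          # e.g. a cv2.resize call
detect_markers = lambda img: []       # e.g. a cv2.aruco detection call

frame = object()                      # stand-in for a captured frame
print(f"resize: {benchmark(downsample, frame):.3f} ms")
print(f"detect: {benchmark(detect_markers, frame):.3f} ms")
```

Averaging over many iterations matters here, since scheduling jitter on a loaded system (as described above, with OA and streaming running) can easily dominate a single measurement.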

That’s what I did, but it seems the ArUco detection takes most of the time.

It’s not easy to debug, as I cannot read the logs because they are encrypted, and I don’t know how to write my own logs that I could download after a flight (Get log of real drone - #3 by ClementLeBihan), as adb is only available to Parrot developers…