Running Sphinx inside a Docker container

Hello,

I am trying to deploy Sphinx on a remote GPU server so that all the hard work is done remotely. To this end, I plan to run sphinx-server inside a Docker container.

However, due to the firmwared service, I am not able to run the simulator; I get the following error:

System has not been booted with systemd as init system (PID 1). Can’t operate.

Is there any way to get Sphinx server up and running in a container, or otherwise in a container-like environment?

Thanks in advance!


Hello,

That is really cool, and except for the Docker part this should be relatively easy. Just follow the documentation on the subject: https://developer.parrot.com/docs/sphinx/remote.html

This probably means that your docker container does not run systemd as PID 1. systemd is responsible (among other things) for starting system services like firmwared. Without systemd running, you just can't invoke systemctl start firmwared. Try sudo firmwared & to launch firmwared without systemd.
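As a minimal sketch of that workaround (fdc is the firmwared command-line client shipped with Sphinx; if my memory serves, it answers PONG once the daemon is ready):

    # start firmwared manually, since there is no systemd to do it for us
    sudo firmwared &
    # give the daemon a moment to create its socket, then check it
    sleep 2
    fdc ping    # expected answer: PONG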

Running sphinx-server in a docker container is relatively hard. You'll probably hit other issues:

  • you need to create an X server DISPLAY (or share the host one)
  • sphinx needs access to the nvidia driver libraries if you want to use your GPU to its full potential
  • you need to give the container a lot of privileges for firmwared to run properly
  • (optionally) share your host wifi interface

Off the top of my head, the docker run command line should look like:

   # /tmp directory that will be bind mounted inside the docker container
   # This should be a regular filesystem (ext4 ?) and not some
   # exotic union fs (overlayfs, aufs, ...)
   tmp_dir="/tmp"
   [[ -d "${tmp_dir}" ]] || mkdir -p "${tmp_dir}"
   # Path to the host nvidia drivers
   NVIDIA_PATH=/usr/lib/nvidia-375
   docker run -it --rm \
   --privileged \
   `# Graphics setup` \
   -v /tmp/.X11-unix:/tmp/.X11-unix:ro      `# share the host X server socket` \
   -e DISPLAY="${DISPLAY}"                  `# share the host display` \
   -v ${NVIDIA_PATH}:${NVIDIA_PATH}:ro      `# bind mount the host nvidia drivers` \
   -e NVIDIA_PATH=${NVIDIA_PATH} \
   -v "${tmp_dir}":/tmp                     `# Note that /tmp in the container is` \
                                            `# not in an overlay filesystem.` \
                                            `# This is important because firmwared` \
                                            `# will (ab)use overlayfs there.` \
   --net=host                               `# (optionally) share the host net` \
                                            `# namespace to use your wifi interface` \
   <your_docker_image_name> <args...>

Please note that firmwared is a container engine in its own right (like docker), and it does not like sitting on top of a union filesystem such as overlayfs (the default filesystem used by both docker and firmwared). At Parrot, we have already encountered kernel-related issues with firmwared inside docker, and this is why it is important that /tmp inside the container is part of a regular filesystem: ext4, possibly tmpfs, but not overlayfs or aufs.
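One quick way to verify this from inside a running container (a sketch; it assumes findmnt from util-linux is available in the image):

    # the filesystem backing /tmp must not be overlay or aufs
    findmnt -n -o FSTYPE /tmp    # expect ext4 (bind mount) or tmpfs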

Regards

Nicolas

Hey Nicolas,

Thanks for the quick answer. I already had the display and GPU parts done, but I'll definitely give the FS stuff a try. As for the connection, I can connect with the Olympe framework. Do you know if it works for models other than the Anafi?

Thanks!

You should be able to use the following method to connect to a Bebop2 drone with Olympe:

import olympe
import olympe_deps as od
drone = olympe.Drone("10.202.0.1", drone_type=od.ARSDK_DEVICE_TYPE_BEBOP_2)

The drone_type Drone constructor parameter is mandatory for pre-Anafi drones.
It's just an integer ID defined in libarsdk (exposed through the olympe_deps module). A fuller connection example follows the list below.

  • Anafi drones (Anafi4K and Anafi Thermal) are fully supported by Olympe
  • Pre-Anafi wifi drones (Bebops, Disco, …) should work, but we might not be able to provide full support for them.
  • Bluetooth drones (i.e. minidrones) won’t work with Olympe.
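For illustration, a minimal end-to-end sketch based on the Olympe 1.x examples (the connection()/disconnection() calls and the ardrone3.Piloting messages are the documented entry points, but check them against your installed Olympe version):

    import olympe
    import olympe_deps as od
    from olympe.messages.ardrone3.Piloting import TakeOff, Landing

    # connect to the simulated Bebop2 exposed by Sphinx at 10.202.0.1
    drone = olympe.Drone("10.202.0.1", drone_type=od.ARSDK_DEVICE_TYPE_BEBOP_2)
    drone.connection()
    # send a command and wait for the matching drone state change
    drone(TakeOff()).wait()
    drone(Landing()).wait()
    drone.disconnection()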

Regards

Nicolas

Hi,

I'm basically interested in the same thing as you and am encountering issues with firmwared as well.

First, I would like to share with you the Dockerfile and run commands: docker/headless-nvidia-parrot-sphinx-host-display at master · jeremyfix/docker · GitHub

This Dockerfile uses a dummy X11 server on the host. The display and GPU parts are handled as well. Maybe we can share our efforts to succeed in running sphinx within a container.
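For reference, a minimal way to provide such a headless display on the host is a virtual framebuffer (a sketch only; the actual setup is in the linked repository, and an Xorg "dummy" driver configuration achieves the same; :99 matches the display number used later in this thread):

    # start a headless X server on display :99
    Xvfb :99 -screen 0 1920x1080x24 &
    export DISPLAY=:99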

The issues I get when running /bin/firmwared within the container are:

root@sh15:/# firmwared
[I] ut_module_init: the content of /proc/sys/kernel/modprobe doesn't point to valid executable 'modprobe', defaulting to /sbin/modprobe
I firmwared_main: firmwared[12] starting
E firmwared_main: initial_cleanup_files scandir: No such file or directory
E apparmor_config: opening /sys/kernel/security/apparmor/profiles failed: No such file or directory
I firmwared_firmwares: indexing firmwares from folder '/usr/share/firmwared/firmwares/'
I firmwared_firmwares: done indexing firmwares
E apparmor_config: AppArmor is not enabled or installed, please see the instructions for your distribution to enable it
E firmwared_main: apparmor_init: Function not implemented
E firmwared_main: init_subsystems: Function not implemented
I firmwared_main: firmwared[12] exiting
root@sh15:/#

Jeremy


@espetro, @ndessart
I can share with you some progress with the docker image at [1] after installing some other packages. I think I succeeded in starting firmwared; there is one error, but it does not seem to be fatal, does it?

root@sh15:/# firmwared &
I firmwared_main: firmwared[15] starting
E firmwared_main: initial_cleanup_files scandir: No such file or directory
I firmwared_firmwares: indexing firmwares from folder '/usr/share/firmwared/firmwares/'
I firmwared_firmwares: done indexing firmwares

Then, running sphinx-server, there are a couple of errors that are indeed fatal, and these issues are raised by firmwared.

root@sh15:/# mount -tsecurityfs securityfs /sys/kernel/security &
root@sh15:/# sphinx-server /opt/parrot-sphinx/usr/share/sphinx/worlds/outdoor_1.world /opt/parrot-sphinx/usr/share/sphinx/drones/bebop2.drone
Parrot-Sphinx simulator version 1.2.1

connecting to firmwared version: 1.2.1
Gazebo multi-robot simulator, version 7.0.1
Copyright (C) 2012-2015 Open Source Robotics Foundation.
Released under the Apache 2 License.
http://gazebosim.org

[Msg] Waiting for master.
[Msg] Connected to gazebo master @ http://127.0.0.1:11345
[Msg] Publicized address: 127.0.0.1
[Wrn] [ColladaLoader.cc:1763] Triangle input semantic: 'TEXCOORD' is currently not supported
[Wrn] [ColladaLoader.cc:1763] Triangle input semantic: 'TEXCOORD' is currently not supported
[Wrn] [ColladaLoader.cc:1763] Triangle input semantic: 'TEXCOORD' is currently not supported
[Wrn] [ColladaLoader.cc:1763] Triangle input semantic: 'TEXCOORD' is currently not supported
[Wrn] [ColladaLoader.cc:1763] Triangle input semantic: 'TEXCOORD' is currently not supported
[Wrn] [ColladaLoader.cc:1763] Triangle input semantic: 'TEXCOORD' is currently not supported
[Msg] created parameter server on http:8383
[Msg] connected to firmwared
[Msg] Preparation of firmware http://plf.parrot.com/sphinx/firmwares/ardrone3/milos_pc/latest/images/ardrone3-milos_pc.ext2.zip
I shd: wind: created: generation=2 sample_count=4000 sample_size=24 sample_rate=1000 metadata_size=125
W firmwared_log: ls: cannot access '/usr/share/firmwared/firmwares//*.firmware': No such file or directory
W firmwared_log: stat: cannot stat '/usr/share/firmwared/firmwares//ardrone3-milos_pc.ext2.zip.a3223a5b-4bd8-9583-54ad-cb89d0a34087.firmware': No such file or directory
[Msg] preparation of firmwares http://plf.parrot.com/sphinx/firmwares/ardrone3/milos_pc/latest/images/ardrone3-milos_pc.ext2.zip is at 18%

[Msg] preparation of firmwares http://plf.parrot.com/sphinx/firmwares/ardrone3/milos_pc/latest/images/ardrone3-milos_pc.ext2.zip is at 82%
[Msg] preparation of firmwares http://plf.parrot.com/sphinx/firmwares/ardrone3/milos_pc/latest/images/ardrone3-milos_pc.ext2.zip is at 100%
[Msg] firmware /usr/share/firmwared/firmwares//ardrone3-milos_pc.ext2.zip.a3223a5b-4bd8-9583-54ad-cb89d0a34087.firmware supported hardwares:
[Msg] milosboard
I firmwared_instances: init_command_line: ro_boot_console = ro.boot.console=
W firmwared_log: mount: wrong fs type, bad option, bad superblock on firmwared_638e9a1b22d5173d1c6461a0f6cd20a3d45169df,
W firmwared_log: missing codepage or helper program, or other error
W firmwared_log:
W firmwared_log: In some cases useful info is found in syslog - try
W firmwared_log: dmesg | tail or so.
E firmwared_instances: invoke_mount_helper init returned -125
W firmwared_log: umount: /var/cache/firmwared/mount_points/instances/638e9a1b22d5173d1c6461a0f6cd20a3d45169df/union: not mounted
E firmwared_instances: invoke_mount_helper clean returned -125
E firmwared_instances: install_mount_points
I apparmor_config: apparmor_remove_profile(638e9a1b22d5173d1c6461a0f6cd20a3d45169df)
W firmwared_log: /sbin/apparmor_parser: Unable to remove "firmwared_638e9a1b22d5173d1c6461a0f6cd20a3d45169df". Profile doesn't exist
E apparmor_config: /sbin/apparmor_parser exited with status 65024
E firmwared_instances: rmdir '/var/run/firmwared/638e9a1b22d5173d1c6461a0f6cd20a3d45169df/udev' error 2
E firmwared_instances: rmdir '/var/run/firmwared/638e9a1b22d5173d1c6461a0f6cd20a3d45169df' error 2
E firmwared_instances: invoke_mount_helper clean_extra returned -125
E firmwared_instances: invoke_mount_helper clean returned -125
E firmwared_instances: init_instance: mount.hook/init failed.
E firmwared_instances: instance_new(26b4c9adaa72df04d7821f9442cf04112232f549): Unknown error 1026
E firmwared_commands: command_process: mount.hook/init failed.
[Err] [Machine.cc:1256] Received error while preparing instance for machine bebop2: mount.hook/init failed.
[Msg] CleanupInstances
I firmwared_command_dropall: instances dropped 0/0
[Msg] CleanupFirmwares
[Msg] Firmware meretricious_stephanie[26b4c9adaa72df04d7821f9442cf04112232f549] unprepared (unmounted)
I shd: wind: closed

Maybe you have a clue about the issue.

Jeremy.

[1] docker/headless-nvidia-parrot-sphinx-host-display at master · jeremyfix/docker · GitHub

Hi,

I've just forked your repo, performed some corrections, and created a pull request so that you can see what I had to change to make it work.

This seems to be the root cause of your error: "/var/cache/firmwared/" has to be a regular filesystem inside the container, not an overlay filesystem.
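A minimal sketch of two ways to satisfy this from the docker run command line (the --tmpfs variant is an assumption on my part, i.e. that a tmpfs is acceptable to firmwared there, as it is for /tmp):

    # keep firmwared's working directory off the overlayfs container root
    docker run ... --tmpfs /var/cache/firmwared ...
    # or bind mount a directory from the host's regular (e.g. ext4) filesystem
    docker run ... -v /var/cache/firmwared:/var/cache/firmwared ...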

Thanks for your help; it definitely helped me progress, though I did not have to use "sudo xhost +" on my host.

There are still some errors and warnings displayed in the log. They do not kill the running sphinx-server, but maybe they prevent proper functioning; I have to check.

For info, below are the parts of the logs that seem interesting to me, with the warnings/errors.

make run
root@sh15: ./run.sh 
[.. pretty long log....]
[Msg] created parameter server on http:8383
[Dbg] [Iio.cc:33] Creating IfIio object 'iio_simulator.sock'
[Dbg] [MachineManager.cc:448] anafi4k: Machine(name = "anafi4k", firmware = "http://plf.parrot.com/sphinx/firmwares/anafi/pc/latest/images/anafi-pc.ext2.zip")
property interface = eth1
[Msg] connected to firmwared
[...]

I firmwared_instances: init_command_line: ro_boot_console = ro.boot.console=
I firmwared_instances: OUTER_PTS is /dev/pts/1
I firmwared_instances: INNER_PTS is /dev/pts/2
I apparmor_config: apparmor_load_profile(b6f267d639d45108d940785c5b22ade587b6f288)
W firmwared_log: /usr/bin/env: 'python': No such file or directory
W firmwared_instances: invoke_post_prepare_instance_helper failed: -125
[....]
[Dbg] [MachineManager.cc:806] All machines have had their properties set
[Msg] WEB DASHBOARD IS ACCESSIBLE at http://localhost:9002
I firmwared_instances: launch_instance "/usr/share/firmwared/firmwares//anafi-pc.ext2.zip.34b8603f-cc3e-4bc2-8bbc-c4b1df6ef0ed.firmware"
W firmwared_log: modprobe: ERROR: ../libkmod/libkmod.c:586 kmod_search_moddep() could not open moddep file '/lib/modules/4.4.0-140-generic/modules.dep.bin'
W firmwared_log: modprobe: FATAL: Module ifb not found in directory /lib/modules/4.4.0-140-generic
E firmwared_instances: invoke_net_helper config returned -125
[Msg] Instance risible_stephanie[b6f267d639d45108d940785c5b22ade587b6f288] started
[Msg] All drones instantiated
W firmwared_log: Cannot find device "fd_veth0"
[.....]
[Dbg] [PompPool.cc:87] seqnum '4294967295' not found in pomp pool
[Dbg] [Firmwared.cc:183] OnInstanceDead
[Dbg] [PompPool.hh:173] PompRequest 4294967295 not found
[This is the end of the log, sphinx-server is still running.]

I will let you know about the progress.

Jeremy.

What have you done to get this far? I’m getting the same error as in your last post. apparmor_init appears not to be implemented and apparently AppArmor isn’t running.
I've tried running AppArmor, but I can't find a way to start it since systemd is not available. Unlike on my Manjaro system (which is the host for the docker container), "/lib/apparmor/apparmor.systemd" does not exist in the container and so cannot be run.

Regards
Timo

As far as I remember, the AppArmor issue was solved by running:

mount -tsecurityfs securityfs /sys/kernel/security

and installing additional packages within the Dockerfile:

    RUN apt-get update && apt-get install -y \
        apparmor \
        apparmor-profiles \
        apparmor-utils

Check both the Dockerfile and the run.sh script; part of the magic is in the bash script run.sh:

docker/headless-nvidia-parrot-sphinx-host-display/run.sh at master · jeremyfix/docker · GitHub

The errors and warnings are now suppressed by additionally installing python and mounting the volume "-v /lib/modules:/lib/modules" at runtime within the Makefile.
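For clarity, the extra mount looks like this in the docker run invocation (a sketch; the :ro flag is my addition, since the container only needs to read the host modules):

    docker run ... \
        -v /lib/modules:/lib/modules:ro `# let modprobe find the host kernel modules (ifb)` \
        ...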

I suppose it only remains to expose a particular port (4444 ?) and then it is done?!

The errors are gone, but well… I have the feeling that this is less and less in the container spirit, with several host-dependent mounted volumes.

Jeremy.

It is really interesting what you've accomplished, @jeremyfix @ndessart!
I didn’t follow the container approach because it was cumbersome; I did it in a VM and it works just fine.

Well yeah, as @ndessart pointed out at the beginning, Sphinx has heavy dependencies on the host system: the need for a suitable FS, a manager for running firmwared, and dedicated graphics.

Nevertheless, I think the point of embedding such a system inside a container is that the whole dev environment surrounding Sphinx is hard to set up without prior Gazebo-related knowledge. Therefore, I believe this image could be an initial step toward a fully-packaged dev environment for Parrot Sphinx.

I think this is kind of working, but there are still some unsolved issues, which I report below.

1- There are still some errors with sphinx-server

On the host on which I'm running the docker container, I still get some errors with sphinx-server:

[Err] [Socket.cc:174] Socket 65 hung up
pomp: recvmsg(fd=64) err=104(Connection reset by peer)
[Err] [Socket.cc:174] Socket 65 hung up
E pomp: recvmsg(fd=64) err=104(Connection reset by peer)

I found other threads on the forum linking this to the NVIDIA driver. I thought the NVIDIA drivers were correctly used (as seen from glxinfo, and from the volume /usr/lib/nvidia-410 being mounted from the host). There may be something else.
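For reference, this is the glxinfo check I mean; if the host driver libraries are picked up correctly, the renderer string names the GPU rather than a software rasterizer:

    # run inside the container, with DISPLAY set
    glxinfo | grep -i "opengl renderer"    # should name the NVIDIA GPU, not llvmpipe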

2- The simulation seems to be running and images are published

On the other hand, I did probe the simulation by logging into the container:

fix_jer@host:$ docker exec -it friendly_poincare /bin/bash

root@sh15:$ gz stats 
Factor[0.20] SimTime[118.91] RealTime[484.23] Paused[F]
Factor[0.23] SimTime[118.98] RealTime[484.51] Paused[F]
Factor[0.23] SimTime[119.04] RealTime[484.77] Paused[F]
Factor[0.23] SimTime[119.10] RealTime[485.04] Paused[F]
Factor[0.24] SimTime[119.18] RealTime[485.31] Paused[F]
^C

root@sh15:$ gz topic --hz /gazebo/default/bebop2/body/horizontal_camera/image                                                        
Hz:   3.90
Hz:   3.66

I did not try to grab the image from within the container, though.

3- How would I connect from a client different from the docker host?

My last point is finding a way to connect to the simulated drone from outside the host.

On my host, I do see some new interfaces (enp4s0 being the physical one).

fix_jer@host:$ 
docker0   Link encap:Ethernet  HWaddr.....
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0

enp4s0    Link encap:Ethernet  HWaddr ........
          inet addr:192.168.10.79  Bcast:192.168.10.255  Mask:255.255.255.0

fd_veth0  Link encap:Ethernet  HWaddr .....  
          inet addr:10.202.0.254  Bcast:0.0.0.0  Mask:255.255.255.0

I can ping the drone, the IP supposedly being:

fix_jer@host$ ping 10.202.0.1
PING 10.202.0.1 (10.202.0.1) 56(84) bytes of data.
64 bytes from 10.202.0.1: icmp_seq=1 ttl=64 time=0.059 ms

Now, given a client different from the host running the container, I'm not sure how I should locally route traffic from enp4s0 to fd_veth0.

Jeremy.

This is a symptom of a simulated firmware process crash. It may indeed be caused by a GPU driver issue. Please try disabling the simulated drone front camera.
See: https://developer.parrot.com/docs/sphinx/drone-requirements.html#using-the-front-camera.
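For example, with the command line used earlier in this thread (the with_front_cam parameter shows up in the .sdf params dumped by sphinx-server, so this should be the right knob):

    # disable the simulated front camera on the bebop2 drone
    sphinx-server /opt/parrot-sphinx/usr/share/sphinx/worlds/outdoor_1.world \
        /opt/parrot-sphinx/usr/share/sphinx/drones/bebop2.drone::with_front_cam=false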

On Ubuntu 18.04 I currently have the nvidia-390 GPU driver version from the official Ubuntu repository. Could you try with this driver instead?

This is consistent with a simulated firmware crash. Sphinx is running, the simulated drone instance/container is created and you can ping the simulated drone at 10.202.0.1 but that is pretty much all you can do with it.

Since you seem to be using docker --net=host (the fd_veth0 interface shows up on your host), one solution could be to perform NAT on your host system:

  1. Allocate a new IP address on your physical host Ethernet interface (enp4s0), either statically or with the help of your DHCP client. (With dhclient, have a look at man dhclient.conf > pseudo interface and then man dhclient-script.)
  2. Create the following iptables rules to reroute this new IP address on your local network to the simulated drone IP address:
    # ETH_SECONDARY_IP_ADDR=192.168.1.XXX (your secondary IP address on your local network)
    iptables -w -t nat -A PREROUTING -d $ETH_SECONDARY_IP_ADDR -j DNAT --to-destination 10.202.0.1
    iptables -w -t nat -A POSTROUTING -d 10.202.0.1 -j MASQUERADE
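One extra step these rules may need, depending on your host configuration (an assumption on my part; skip it if forwarding is already enabled):

    # enable IP forwarding so the DNAT'ed traffic is routed from enp4s0 to fd_veth0
    sysctl -w net.ipv4.ip_forward=1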
    

Update: alternatively you can use the sphinx port forwarding option.
See: https://developer.parrot.com/docs/sphinx/connectdrone.html#virtual-ethernet


Thank you for this detailed information.

I can bring in additional information regarding the "socket hung up" error. I am still using the nvidia-410 driver (this is actually the host driver; it is unclear to me how I could use a different driver version within the container (maybe nvidia-driver), and I have not yet tried to downgrade the host's driver).

In short:

  • sphinx works with anafi4k and front_cam: no errors are raised by sphinx, and the horizontal_camera topic gets published at 30 Hz; here are the logs
  • sphinx fails with bebop2: the socket hung up error is raised by sphinx-server even if the front_cam is disabled; here are the logs

I'll try to look at the firmwared logs.

Jeremy.

What is the distribution running on the host? What is the distribution running inside the container? For both, I would recommend using Ubuntu 16.04, because that is what we currently use in our CI environment. I know that would probably defeat the reason you are using docker in the first place, but it would at least confirm my suspicions…

From what you are saying, I guess that the NVIDIA driver libraries you are using from your host are binary incompatible with the simulated firmware's libc. This binary compatibility issue is the reason why Sphinx is so restrictive in its Linux distribution requirements. You seem to be using a very recent distribution release (or even a rolling distribution) or an NVIDIA driver PPA.

The binary incompatibility comes from the simulated firmware itself. It's possible that the bebop2 firmware you are using is incompatible with your host system's libc, while the more recent Anafi firmware is still binary compatible (because it has been built with a more recent libc).

We would really like to have a less fragile dependency (host driver libraries and libc), but I don't think we want to (or even can) distribute/install the right NVIDIA driver kernel module and libraries compatible with each simulated firmware.

Both the host and the docker container have:

 cat /etc/lsb-release
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=16.04
DISTRIB_CODENAME=xenial
DISTRIB_DESCRIPTION="Ubuntu 16.04.5 LTS"

Maybe you can tell me which libc version you are using? I could then maybe downgrade the one installed in the docker image. In the docker image I freshly created:

root@dockerimage:/gazebojscam# dpkg -l | grep libc
ii  libc-bin                               2.23-0ubuntu10                        amd64        GNU C Library: Binaries
ii  libc-dev-bin                           2.23-0ubuntu11                        amd64        GNU C Library: Development binaries
ii  libc6:amd64                            2.23-0ubuntu11                        amd64        GNU C Library: Shared libraries
ii  libc6:i386                             2.23-0ubuntu11                        i386         GNU C Library: Shared libraries
ii  libc6-dev:amd64                        2.23-0ubuntu11                        amd64        GNU C Library: Development Libraries and Header Files

(I'm actually trying to launch sphinx within a container to avoid cluttering the system with numerous dependencies, and I was initially expecting not to be too dependent on the host version, host volumes, etc…; for example, still being able to run sphinx even if the host were upgraded to, say, 18.04. But for now I would still be happy running the 16.04 container on the 16.04 host.)

By the way, I was able to grab an image with the make run-anafi4k command issued from the scripts at [1], from the /gazebo/default/anafi4k/gimbal_2/horizontal_camera/image topic (I probably took an unnecessarily long route to reach that goal, installing gazebojs on the way… there might be a simpler path? :slight_smile: ). I suppose that confirms it works with anafi4k.

The following was displayed in the sphinx-server tab.

[Msg] Selected params to build .sdf file for anafi4k:
        param low_gpu = 0
        param product_pro = 0
        param sdcard_serial = __undefined__
        param simple_front_cam = 1
        param with_front_cam = 1
        param with_gimbal = 1
        param with_kalamos = 0
        param with_tof = false

[1] docker/headless-nvidia-parrot-sphinx-host-display at master · jeremyfix/docker · GitHub

OK, you are running an Ubuntu 16.04 container on a 16.04 host and, except for a 4.4.0-34 Linux kernel that you probably are not using, you could not be closer to our CI environment. So this does not seem to be a libc-related issue after all.

There should be another explanation.

This doesn’t mean that anafi is running without any issue. Have you tried to connect to a simulated anafi with FreeFlight6 or Olympe? Do you get any video stream from the drone?

I've just finished reading this topic again in case I had missed something, and…

The real time factor seems to be a little off. With a GTX 1080 Ti you should really be in the range [0.60; 0.80] for the real time factor when the 4K front camera is activated (with_front_cam=True::simple_front_cam=False). If the simple front cam is activated (lower resolution, simplified stabilization, with_front_cam=True::simple_front_cam=True), the real time factor should be in the range [0.80; 1.0].

Could you try installing sphinx on your 16.04 host to rule out any driver installation issue in the docker container?

I installed and ran sphinx-server on the host. I did not try to grab any image, just started sphinx-server and checked gz stats. Indeed, it works better on the 16.04 host than within the container running on the same host. The host has a 1080 Ti.

Short logs:

  • For bebop2:
sh15:~:fix_jer$ DISPLAY=:99 sphinx-server /opt/parrot-sphinx/usr/share/sphinx/worlds/outdoor_1.world /opt/parrot-sphinx/usr/share/sphinx/drones/bebop2.drone     
Parrot-Sphinx simulator version 1.2.1
[....]
[Msg] Selected params to build .sdf file for bebop2:
        param flir_pos = tilted
        param kalamos_clip_far = 35
        param kalamos_clip_near = 1.5
        param low_gpu = 0
        param with_flir = 0
        param with_front_cam = 1
        param with_hd_battery = 0
        param with_kalamos = false
[....]
// There are no error messages.

sh15:~:fix_jer$ gz stats
Factor[0.59] SimTime[3.78] RealTime[17.51] Paused[F]
Factor[0.66] SimTime[7.68] RealTime[23.59] Paused[F]
Factor[0.66] SimTime[7.82] RealTime[23.79] Paused[F]
Factor[0.66] SimTime[7.95] RealTime[23.99] Paused[F]
  • For anafi4k:
sh15:~:fix_jer$ sphinx-server /opt/parrot-sphinx/usr/share/sphinx/worlds/outdoor_1.world /opt/parrot-sphinx/usr/share/sphinx/drones/anafi4k.drone::stolen_interface=
[...]
[Msg] Selected params to build .sdf file for anafi4k:
        param low_gpu = 0
        param product_pro = 0
        param sdcard_serial = __undefined__
        param simple_front_cam = 1
        param with_front_cam = 1
        param with_gimbal = 1
        param with_kalamos = 0
        param with_tof = false
[...]
// There are no error messages

sh15:~:fix_jer$ gz stats
Factor[0.93] SimTime[11.62] RealTime[16.44] Paused[F]
Factor[0.94] SimTime[11.81] RealTime[16.64] Paused[F]
Factor[0.93] SimTime[11.99] RealTime[16.84] Paused[F]
Factor[0.93] SimTime[12.18] RealTime[17.04] Paused[F]
Factor[0.93] SimTime[12.36] RealTime[17.24] Paused[F]
Factor[0.92] SimTime[12.55] RealTime[17.44] Paused[F]

I also add the stats for anafi4k with simple_front_cam=0:

sh15:~:fix_jer$ sphinx-server /opt/parrot-sphinx/usr/share/sphinx/worlds/outdoor_1.world /opt/parrot-sphinx/usr/share/sphinx/drones/anafi4k.drone::stolen_interface=::simple_front_cam=0
[...]
[Msg] Selected params to build .sdf file for anafi4k:
        param low_gpu = 0
        param product_pro = 0
        param sdcard_serial = __undefined__
        param simple_front_cam = 0
        param with_front_cam = 1
        param with_gimbal = 1
        param with_kalamos = 0
        param with_tof = false
[...]
sh15:~:fix_jer$ gz stats
Factor[0.21] SimTime[15.53] RealTime[73.26] Paused[F]