Sphinx simulator minimum/recommended hardware requirements

If I read here System requirements - 2.8.2 about the minimum/recommended hardware requirements for the Sphinx simulator, then I’m about to do a Homer and say “D’oh…”

Do you have links to some sample machines that would be sufficient? Would it be possible to install the simulator on a cloud instance and make it available to remote clients?

This statement is also nice:

Note
You must use up-to-date Nvidia proprietary drivers instead of the generic drivers of your distribution.

Well, at least you can say to yourself: We have documented it. The meaning is nebulous…

Hi

Any desktop PC with an Nvidia RTX 2070 and a decent CPU (Intel i7/i9, AMD Ryzen 5, …) should work fine. Giving you a direct link is not possible (we would have to test that particular reference, ensure that it’s available in your country, and maintain this topic to ensure that the link does not go dead…).

No, it’s currently not possible to install Sphinx on a cloud instance. The simulator relies on 3D hardware acceleration that is currently only available on Linux (with the appropriate drivers…) with consumer gaming graphics cards (Nvidia GeForce RTX specifically).
Installing Sphinx in a virtual machine running on a desktop PC is not possible either, because Nvidia drivers don’t support GPU passthrough for gaming graphics cards (it’s available for professional GPUs only…).

Cool. Very interesting.

I was wondering whether it would be possible to install it on a Jetson AGX, but since those boxes are insanely expensive nowadays, I put that idea aside. Have you heard of such a setup already?

I have a simple additional question. I have tried, to no avail, to expose the Sphinx installation on a Linux notebook to others on the same network. Sphinx is available locally, on the machine on which it is installed, at 10.202.0.254, and I can reach the simulated drone at 10.202.0.1. But I would like to reach it from another machine on another network (which is also reachable from the Linux box). I know this is not directly related to either Olympe or Sphinx, but my Python scripts need to run on a Raspberry Pi for several reasons, and the simulator could be anywhere on the same network.

Sphinx is available for desktop Linux PCs (x64 CPUs).
Nvidia Jetson boards run on an aarch64 CPU (ARMv8, 64-bit), and Sphinx does not support this CPU architecture.

Right, I overlooked that…

Regarding the “bridging”, or rather “forwarding”, I would like to ask for clarification:

In this document About local controllers - 2.8.2 you state:

Parrot Sphinx creates a virtual Ethernet interface on the host side as well as in the simulated drone. On the host side, the interface is generally called fd_veth0 and has the IP address 10.202.0.254. On the drone side, it is called eth1 and gets the IP address 10.202.0.1.

To use the virtual Ethernet link, the controller application should run on the host so that the local interface is accessible. If you really need to run the controller application from another machine belonging to the same IP network, you need to activate the port forwarding mode, by using the following machine parameter.

$ sphinx <my.drone>::remote_ctrl_ip=<remote_ip_addr>

And furthermore, here: Configuration at launch time - 2.8.2

remote_ctrl_ip

If remote_ctrl_ip is left blank (default), the port forwarding mode is not activated. Otherwise, it expects a valid IPv4 address, pointing to the host that is going to run the controller application.
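Since a malformed value here can lead to confusing startup failures, it may be worth sanity-checking the address before passing it to Sphinx. A minimal sketch using Python’s standard `ipaddress` module (the helper name `check_remote_ctrl_ip` is my own, not part of Sphinx):

```python
import ipaddress


def check_remote_ctrl_ip(value: str) -> bool:
    """Return True if value is a syntactically valid IPv4 address,
    as expected by Sphinx's remote_ctrl_ip machine parameter."""
    try:
        ipaddress.IPv4Address(value)
        return True
    except ipaddress.AddressValueError:
        return False


print(check_remote_ctrl_ip("192.168.188.27"))  # True: well-formed IPv4 address
print(check_remote_ctrl_ip("192.168.188"))     # False: too few octets
```

Note that this only checks syntax; whether the address actually points at the machine that will run the controller application is a separate question.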

From the first statement I assume that “controller application” means the application using the Olympe Python SDK that tries to connect to 10.202.0.1. Is that correct?

With this assumption in mind, I started Sphinx on my Linux notebook like so, providing the IP of another Ethernet interface:

sphinx /opt/parrot-sphinx/usr/share/sphinx/drones/anafi4k.drone::stolen_interface="" /opt/parrot-sphinx/usr/share/sphinx/drones/anafi4k.drone::remote_ctrl_ip=192.168.188.27

in the hope of now being able to run a “controller app” on my Pi and connect to the simulated drone, somehow magically, via the 192.168.188.27 IP.

The result was a repeating log entry:


[Err] [Iio.cc:602] device ultrasound already created!!
[Err] [IioObject.cc:30] EXCEPTION: Cannot attach iio object

[Err] [IioObject.cc:30] EXCEPTION: Cannot attach iio object

[Err] [Model.cc:806] Sensors failed to initialize when loading model[anafi4k] via the factory mechanism.Plugins for the model will not be loaded.
[Msg] CleanupInstances
[Msg] Instance anafi4k[4a19f51daa2791fa8481542908bb795db3ce742d] dropped
[Msg] Instance anafi4k[92477b6084d7daa7fa33e845ee8b45eea5f2be3d] dropped
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized
[Err] [ConnectionManager.cc:555] Not initialized

which finally ends up in an unhandled exception and a crash:

[Err] [Node.cc:106] No namespace found
[Err] [ConnectionManager.cc:604] ConnectionManager is not initialized
[Err] [ObstacleDetectorPlugin.cc:42] EXCEPTION: ObstacleDetector sensor parent is not a Link.

terminate called after throwing an instance of 'gazebo::common::Exception'
*** Aborted
Register dump:

 RAX: 0000000000000000   RBX: 00007f53b124f3c0   RCX: 00007f53b0416e87
 RDX: 0000000000000000   RSI: 00007ffcfe7038b0   RDI: 0000000000000002
 RBP: 00007f53b07c4840   R8 : 0000000000000000   R9 : 00007ffcfe7038b0
 R10: 0000000000000008   R11: 0000000000000246   R12: 000055bb0cc52600
 R13: 00007ffcfe703c60   R14: 0000000000000002   R15: 000055bb0c9c87a0
 RSP: 00007ffcfe7038b0

 RIP: 00007f53b0416e87   EFLAGS: 00000246

 CS: 0033   FS: 0000   GS: 0000

 Trap: 00000000   Error: 00000000   OldMask: 00000000   CR2: 00000000

Where is the error?

Full log, taken with --log-level=dbg:

log.txt.pdf (98.4 KB)

If I configure the remote_ctrl_ip in /opt/parrot-sphinx/usr/share/sphinx/drones/anafi4k.drone instead, then there is no crash. But now I need to figure out how to connect from remote. Which port is it?

The remote_ctrl_ip at least seems to be recognized:

[Dbg] [Machine.cc:1641] CreateCommandLineboxinit cmdline = LD_PRELOAD=librt.so:libc.so.6:libpthread.so.0:libstdc++.so.6 /sbin/boxinit ro.boot.console=/pts/3 ro.hardware=anafi4k ro.debuggable=0 ro.revision=0 ro.debug_ipaddr=10.202.0.1 wifid.bridge=0 ro.simulator.kernel.version=5.4.0-113-generic ro.simulator.kernel.build=#127~18.04.1-Ubuntu_SMP_Wed_May_18_15:40:23_UTC_2022 ro.simulator.host.arch=x86_64 ro.simulator.host.ip0=192.168.188.27 ro.simulator.host.mac0=00:e0:4c:68:02:13 ro.simulator.host.ip1=192.168.188.99 ro.simulator.host.mac1=74:e5:f9:18:e8:2c ro.simulator.host.ip2=172.17.0.1 ro.simulator.host.mac2=02:42:f6:96:be:7c ro.simulator.gpu.hardware=UHD_Graphics_620 ro.simulator.gpu.driver=driver=i915_latency=0 ro.simulator.gpu.nvidia.info=UNDEFINED ro.simulator.host.name=ubuntu ro.simulator.os.version=Ubuntu_18.04.6_LTS ro.simulator.sphinx.version=1.8 ro.with-front-cam=1 ro.simple-front-cam=1 ro.gimbald=1 ro.storage.sphinx_pattern=undefined persist.product.pro=0 ro.flir=-1
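Incidentally, the installed version can be read straight out of that boxinit command line (note the `ro.simulator.sphinx.version=1.8` property). A small sketch that pulls the key=value properties from such a log line; the function name and parsing logic are my own, not part of Sphinx:

```python
def parse_boxinit_props(cmdline: str) -> dict:
    """Extract key=value properties from a Sphinx boxinit command line."""
    props = {}
    for token in cmdline.split():
        if "=" in token:
            # Split on the first '=' only; some values contain '=' themselves.
            key, _, value = token.partition("=")
            props[key] = value
    return props


# Abbreviated version of the log line above.
line = ("/sbin/boxinit ro.hardware=anafi4k ro.debug_ipaddr=10.202.0.1 "
        "ro.simulator.host.ip0=192.168.188.27 ro.simulator.sphinx.version=1.8")
props = parse_boxinit_props(line)
print(props["ro.simulator.sphinx.version"])  # 1.8
```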

It seems that you are using Parrot Sphinx 1.8 instead of 2.8.2. The documentation at About local controllers - 2.8.2 is for 2.8.2.

Oh, really? I was following the Sphinx installation process documented here (my OS is Ubuntu 18.04):

https://developer.parrot.com/docs/sphinx/installation.html

But you are right, the version printed is 1.8…

Strange…

EDIT: Double checked. I think I was NOT following this procedure, since I don’t have the required credentials.

I got a gist from a colleague which worked without the credentials… That might explain it.

To be postponed 🙂

Quick question: Is 2.8 available for 18.04?

Sphinx 2.8.2 is available on 18.04.
You probably still have the old plf.parrot.com server in your list of apt sources:

grep -rn plf.parrot.com /etc/apt/sources.list.d
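The cleanup then amounts to deleting whichever `.list` files that grep reports and re-running `apt update`. A sketch of the mechanics on a throwaway directory so nothing system-wide is touched — the file names here are made up, and on a real system you would operate on /etc/apt/sources.list.d itself, prefixing `rm` with `sudo`:

```shell
# Simulate /etc/apt/sources.list.d in a temp dir with one stale and one valid entry.
srcdir=$(mktemp -d)
echo "deb http://plf.parrot.com/sphinx/binary bionic main" > "$srcdir/sphinx.list"
echo "deb http://archive.ubuntu.com/ubuntu bionic main"    > "$srcdir/ubuntu.list"

# Delete every list file that still references the old plf.parrot.com server.
grep -l plf.parrot.com "$srcdir"/*.list | xargs -r rm -f

ls "$srcdir"   # only ubuntu.list remains
```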

Exactly

I think I (we) have to apply for credentials and run the installation again (after a purge). I’m just interested to see whether I can make it run remotely on an “ordinary” i5 8-core Linux box before going out and purchasing a gaming PC.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.