Reading models poses into ROS

Hello there,

I am using Sphinx to record the Anafi / Bebop 2's trajectory while it interacts with pedestrians. To this end, I want to record the pose of each model (drone + pedestrians) at each moment.

As far as I know, this can be done with the tlm-data-logger tool, which makes use of the Omniscient plugin. Even so, I want to pair this logging with a ROS system, as I'm also using bebop_autonomy.

Therefore, what could I do to pipe the logging info into ROS?

Thanks in advance!

espetro

Update: I think I found two possible answers.

  1. The less comfortable option is to call tlm-data-logger from the C/Python system API and parse its output, as explained here.
  2. The easier one is to pipe its output to a ROS topic using rostopic pub, as done here.
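The first option could be sketched roughly as follows. Note that the exact output format of tlm-data-logger and the `inet:127.0.0.1:9060` address are assumptions here (the regex and the command will likely need adjusting for your setup), and the ROS publishing side is only indicated in a comment:

```python
import re
import subprocess

# Assumed tlm-data-logger line format: "some.telemetry.key: <number>".
# Adjust this pattern to match the actual output of your Sphinx version.
LINE_RE = re.compile(r"^(?P<key>[\w.]+):\s*(?P<value>-?\d+(?:\.\d+)?(?:[eE][-+]?\d+)?)\s*$")

def parse_line(line):
    """Parse one logger output line into (key, float value), or None if it doesn't match."""
    m = LINE_RE.match(line.strip())
    if m:
        return m.group("key"), float(m.group("value"))
    return None

def stream_telemetry(cmd=("tlm-data-logger", "inet:127.0.0.1:9060")):
    """Run the logger as a subprocess and yield parsed (key, value) pairs."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        parsed = parse_line(line)
        if parsed:
            # In a ROS node you would publish here instead of yielding,
            # e.g. with a rospy.Publisher for each model's pose topic.
            yield parsed
```

Each yielded pair could then be mapped onto a pose message (e.g. `geometry_msgs/PoseStamped`) and published from a rospy node, which avoids the shell round-trip of option 2.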

If you found anything better, please post it!

Please also consider this one

Hi @espetro! I am able to access all the drone state variables. My colleagues and I, at the University of Sannio in Benevento, are working with Sphinx, in particular with the Parrot Bebop 2, on a complete software platform to analyze the drone's behavior in an environment very close to reality (much like Gazebo). The goal is to understand and fix possible problems in the implementation and/or the control algorithm before tests in the real world (such algorithms are commonly tested on simplified models using Matlab/Simulink).

You can find the code developed so far at https://github.com/gsilano/BebopS. The code involving Sphinx, which we are still working on, is in the dev/Sphinx branch. Please note that its features are not guaranteed.

Hello @giuseppe.silano,

Thanks for the reply :slight_smile: However, I am not interested in getting data about the drone; for that I can use Olympe or bebop_autonomy to send instructions to the Anafi / Bebop 2. What I am looking for is a way, when other models besides the drone are running in the same environment (e.g., pedestrians or cars), to process their data directly in a "comfortable" way, i.e., not as console output but in code (preferably Python).

Does BebopS allow reading a model's data when the model is not a drone?

No, BebopS has never been tested with multiple drones or robots in Gazebo, so I can't tell you how. Most likely, you would have to modify the environment by adding virtual odometry sensors. Perhaps someone else on the forum has already addressed this problem and can help you find a solution.