Reinforcement Learning to control a drone

Hi,

I am currently working on implementing RL algorithms on the Parrot drone by training it in Sphinx and deploying it on the physical drone. We are using Olympe as the Python interface to command it ('PCMD') and to get position feedback ('PositionChanged'). However, training is quite slow compared to standard RL robotics problems. I would like to know if someone has worked on a similar problem and has a better solution for interfacing the drone control and feedback.
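
For reference, this is roughly the command/feedback loop I have, minus the RL logic (a minimal sketch, assuming the default Sphinx drone IP 10.202.0.1):

```python
import olympe
from olympe.messages.ardrone3.Piloting import TakeOff, PCMD, Landing
from olympe.messages.ardrone3.PilotingState import FlyingStateChanged, PositionChanged

DRONE_IP = "10.202.0.1"  # default IP of the simulated drone in Sphinx

drone = olympe.Drone(DRONE_IP)
drone.connect()
drone(TakeOff() >> FlyingStateChanged(state="hovering", _timeout=10)).wait()

# One piloting command: flag=1 makes roll/pitch active; the four axis
# values are percentages of the configured maximums, in [-100, 100].
drone(PCMD(1, 0, 20, 0, 0, timestampAndSeqNum=0))

# Last received position (simulated GPS frame in Sphinx).
pos = drone.get_state(PositionChanged)
print(pos["latitude"], pos["longitude"], pos["altitude"])

drone(Landing()).wait()
drone.disconnect()
```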
Would subscribing to the Sphinx Gazebo topics through ROS be a faster interface, since I would be communicating directly with the simulation? The Olympe commands seem to take longer.
(I am not familiar with ROS and am hoping for a solution that does not require it.)

I haven’t found many topics using RL for the Parrot drone and would like to start a discussion.

Thanks!


Hi @gargivaidya,

I am wondering what real-time factor your Gazebo is getting while running the simulation? You can see it at the bottom of the Gazebo screen. Mine is currently below 1, meaning that my simulation is actually running slower than real time. This could be the reason it is taking so long. I am not sure if Sphinx allows real-time factors greater than 1.
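
If you want to log the factor over a whole training run instead of watching the screen, gazebo-classic ships a `gz stats` tool that prints it; here is a rough sketch (assuming `gz` is on your PATH; the exact output format may vary between Gazebo versions):

```python
import subprocess

# Stream Gazebo's world statistics and print the real-time factor lines.
# gazebo-classic's `gz stats` prints lines like:
#   Factor[0.98] SimTime[16.47] RealTime[17.61] Paused[F]
proc = subprocess.Popen(["gz", "stats"], stdout=subprocess.PIPE, text=True)
try:
    for _ in range(20):
        line = proc.stdout.readline()
        if "Factor" in line:
            print(line.strip())
finally:
    proc.terminate()
```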

Hi @Watchdog101,

Yes, the real-time factor has always been around 0.98, so the simulation does seem to take about as much time as a physical drone would. That parity will definitely help when we transfer our algorithm onto the physical drone. However, if the training simulations were faster, meaning each moveBy command took fewer milliseconds to execute, I would be able to train the drone for on the order of 10e6 timesteps within a few hours.
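
For context, each RL timestep in my setup looks roughly like the sketch below, which is why the per-command latency dominates training time. This is simplified (the discrete 4-action mapping and the `step_m` parameter are just for illustration; reward and episode-termination logic are omitted):

```python
from olympe.messages.ardrone3.Piloting import moveBy
from olympe.messages.ardrone3.PilotingState import FlyingStateChanged, PositionChanged

def step(drone, action, step_m=0.5):
    """One RL timestep: map a discrete action to a relative moveBy.

    The call blocks until the drone reports "hovering" again, which is
    where most of the wall-clock time per step goes.
    """
    dx, dy = [(step_m, 0.0), (-step_m, 0.0), (0.0, step_m), (0.0, -step_m)][action]
    drone(
        moveBy(dx, dy, 0.0, 0.0)
        >> FlyingStateChanged(state="hovering", _timeout=10)
    ).wait()
    pos = drone.get_state(PositionChanged)
    obs = (pos["latitude"], pos["longitude"], pos["altitude"])
    return obs
```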