Some basic autonomous navigation of the Jumping Sumo


#1

Product: [Jumping Sumo]
Product version: [X.X.X]
SDK version: [N/A]
Use of libARController: [NO]
SDK platform: [N/A]
Reproducible with the official app: [N/A]

Hi, so I finally got around (about 3 or so months after I had something I could show) to writing up my experiments and tests with doing some “autonomous navigation” of the Jumping Sumo. In short, I have the JS send pictures to another computer over wifi, and that computer analyzes each picture and sends navigation commands back … see & please comment:

http://www.perlish.org/~pth/papers/
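
Roughly, the loop on the laptop side looks like this (a simplified illustration only; the helper names below are placeholders, not the actual code from the write-up):

```python
import time

def fetch_latest_picture():
    """Placeholder: grab the newest image the Sumo exposes over wifi."""
    ...

def analyse(picture):
    """Placeholder: image analysis, returns a steering decision."""
    ...

def send_navigation(decision):
    """Placeholder: send the chosen command back to the Sumo."""
    ...

while True:
    picture = fetch_latest_picture()
    decision = analyse(picture)
    send_navigation(decision)
    time.sleep(0.2)     # a few iterations per second
```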


#2

Interesting read.
Thanks for sharing!
A few remarks:

As you hint, it seems impossible to run a third-party program onboard:
the Parrot controller (Dragon) locks all the devices against other processes, even for reading.
This includes the camera, the drivers, the LEDs, and so on.

Concerning the HSV filtering, it seems your filtering is based only on Hue,
hence your difficulties with “background blues”.
You could perhaps have filtered further by also requiring high Saturation and Value.
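
For instance, with OpenCV the mask could constrain all three channels; the threshold numbers here are only illustrative and depend on your lighting:

```python
import cv2
import numpy as np

# Illustrative HSV mask for "strong" blues: the Hue band selects blue,
# while the minimum Saturation/Value reject washed-out background blues
# and dark shadows. The exact numbers must be tuned for your scene.
def blue_mask(bgr_image):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    lower = np.array([100,  80,  80])   # H in [100, 130] ~ blue on OpenCV's 0-179 scale
    upper = np.array([130, 255, 255])   # S, V >= 80 keep only saturated, bright pixels
    return cv2.inRange(hsv, lower, upper)
```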

Concerning the use of FTP, you could have spared yourself some pain by
relying on the real-time video stream instead, as long as you keep sending
commands regularly (a few times per second). On the other hand, the FTP
solution does not require video streaming, and so saves battery.
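
If you ever switch to the stream, a small background thread that keeps repeating the latest command is typically enough; a rough sketch (send_pcmd() here is only a placeholder for whatever function actually sends a piloting command in your setup):

```python
import threading
import time

def send_pcmd(speed, turn):
    """Placeholder: send one piloting command to the Sumo (transport-specific)."""
    pass

class CommandKeepAlive(threading.Thread):
    """Resend the latest (speed, turn) command a few times per second."""
    def __init__(self, rate_hz=5.0):
        super().__init__(daemon=True)
        self.period = 1.0 / rate_hz
        self.command = (0, 0)          # start stopped

    def set_command(self, speed, turn):
        self.command = (speed, turn)   # swap the whole tuple atomically

    def run(self):
        while True:
            send_pcmd(*self.command)
            time.sleep(self.period)
```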

Concerning the navigation itself, your strategy is straightforward:
three possible directions (left, straight, right),
of which the best is chosen by weighted voting.
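
For illustration, one way such a weighted vote could be written (just my guess at the idea, not your actual code): split the blue-pixel mask into three vertical bands and pick the band with the largest weighted amount of blue, weighting the bottom rows more since they are closer to the robot.

```python
import numpy as np

def vote_direction(mask):
    """Pick 'left', 'straight' or 'right' from a binary blue-pixel mask.

    Pixels lower in the image (closer to the robot) get a higher weight.
    Illustrative re-implementation only.
    """
    h, w = mask.shape
    row_weights = np.linspace(0.1, 1.0, h)[:, None]   # bottom rows count more
    weighted = (mask > 0) * row_weights
    thirds = np.array_split(weighted, 3, axis=1)      # left, centre, right bands
    scores = [band.sum() for band in thirds]
    return ("left", "straight", "right")[int(np.argmax(scores))]
```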

Most importantly, your whole pipeline seems to work! Congrats :slight_smile:

You could improve the reactivity by considering all the combinations of
(linear, angular) speed, which result in trajectories shaped as arcs of circles,
and finding the one that best matches the blue pixels.
This, however, is much more complicated (see the sketch after the list below),
as it requires:

  • the calibration of the speed commands in the metric space,
  • the reprojection of these arcs into the camera frame.
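
In code, the idea could look roughly like this. The arc model is the standard unicycle model (radius R = v/ω); the project() argument stands in for the calibrated camera model you would have to obtain, and the sampling horizon is arbitrary:

```python
import numpy as np

def arc_points(v, omega, horizon_s=1.0, n=20):
    """Sample the floor trajectory of a (linear v, angular omega) command.

    Returns n (x, y) points in the robot frame (x forward, y left), in metres.
    Requires the speed commands to be calibrated in metric units first.
    """
    t = np.linspace(0.0, horizon_s, n)
    if abs(omega) < 1e-6:                      # straight line
        return np.stack([v * t, np.zeros_like(t)], axis=1)
    R = v / omega                              # circle radius
    return np.stack([R * np.sin(omega * t),
                     R * (1.0 - np.cos(omega * t))], axis=1)

def score_arc(points_xy, project, blue_mask):
    """Count blue pixels hit by an arc once projected into the image.

    `project` maps robot-frame floor points to pixel (u, v); it encapsulates
    the camera intrinsics/extrinsics (placeholder, comes from calibration).
    """
    score = 0
    h, w = blue_mask.shape
    for x, y in points_xy:
        u, v_px = project(x, y)
        if 0 <= u < w and 0 <= v_px < h and blue_mask[int(v_px), int(u)]:
            score += 1
    return score

# Pick the best command among a few candidate (v, omega) pairs:
# candidates = [(0.5, w) for w in np.linspace(-1.0, 1.0, 9)]
# best = max(candidates, key=lambda c: score_arc(arc_points(*c), project, mask))
```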

I am also investigating the integration of the Sumo into more complex systems, and I have stumbled upon
problems similar to yours.
I confirm that adding an external camera makes the problem much easier:
you get 3D localization almost out of the box using ROS/OpenCV,
as long as there is visual contrast between your robot and the floor.
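
As a rough illustration of the “out of the box” part (the HSV bounds and the homography are setup-specific placeholders; the homography would come from e.g. cv2.findHomography() on four known floor points seen by the fixed external camera):

```python
import cv2
import numpy as np

# Placeholder homography from image pixels to floor coordinates (metres),
# computed once from four known floor points with cv2.findHomography().
H_IMG_TO_FLOOR = np.eye(3)

def locate_robot(bgr_frame, lower_hsv, upper_hsv):
    """Return the robot's (x, y) floor position from one external-camera frame.

    Works as long as the robot's colour contrasts with the floor; the HSV
    bounds and the homography above are setup-specific placeholders.
    """
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lower_hsv), np.array(upper_hsv))
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None                            # robot not visible in this frame
    u, v = m["m10"] / m["m00"], m["m01"] / m["m00"]   # blob centroid in pixels
    pt = cv2.perspectiveTransform(np.array([[[u, v]]], np.float32), H_IMG_TO_FLOOR)
    return float(pt[0, 0, 0]), float(pt[0, 0, 1])
```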
If you are curious (functional but undocumented, work in progress):

Cheers,