Calibrate camera


#1

Hi everybody,
I would like to use rtabmap with the Parrot SLAM Dunk, but first I need to calibrate my camera.
I want to calibrate with the “camera_calibration” ROS package, but it tries to access the “set_camera_info” service, which doesn’t exist. Do you have any ideas?
Thank you
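One possible workaround: the `cameracalibrator.py` node from the `camera_calibration` package accepts a `--no-service-check` flag that skips the check for the `set_camera_info` service; the calibration is then saved locally instead of being pushed to the driver. A sketch, where the topic names, checkerboard size, and square size are illustrative and would need to match your setup:

```shell
# Skip the set_camera_info service check; the result is saved to
# /tmp/calibrationdata.tar.gz instead of being sent to the driver.
# Topic names and checkerboard parameters below are illustrative.
rosrun camera_calibration cameracalibrator.py \
    --no-service-check \
    --size 8x6 --square 0.108 \
    left:=/left_rgb/image_raw right:=/right_rgb/image_raw \
    left_camera:=/left_rgb right_camera:=/right_rgb
```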


#2

Hi Genjou,
I don’t know how you could do that, but I do know I’m using rtabmap without calibrating my camera.
I don’t use rtabmap’s visual odometry; I feed it the tf computed by the slamdunk along with the RGB and depth topics.


#3

Thank you for your answer.
I would like to use “stereo mapping” with rtabmap_ros. To begin, I need to use stereo_image_proc, which provides the two rectified camera images (the SLAM dunk only provides left_rgb_rect).
For that, I need to calibrate my cameras. I tried the “camera_calibration” package, but the SLAM dunk doesn’t provide the “set_camera_info” service. Do you have any ideas?
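For reference, stereo_image_proc expects the left/right image_raw and camera_info topics under a common namespace, and publishes the rectified pair plus disparity. A command sketch, where the `stereo` namespace is illustrative:

```shell
# Run the stereo pipeline; it subscribes to
#   /stereo/left/image_raw, /stereo/left/camera_info,
#   /stereo/right/image_raw, /stereo/right/camera_info
# and publishes image_rect, image_rect_color and disparity.
ROS_NAMESPACE=stereo rosrun stereo_image_proc stereo_image_proc
```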
Genjou


#4

I did try to use stereo with rtabmap… then I found out about this and quickly returned to using RGBD…

There is one file containing the calibration information for the two cameras in /etc/kalamos, as described in the doc:
http://developer.parrot.com/docs/slamdunk/#stereo-calibration
It might be enough info to do the job.
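If that file gives you the intrinsics and the baseline, you could in principle build the camera_info matrices that stereo_image_proc expects yourself. A minimal sketch in plain Python; the intrinsic values, baseline, and the assumption that the cameras are already rectified are all illustrative, and the actual /etc/kalamos file format may differ:

```python
# Sketch: build (K, D, R, P) matrices for a rectified stereo pair,
# as used in sensor_msgs/CameraInfo. The fx/fy/cx/cy and baseline
# values below are placeholders, not real SLAM Dunk calibration.

def make_camera_info(fx, fy, cx, cy, baseline=0.0):
    """Return (K, D, R, P) row-major lists for a rectified camera.
    For the right camera of a pair, P's Tx term is -fx * baseline."""
    K = [fx, 0.0, cx,
         0.0, fy, cy,
         0.0, 0.0, 1.0]
    D = [0.0] * 5                      # already rectified: no distortion
    R = [1.0, 0.0, 0.0,                # identity rectification
         0.0, 1.0, 0.0,
         0.0, 0.0, 1.0]
    P = [fx, 0.0, cx, -fx * baseline,  # projection matrix (3x4)
         0.0, fy, cy, 0.0,
         0.0, 0.0, 1.0, 0.0]
    return K, D, R, P

left = make_camera_info(425.0, 425.0, 320.0, 240.0)
right = make_camera_info(425.0, 425.0, 320.0, 240.0, baseline=0.20)
print(right[3][3])  # Tx term of right P matrix: -fx * baseline = -85.0
```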

But I’m not sure it’s a great idea to perform stereo processing, since there is no way at the moment to stop the slamdunk from doing it itself: it’s active as soon as you access one of the sensors (including the cameras). So I believe it might be better to use the processing that is done anyway.


#5

Hi,
Have you successfully calibrated your Parrot camera?
I have the same problem as you. I also want to use rtabmap. Did you manage it?
Best regards!


#6

Hi,
I have not calibrated my camera and I don’t believe @Genjou did either.

But you don’t have to in order to use rtabmap; I managed it, and so did @Genjou and @Nico.
The way we did it was to use the map transform published on the /tf topic when the “SLAM” mode is active (i.e. when one of the topics like /pose, /quality or /keyframe is subscribed to).
You can tell rtabmap not to use its own visual odometry but to use odometry supplied by an external source, and you can tell it to listen to a particular tf frame instead of an odom topic.

With this setup you don’t have to calibrate your cameras since you don’t have to use the stereo odometry module.
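The setup described above might look roughly like this as a launch-file fragment. The frame names and topic remappings are assumptions about the slamdunk's output and would need to match what it actually publishes:

```xml
<launch>
  <node pkg="rtabmap_ros" type="rtabmap" name="rtabmap" output="screen">
    <param name="frame_id" value="base_link"/>
    <!-- Use odometry from TF (published by the slamdunk)
         instead of subscribing to an odom topic -->
    <param name="odom_frame_id" value="odom"/>
    <param name="subscribe_depth" value="true"/>
    <!-- Topic names below are illustrative slamdunk topics -->
    <remap from="rgb/image"       to="/left_rgb_rect/image_rect_color"/>
    <remap from="depth/image"     to="/depth_map/image"/>
    <remap from="rgb/camera_info" to="/left_rgb_rect/camera_info"/>
  </node>
</launch>
```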

By the way, you can also compute the visual odometry using the RGBD data from the slamdunk cameras, and that way you don’t have to calibrate either.