Fisheye correction


#21

I’ve got a couple more questions about the Parrot EXIF / fisheye spec %)
3. Question about the Invalid Pixel tag format. For example, RED images from the aforementioned Sequoia Sample Data have the values ‘99,168,99,168,299,183’. Is this a set of (row, column) pairs?
4. Question about focal length. It was stated above that f == 2C / pi.
I’ve also computed f as ‘FocalLength’ * ‘FocalPlaneXResolution’, and this result didn’t match the one from the f == 2C / pi formula.
How should I compute the focal length?


#22

It would be better if you put in the actual numbers. That would make things easier.


#23

Sure.
a) The ‘Fisheye Affine Matrix’ tag has the value 1558.553158612,0,0,1558.553158612, which corresponds to a focal length of (1558.553158612 * 2 / pi) == 992.2058.
b) ‘Focal Length’ and ‘Focal Plane X Resolution’ have the values ‘4.0 mm’ and ‘266.6666559’, so the focal length formula gives 1066.6667 (4 * 266.6666559).
So the difference is about 7.5%.
All values are in pixels; you can easily convert them to mm using, for example, the values in the FAQ.
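
In code form, the comparison is just this (a quick Python check, with the values copied from the metadata quoted above):

import math

C = 1558.553158612                    # first element of the Fisheye Affine Matrix tag
f_affine = 2 * C / math.pi            # ~992.21 px, via f = 2C / pi

focal_length_mm = 4.0                 # EXIF FocalLength
focal_plane_x_res = 266.6666559       # EXIF FocalPlaneXResolution (px per mm)
f_exif = focal_length_mm * focal_plane_x_res   # ~1066.67 px

print(abs(f_exif - f_affine) / f_affine)       # relative difference, ~7.5%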


#24

@nain, you are right. This doesn’t really make sense. I think it’s a better idea to stick directly with the values in the Fisheye Affine Matrix, because they might have been computed experimentally (by calibration).

It is quite possible that the difference we are getting is due to manufacturing tolerances, since every lens is far from perfect. So we should be using experimentally determined calibration values.

And I don’t see any direct use of f here, so maybe you can stop worrying about that. :slight_smile:


#25

Gentle bump. I hope clement.failet will take a look at this.


#26

Hi, I am trying to correct fisheye distortion using the OpenCV (3.3) fisheye undistortion functions:
https://docs.opencv.org/trunk/db/d58/group__calib3d__fisheye.html#ga0d37b45f780b32f63ed19c21aa9fd333

I calculated the camera matrix using the following values.

f = 2*C / pi (where C is a value in the “Fisheye Affine Matrix” tag)
(cx, cy) = (image_width/2, image_height/2)

And I used the “Fisheye Polynomial” values as the distortion coefficients.
However, the fisheye distortion was not removed from the image.
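
In Python, the camera matrix I build looks roughly like this (a minimal sketch; the 1280x960 image size and the C value are just example numbers from earlier in the thread):

import numpy as np

C = 1558.553158612                        # first value of the Fisheye Affine Matrix tag
image_width, image_height = 1280, 960     # assumed Sequoia single-band resolution

f = 2.0 * C / np.pi                       # f = 2C / pi, in pixels
cx, cy = image_width / 2.0, image_height / 2.0

K = np.array([[f, 0.0, cx],
              [0.0, f, cy],
              [0.0, 0.0, 1.0]])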

Does anyone know how to remove fisheye distortion using OpenCV?
Thank you.


#27

Hi,
OpenCV uses a distortion model different from the ‘correct’ one (from the accepted answer). You can implement the ‘correct’ logic yourself to compute the maps for cv::remap; this should work.
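
For example, a rough sketch of that approach in Python could look like the one below. It builds the undistortion maps from the Sequoia fisheye model discussed in this thread (theta = 2/pi * atan(r), rho = p0 + p1*theta + p2*theta^2 + p3*theta^3, then the affine matrix C, D, E, F); the file name and the polynomial values are placeholders, and the focal length of the ideal pinhole output is chosen here as f = 2C / pi.

import cv2
import numpy as np

img = cv2.imread("IMG_RED.TIF", cv2.IMREAD_UNCHANGED)       # placeholder file name
h, w = img.shape[:2]

# Sequoia metadata (placeholders -- read the real values from the XMP/EXIF tags)
p0, p1, p2, p3 = 0.0, 1.0, 0.0, 0.0                         # XMP FisheyePolynomial
C, D, E, F = 1558.553158612, 0.0, 0.0, 1558.553158612       # XMP FisheyeAffineMatrix
cx, cy = w / 2.0, h / 2.0                                   # principal point
f = 2.0 * C / np.pi                                         # pinhole focal length of the output

# For every pixel of the undistorted output, find where it lands in the distorted input.
u, v = np.meshgrid(np.arange(w, dtype=np.float32), np.arange(h, dtype=np.float32))
X = (u - cx) / f                                            # normalized pinhole coordinates (Z = 1)
Y = (v - cy) / f
r = np.sqrt(X**2 + Y**2)
theta = (2.0 / np.pi) * np.arctan(r)
rho = p0 + p1 * theta + p2 * theta**2 + p3 * theta**3
scale = np.where(r > 0, rho / np.maximum(r, 1e-12), 0.0)
Xh, Yh = X * scale, Y * scale
map_x = (C * Xh + D * Yh + cx).astype(np.float32)
map_y = (E * Xh + F * Yh + cy).astype(np.float32)

undistorted = cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)
cv2.imwrite("undistorted.png", undistorted)

For each output pixel, the forward distortion model gives the source location in the raw image, which is exactly what cv::remap expects.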


#28

@kodiak, I would second @nain here. OpenCV uses two types of distortion models:

  • projective
  • fisheye

But the fisheye model used in OpenCV is different from the one the Sequoia parameters are associated with, so OpenCV doesn’t work out of the box. However, you can implement the Sequoia fisheye model on top of OpenCV and, if possible, also contribute it to the OpenCV repo so other people can use it in the future.


#29

@muzammil360 and @nain, thank you very much for pointing me in the right direction.
I will try to implement it, although it seems difficult.
If I succeed, I will follow your (@muzammil360) advice!

Thank you.


#30

Hi everyone!
I have a question, @seanmcleod, about what you said:

I’m not an expert at all, so I’m a little bit confused about this. You said it’s correct to assume that Z’ = 1 m at any point of the image, but I think Z’ is in the camera coordinate system, right? Z’ is not Z, so it is not in the world coordinate system. Z’ is more like the “depth” of the real point with respect to our camera projection centre, isn’t it?

Thank you in advance.

Edited 1 day later:
Well, I suppose I didn’t understand those equations properly, because I’ve just corrected some Sequoia images and Z’ = 1 m works fine, as @seanmcleod said. I used OpenCV with cv::remap, as @muzammil360 and @nain proposed, and it undistorts the fisheye images correctly.
Just one more question: undistorting images like that assumes the image metadata is correct, and so are the focal length and principal point it reports. Are these parameters constant in the metadata, or do they change over time?


#31

@HgScId, all the camera intrinsic parameters are determined by the manufacturing process, so you can safely assume that they won’t change over time (unless you are developing something for NASA :slight_smile: ). Principal point, focal length and pixel size are all camera intrinsic parameters.


#32

Thank you for your reply, @muzammil360.
So I assume those metadata parameters are constant across photos, aren’t they? In a year’s time, the values in another photo will be exactly the same, and they won’t have been corrected for any deformation the lens may suffer over time, right?


#33

I am currently trying to implement this in Matlab according to the Pix4D model but am having some issues. Would anyone be willing to share their solution to double check (Python etc. fine too)?


#34

I’m also implementing this, but in Python. My results disagree with Pix4D’s. Would anyone be willing to share their code? Any language is fine.


#35

Hi,

can you share the code?

I’m trying to do it but I’m a little confused.

Thanks,

JP


#36

@HgScId, yes. The parameters are calculated by the manufacturer in the laboratory using some camera calibration technique. And I’m afraid these parameters don’t carry any information about possible deformations/changes over time.


#37

@exeLab, @SethMerickel this might be helpful

function PDistorted = GetProjectionOnDistortedSpace(P,DistCoff,AffineMat,CameraMatK,PPX,PPY)
% function PDistorted = GetProjectionOnDistortedSpace(P,DistCoff,AffineMat,CameraMatK,PPX,PPY)
% This function projects points from linear (undistorted) space to distorted space
% INPUTS
% P:                points in linear space (3xN homogeneous pixel coordinates)
% DistCoff:         fisheye distortion coefficients [p0 p1 p2 p3]
% AffineMat:        fisheye affine matrix, 2x2 [C D; E F]
% CameraMatK:       camera matrix K
% PPX:              principal point x-axis (pixel)
% PPY:              principal point y-axis (pixel)
%
% OUTPUTS
% PDistorted:       points projected onto distorted space (3xN homogeneous pixel coordinates)

p0 = DistCoff(1);
p1 = DistCoff(2);
p2 = DistCoff(3);
p3 = DistCoff(4);

% convert to normalized real-world coordinates (z = 1)
p = CameraMatK\P;            % p = inv(K)*P;

X = p(1,:);
Y = p(2,:);

% radial distance from the optical axis in normalized coordinates
r = sqrt(X.^2 + Y.^2);

% fisheye angle and radial distortion polynomial
theta = (2 / pi) * atan(r);
rho = p0 + p1*theta + p2*(theta.^2) + p3*(theta.^3);

tmp = rho./r;

Xh = X.*tmp;
Yh = Y.*tmp;

% affine matrix entries (column-major indexing of the 2x2 matrix [C D; E F])
C = AffineMat(1);
D = AffineMat(3);
E = AffineMat(2);
F = AffineMat(4);

% back to pixel coordinates in the distorted image
Xd = C*Xh + D*Yh + PPX;
Yd = E*Xh + F*Yh + PPY;

PDistorted = [Xd;
    Yd;
    ones(size(Xd))];

end

#38

Hi @muzammil360, I have come back to work on this research line again. I have read your latest code and I want to ask you some questions. I am using the extrinsic parameters provided by the Pix4D reconstruction, and I need to calculate the intrinsic parameters (fx, fy) and (cx, cy). My goal is to project each pixel onto a 3D model. For this, I have coded the projection with the projectPoints() method of the fisheye model (OpenCV 3.3).

  1. Is the projectPoints function suitable for making this projection?
  2. Are fx and fy obtained directly from the Exif.Photo.FocalPlaneXResolution and Exif.Photo.FocalPlaneYResolution tags?
  3. How have you calculated cx and cy? I use the Xmp.Camera.PrincipalPoint tag; is that right?

Best regards,

JJurado.


#39

Hi @jjurado, sorry for the late reply. Following is my response.

As far as I remember, OpenCV uses a slightly different model than the one used by Parrot. I would recommend that you look into the Sequoia model and the OpenCV model and decide whether they are the same. My function GetProjectionOnDistortedSpace() above implements the Sequoia fisheye model.

FocalPlaneXResolution = ImagePlaneWidth_pixel / ImagePlaneWidth_mm = (1280 / 4.8) = 266.67
In the EXIF, focal length = 4.0 mm.
I use the following formula to convert mm to pixel units:
fx = FocalLength_mm * FocalPlaneXResolution

The principal point information in the metadata is in mm. I simply do the following to get the principal point in pixels:

PP_pix = PP_mm * PixelRes
where PixelRes = ImagePlaneWidth_pixel / ImagePlaneWidth_mm = (1280 / 4.8)
Note: these numbers are for the Sequoia only.
PixelRes is simply the FocalPlaneResolution. Here, I assume that it’s the same in both X and Y directions.
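
Putting those conversions together in a short Python sketch (the principal point values below are placeholders; read the real ones from the Xmp.Camera.PrincipalPoint tag):

image_width_px, sensor_width_mm = 1280, 4.8            # Sequoia single-band image / sensor width

pixel_res = image_width_px / sensor_width_mm           # = FocalPlaneXResolution, ~266.67 px/mm
focal_length_mm = 4.0                                  # EXIF FocalLength

fx = focal_length_mm * pixel_res                       # focal length in pixels, ~1066.67
fy = fx                                                # assuming the same resolution in X and Y

pp_x_mm, pp_y_mm = 2.4, 1.8                            # placeholder principal point in mm
cx = pp_x_mm * pixel_res                               # principal point in pixels
cy = pp_y_mm * pixel_res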