Fisheye correction


#41

Yes, you are correct. I assume by now you have noticed that the trick is backward mapping. You start with undistorted points in the master plane and add the anomaly (i.e. distortion/mis-registration) effect using your mathematical model. Then you pick up the pixel values at those distorted pixel locations and put them back at the undistorted locations you started from.

And since the distorted pixel locations are not necessarily integers, you may want to use a bilinear/bicubic interpolation method (OpenCV supports several).
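As a rough illustration, here is a minimal sketch of that backward-mapping loop, assuming a hypothetical distort(xs, ys) function that implements your forward distortion model:

import cv2
import numpy as np

def warp_backward(image, distort):
    # Backward mapping: for every undistorted (destination) pixel,
    # sample the distorted (source) location the model maps it to.
    h, w = image.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x, map_y = distort(xs, ys)  # hypothetical model function
    # bilinear interpolation handles the non-integer source locations
    return cv2.remap(image, map_x.astype(np.float32),
                     map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR)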

I had written some material based on that to make it easier for people to understand. If I find some time, I will try to extract it and post it here for others.


#42

Hi @muzammil360,

I am using the external and internal parameters from the camera calibration process in Pix4D. Are they correct for projecting undistorted 3D points onto the master plane, or may the Sequoia distortion affect that calibration?

Best regards,

JJurado


#43

@jjurado, sorry for the late reply.

Yes, you can use the external and internal parameters from the camera calibration process in Pix4D. My personal opinion is that they are even further optimized (in some sense) during the 3D reconstruction process.

They should be able to project undistorted 3D points onto the master plane.
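In case it helps, the standard pinhole projection with those parameters looks like this (a minimal sketch; K is the internal matrix and R, t the external rotation/translation exported by Pix4D):

import numpy as np

def project_points(K, R, t, points_3d):
    # Project Nx3 world points onto the undistorted image plane.
    P_cam = points_3d @ R.T + t   # world -> camera coordinates
    p = P_cam @ K.T               # camera -> homogeneous pixel coordinates
    return p[:, :2] / p[:, 2:3]   # normalize to (u, v)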


#44

As the application notes say explicitly, photogrammetry provides the best results!


#45

@muzammil360
I have realized that the principal point is different for each multispectral band. Have you checked that?

GRE: 2.43165,1.879815 (mm)
NIR: 2.300533,1.70701 (mm)
RED: 2.299911,1.854195 (mm)
REG: 2.392066,1.722098 (mm)

Is this point the center (u, v) of the image plane?


#46

@jjurado, yes, I am aware of that. In fact, this is what we should expect.

The camera manufacturer calibrates each camera after manufacturing, and every single lens ever produced would have slightly different calibration coefficients. NIR and GRE are different not because they capture different bands, but because they sit behind two different lenses. Every lens produced in the factory is unique in this sense and needs individual calibration for scientific measurements. The same should be the case for the rig relatives. You will find that no two cameras, or even two bands of the same camera, have exactly the same internal and external camera parameters.

No, this is not the center of the image plane. These are the coordinates of the point on the imaging sensor that lies exactly behind the lens center. In other words, if a light ray coming in exactly perpendicular to the lens strikes the lens at its center, it passes through undeviated and strikes the sensor at this point (known as the principal point). In an ideal system, it would be at (width/2, height/2), but due to limitations of the manufacturing process there is always an error, which is determined by the camera calibration process and then compensated for using a mathematical model.
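For example, on a 1280-pixel-wide band the geometric center would be cx = 640 px; a calibrated value of, say, 648 px means the optical axis is roughly 8 px off the geometric center, and that offset is exactly what the calibration measures and the model compensates.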

PS: Don’t be intimidated by the jargon above. I am sure you will understand it in a few weeks even if you don’t completely understand it now.

Feel free to reach out if you have any more questions.


#47

Hi @muzammil360,

I am very close to resolving the conversion of 3D points to 2D coordinates. I have coded the distortion equation in C++, but I have some questions. In your code, you used this formula:

theta = (2 / pi) * atan(r);

I think that it must be:

theta = (2 / pi) * atan(r/Z);

where Z is the third component of the 3D coordinate.

Another question is related to the affine matrix. Do I have to transform its values for the following formulas?

Xd = C*Xh + D*Yh + PPX;
Yd = E*Xh + F*Yh + PPY;

Now I show an image which contains the pixels projected from the 3D points. I do not understand why no projected pixels appear on the left… I believe the transformation (mm to pixels) of the principal point is right:
cx: 2.43165 mm -> cx: 648.44 px

Best regards,

Jjurado


#48

I think I have already normalized it to z = 1. I don’t remember exactly, as it’s been some time now, but read my code for GetProjectionOnDistortedSpace() again. There is a comment.
[screenshot of the comment in GetProjectionOnDistortedSpace()]

Perhaps this answers your question.
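In other words (a small sketch; the numbers are arbitrary), normalizing to z = 1 first makes atan(r) and atan(r/Z) the same thing for points in front of the camera:

import numpy as np

X, Y, Z = 1.2, -0.4, 3.0                                 # an arbitrary 3D point
theta_a = (2 / np.pi) * np.arctan2(np.hypot(X, Y), Z)    # atan(r/Z) on the raw point

x, y = X / Z, Y / Z                                      # normalize to z = 1 first
theta_b = (2 / np.pi) * np.arctan(np.hypot(x, y))        # atan(r) on the normalized point

print(np.isclose(theta_a, theta_b))                      # True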



I don’t understand what you mean by transforming their values. Transform which values into what?



Yes, this seems correct.
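For the record, the numbers check out: 648.44 px / 2.43165 mm ≈ 266.67 px/mm, i.e. a pixel pitch of about 3.75 µm, so multiplying the principal point in mm by FocalPlaneXResolution is the right way to convert it to pixels.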


#49

Hi everyone!

I am a novice at Python programming, and @muzammil360, I’m trying to follow the code you shared (MATLAB?). I’ve managed to obtain most of the parameters passed into GetProjectionOnDistortedSpace(), except for P. Could you point me in the right direction to obtain P, please? And how do I produce the undistorted image afterwards?

I’d be glad if someone could share their code in Python, working or not.

Many thanks

My half-baked Python code is below, if someone could help:

import cv2
import numpy as np
import exiftool
import math
import sys
import numpy.linalg as lin

def GetProjectionOnDistortedSpace(P, distCoef, affineMat, camMatK, cx, cy):
    # fisheye polynomial coefficients
    p0 = distCoef[0]
    p1 = distCoef[1]
    p2 = distCoef[2]
    p3 = distCoef[3]

    # camMatK\P -> left matrix division, i.e. inv(camMatK)*P
    p = lin.solve(camMatK.T.dot(camMatK), camMatK.T.dot(P))

    # affine matrix entries
    C = affineMat[0]
    D = affineMat[1]
    E = affineMat[2]
    F = affineMat[3]

    # TODO: compute theta, the fisheye polynomial and the affine mapping
    # (this is the part I am stuck on)
    return Pdistorted

def correct_Fisheyedistortion(meta, image):
    # get the two principal points
    pp = np.array(meta.get_item('XMP:PrincipalPoint').split(',')).astype(float)
    # values in pp are in [mm] and need to be rescaled to pixels
    FocalPlaneXResolution = float(meta.get_item('EXIF:FocalPlaneXResolution'))
    FocalPlaneYResolution = float(meta.get_item('EXIF:FocalPlaneYResolution'))

    cx = pp[0] * FocalPlaneXResolution
    cy = pp[1] * FocalPlaneYResolution

    fx = FocalPlaneXResolution * float(meta.get_item('XMP:FocalLength'))
    fy = FocalPlaneYResolution * float(meta.get_item('XMP:FocalLength'))

    # Is this how you obtain P? (one row of pixel coordinates per pixel)
    h, w = image.shape
    x = np.arange(0, w, 1)
    y = np.arange(0, h, 1)

    rows = []
    for iy in y:
        for ix in x:
            rows.append([ix, iy])

    P = np.array(rows)

    # set up camera matrix for undistortfisheye
    camMatK = np.zeros((3, 3))
    camMatK[0, 0] = fx
    camMatK[1, 1] = fy
    camMatK[2, 2] = 1.0
    camMatK[0, 2] = cx
    camMatK[1, 2] = cy

    distCoef = np.array(meta.get_item('XMP:FisheyePolynomial').split(',')).astype(float)
    affineMat = np.array(meta.get_item('XMP:FisheyeAffineMatrix').split(',')).astype(float)

    Pdistorted = GetProjectionOnDistortedSpace(P, distCoef, affineMat, camMatK, cx, cy)

    undistorted_img = cv2.remap(image, P, Pdistorted, interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_CONSTANT)
    return undistorted_img

#50

@ssel, the code you posted here is perhaps incomplete. I looked at the code you sent me in a personal message. That seems to be a conversion of the MATLAB code to Python.

I can’t really debug your code, but a possible issue might be the units (and therefore the numerical values) of your K matrix. Also, in my function I use the principal point in pixel format, and I am not sure that’s the case in your code.

The best option would be to debug both codes side by side (by inspecting variables) and see where the values in corresponding variables diverge.
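For reference, here is a minimal sketch (in Python, untested, variable names mine) of what the distorted-projection step can look like, following the fisheye model discussed in this thread (theta, the p0..p3 polynomial, and the C, D, E, F affine matrix from post #47):

import numpy as np

def project_on_distorted_space(points_3d, dist_coef, affine, cx, cy):
    # Map Nx3 points (camera frame) to distorted pixel coordinates.
    p0, p1, p2, p3 = dist_coef
    C, D, E, F = affine

    # normalize to the z = 1 plane
    x = points_3d[:, 0] / points_3d[:, 2]
    y = points_3d[:, 1] / points_3d[:, 2]

    r = np.hypot(x, y)
    theta = (2 / np.pi) * np.arctan(r)

    # fisheye polynomial: radial distortion factor
    rho = p0 + p1 * theta + p2 * theta**2 + p3 * theta**3

    # scale the normalized coordinates (guarding against r = 0 at the center)
    scale = np.where(r > 0, rho / np.maximum(r, 1e-12), 0.0)
    Xh = scale * x
    Yh = scale * y

    # affine matrix plus principal point; the units of C, D, E, F
    # must be consistent with cx, cy (see the unit caveat above)
    Xd = C * Xh + D * Yh + cx
    Yd = E * Xh + F * Yh + cy
    return np.column_stack([Xd, Yd])

To undistort a whole image, evaluate this on the pixel grid of the target (undistorted) image and feed the resulting x and y coordinates, reshaped to the image size as float32 maps, to cv2.remap.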

PS: I am trying to upload some MATLAB code for the complete registration. I just hope I find time for that.


#51

Hi all, following is MATLAB code that can undistort and register Sequoia images. I hope it will be helpful for multispectral research.


#52

Nice job @muzammil360

You rock :sunglasses::100: