Vignetting Correction Sample Code


#1

Based on the documentation supplied by Parrot in their application note on vignetting correction:

SEQ AN 02 - How to correct vignetting in images.pdf

Here is some sample Python code which will apply vignetting correction to an input image.

As an example, here is the vignetting polynomial data from one of our sensors.

polynomial2DName = '0,0,1,0,2,0,3,0,4,0,0,1,0,2,0,3,0,4,1,1,1,2,1,3,2,1,2,2,3,1'
polynomial2D = '0.7147839124238495,1.0868966926036208,-1.9469658650960013,1.7594973911304057,-0.8592142982034272,0.4226179922616180,-0.5406863759565294,0.2053832247808616,-0.1377541142380952,-0.7764416022112121,0.8026039825428949,0.0564412856216702,0.7468816658602367,-0.9151781384546372,0.0716926129388081'

Which equates to:

n,m	Coefficient
0,0	0.7147839124238495
1,0	1.0868966926036208
2,0	-1.9469658650960013	
3,0	1.7594973911304057
4,0	-0.8592142982034272

0,1	0.4226179922616180
0,2	-0.5406863759565294
0,3	0.2053832247808616
0,4	-0.1377541142380952

1,1	-0.7764416022112121
1,2	0.8026039825428949
1,3	0.0564412856216702

2,1	0.7468816658602367
2,2	-0.9151781384546372

3,1	0.0716926129388081

And to get a rough idea of the scale of the vignetting effect, here is a 5x5 grid of equal spacing, i.e. x and y both go from 0 to 1 in increments of 0.25.

0.71, 0.89, 0.94, 0.91, 0.75
0.79, 0.94, 0.98, 0.95, 0.83
0.81, 0.95, 0.98, 0.96, 0.85
0.77, 0.92, 0.96, 0.93, 0.80
0.66, 0.85, 0.90, 0.85, 0.69

So there is roughly a 30% drop-off in the corners.

Source code:

import sys
import math
import numpy as np
from osgeo import gdal
from osgeo.gdalconst import GA_ReadOnly
import exiftool

polynomial2DName_tag = 'XMP:VignettingPolynomial2DName'
polynomial2D_tag = 'XMP:VignettingPolynomial2D'

tags = [ polynomial2DName_tag, polynomial2D_tag ]

def build_powers_coefficients(powers, coefficients):
    '''
    :return: List of tuples of the form (n, m, coefficient)
    '''
    powers_coefficients = []

    power_items = powers.split(',')
    coefficient_items = coefficients.split(',')

    for i in range(0, len(power_items), 2):
        powers_coefficients.append((int(power_items[i]), int(power_items[i+1]), float(coefficient_items[int(i/2)])))

    return powers_coefficients

def vignetting(powers_coefficients, x, y):
    value = 0.0

    for entry in powers_coefficients:
        value = value + entry[2] * math.pow(x, entry[0]) * math.pow(y, entry[1])

    return value

def vignette_correction(input_filename, output_filename):
    
    with exiftool.ExifTool() as et:
        metadata = et.get_tags(tags, input_filename)

    polynomial2DName = metadata[polynomial2DName_tag]
    polynomial2D = metadata[polynomial2D_tag]

    poly = build_powers_coefficients(polynomial2DName, polynomial2D)

    input_dataset = gdal.Open(input_filename, GA_ReadOnly)
    image_data = input_dataset.ReadAsArray()
    (rows, cols) = image_data.shape

    vignette_factor = np.ones((rows, cols), dtype=np.float32)

    for y in range(0, rows):
        for x in range(0, cols):
            vignette_factor[y, x] = vignetting(poly, x/cols, y/rows)

    image_data = image_data / vignette_factor

    driver = gdal.GetDriverByName('GTiff')
    output_dataset = driver.Create(output_filename, cols, rows, 1, gdal.GDT_Float32, ['COMPRESS=LZW'])
    output_dataset.GetRasterBand(1).WriteArray(image_data)
    output_dataset = None  # close the dataset so the data is flushed to disk


def test_grid(poly2DName, poly2D):
    poly = build_powers_coefficients(poly2DName, poly2D)

    print(poly)

    for y in np.arange(0, 1.25, 0.25):
        line = []
        for x in np.arange(0, 1.25, 0.25):
            line.append(vignetting(poly, x, y))
        print('{:0.2f}, {:0.2f}, {:0.2f}, {:0.2f}, {:0.2f}'.format(line[0], line[1], line[2], line[3], line[4]))


if __name__ == '__main__':
    if len(sys.argv) < 3:
        print('Usage: vignetting.py input_file output_file')
        sys.exit(-1)
    else:
        vignette_correction(sys.argv[1], sys.argv[2])
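The nested per-pixel loop in vignette_correction dominates the runtime. A vectorized variant using NumPy broadcasting produces the same factor array much faster (a sketch; it takes the (n, m, coefficient) tuples produced by build_powers_coefficients above):

```python
import numpy as np

def vignette_factor_vectorized(powers_coefficients, rows, cols):
    """Evaluate the vignetting polynomial over the whole image at once.

    powers_coefficients is a list of (n, m, coefficient) tuples, as
    returned by build_powers_coefficients in the script above.
    """
    # Normalised coordinates: x runs along columns, y along rows,
    # with (0, 0) at the top-left pixel, matching the loop version.
    y = (np.arange(rows, dtype=np.float32) / rows).reshape(-1, 1)
    x = (np.arange(cols, dtype=np.float32) / cols).reshape(1, -1)

    factor = np.zeros((rows, cols), dtype=np.float32)
    for n, m, coefficient in powers_coefficients:
        factor += coefficient * (x ** n) * (y ** m)
    return factor
```

This can replace the two for loops and the vignette_factor initialisation in vignette_correction with a single call.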

Related threads: Ambiguity with Vignette Model, Red Band Problem
#2

@clement.fallet as pointed out by @muzammil360 in another post the application note for vignetting correction doesn’t actually specify the location of the origin. In my sample code above I’ve assumed the origin to be the top left pixel of the image, but that was simply my assumption.

Could you please update the application note to make it clear?


#3

@clement.fallet, besides the location of the starting pixel, please clarify whether the first pixel is addressed as (0,0) or (1,1). Though this difference seems small, it changes the vignette correction matrix that is produced, so a clarification is required.


#4

The coordinate system we state is a standard photogrammetry coordinate system.
You could easily use another coordinate system with only minimal changes.
For the vignetting, (0,0) is the top left corner of the image.
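Following that convention, and matching the x/cols, y/rows normalisation in the sample code above, the mapping from pixel indices to polynomial coordinates would look like this (a minimal sketch; the 1280x960 dimensions in the test values are just illustrative):

```python
def normalised_coords(col, row, cols, rows):
    """Map a pixel index to the normalised (x, y) coordinates fed to the
    vignetting polynomial, with (0, 0) at the top-left corner."""
    return col / cols, row / rows
```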


#6

I think there is a problem in the script: the formula on line 55 is not right:
image_data = image_data / vignette_factor
So my question is: why is it a division? It's giving odd results!
I modified it to a multiplication, like this:
image_data = image_data * vignette_factor

Here is a sample of the output:

It seems clear that the formula is not right!

Here is a picture of the vignetting, from the image above:


The original script adds light in the corners, whereas the modified script underexposes them and gives correct results.


#7

No, the division is correct. Take a look at Parrot's application note:

For a given pixel p, its vignetting v affects incoming irradiance i depending on its coordinates in pixels (xp, yp) as follows:

p(xp, yp) = v(xp, yp) * i(xp, yp)

The polynomial describes v(xp, yp), so you need to divide the pixel value by the value you generate from the polynomial.

And take a look at the example vignetting grid I produced above. The top left pixel, for example, has a value of 0.71, i.e. the vignetting effect of the lens is reducing the incoming irradiance by 29%, so to recover the value without vignetting you need to divide the pixel value by the polynomial value of 0.71.

Just think about the vignetting effects you see from lenses: typically the corners are darker, which means that to remove the vignetting effect you need to brighten the corners.
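Numerically, using the corner value from the grid above (the 1000.0 irradiance is just an illustrative number):

```python
# A corner pixel with a vignetting factor of 0.71, per the grid above.
incoming = 1000.0           # true irradiance arriving at the corner
v = 0.71                    # vignetting factor from the polynomial
recorded = incoming * v     # what the sensor records: p = v * i
corrected = recorded / v    # dividing by v recovers the irradiance

# Multiplying instead would darken the already-dark corner further:
wrong = recorded * v
```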


#8

Hi Sean, thank you for your reply.

Based on what I see, it's not common vignetting, where the corners are darker with every lens and vignetting correction is normally calculated to compensate for the dark corners.
Unfortunately the Sequoia is not darker in the corners, it's brighter; it's not normal vignetting but reverse vignetting, as seen here: Reverse Vignetting Compensation

Maybe it's normal to have the reverse effect in the raw data but not after correction... That's why I think the formula provided by Parrot (not by you, Sean) is not right! I need time to investigate more...


#9

Hi @kikslater.

I’m travelling around Namibia on vacation with very intermittent and slow internet connections, hence some of the delays in replying.

In the handful of Sequoia images I looked at when I wrote the code above, the images did appear to have standard vignetting effects, i.e. darker corners etc.

But I don’t have any sample Sequoia images with me on my laptop to double check.


#10

Dear @kikislater

Please see the answer I’ve just posted in the original thread.

Best


#11

Hi @kikislater,

Please note that in your images the original script brightens the top right corner of the image. This is the desired behaviour for vignetting correction, as vignetting darkens the image radially outwards from the optical axis.

Also note that your proposed script darkens that same area, making the vignetting even worse.

I think it might be confusing that the image looks more uniform with your proposed script than with the original one. This results from the Bidirectional Reflectance Distribution Function (known as BRDF).

Best,


#12

Hi @seanmcleod,

Your post is an answer in itself.

Thanks!


#13

Hi, thanks for your code.

I'm trying to use it but I have the following problem:

Any idea what it is?

I also ran a similar code in R, but it took an average of 7 hours per image to complete. Is this normal?

Thanks!

JP


#14

The error means that the required tags which hold the calibration data in order to perform the vignetting correction are missing.

More than likely because you’re using an earlier version of the Sequoia firmware.

The Python version takes on the order of 20 seconds.
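If the tags may be absent, a small guard in vignette_correction gives a clearer failure than a raw KeyError (a sketch; the helper name and error message are mine):

```python
def require_tags(metadata, required_tags):
    """Return the tag values in order, or fail with a clear message
    when the calibration tags are missing (e.g. older firmware)."""
    missing = [tag for tag in required_tags if tag not in metadata]
    if missing:
        raise KeyError('Missing calibration tags (older firmware?): '
                       + ', '.join(missing))
    return [metadata[tag] for tag in required_tags]
```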


#15

@seanmcleod Thanks for your reply.

I'm using the latest Sequoia firmware (1.4.1).

When I search for the tags with ExifTool, the tags are there:

Any idea how to access the tags?

Thanks!

JP


#16

I used the tags without "Camera" and it works. But now I'm having the following problem:

Thanks again for your time.

Cheers,

JP


#17

Do you not have gdal installed, or did you forget to import gdal?


#18

Thanks! I'm trying to use the code with several images at a time, but I have the following error:

I’m calling all the images like this:

tags = [ polynomial2DName_tag, polynomial2D_tag ]
input_directory = sys.argv[1]
output_directory = sys.argv[2]

for tif in glob.glob(input_directory + '*TIF'):
    print tif

    for input_filename in input_directory :
        print input_filename
        output_filename = "v" + input_filename

I also did this:

tifs = [f for f in listdir(input_directory) if isfile(join(input_directory,f))]
tifs_str = ''
print tifs
for tif in tifs:
    input_filename = input_directory + tif
    print input_filename
    output_filename = "v" + input_filename

and it did not work.

Any idea?

Thanks!

JP