Reflectance Estimation


#21

@seanmcleod, please find the image below. I had to compress it into a zip to upload it.

Reflectance value: 0.645
Value of K = 0.7532 (computed)

IMG_160925_153409_0001_NIR.zip (1.5 MB)


As for using many pixels from multiple photos, I understand that now.


#22

I am not really sure about it.


#23

@muzammil360, at first glance the image you posted above looks like it contains some shadows and some parts that are in direct sunlight.

That may well account for why some parts of the image come back with a reflectance > 1.


#24

@seanmcleod, it seems that only the top left is in shadow. The rest of the image looks fine (at least to me). Do you think it may still be a problem?


#25

@muzammil360 I’m trying to process your sample image, but there is no SensorModel EXIF tag in it. Was this image taken with very old firmware?


#26

Important: this does not apply to the standard Sequoia workflow.

@muzammil360, as the SensorModel tag is missing I imagine you’re doing a first-order approximation count = A' · exposure · gain · irradiance + B'. As there are two coefficients to fit, you need at least two measurements. One of those can simply be the mean value of the count when irradiance = 0, that is, the measured physical black level.

Assuming there is no bias in the reflectance panel values, your estimation of K would be exact only for the mean of the count (DN) over the panel, and would then drift in precision for counts larger or smaller, to the degree that irradiance is not perfectly linear in count.

Have you corrected for vignetting as shown in SEQ-AN-02?


#27

Here is an example of an NDVI map of a field using the Sequoia without any calibration/normalization, and then the same data after applying vignetting correction, using the SensorModel tag and normalizing against the sunshine luminance data.

No attitude information from the sunshine sensor or the multispectral sensor is being used.


#28

Yes, that’s an old image actually.

[quote=“domenzain, post:26, topic:5597”]
@muzammil360, as the SensorModel tag is missing I imagine you’re doing a first-order approximation count = A’ · exposure · gain · irradiance + B’. As there are two coefficients to fit, you need at least two measurements. One of those can simply be the mean value of the count when irradiance = 0, that is, the measured physical black level.

Assuming there is no bias in the reflectance panel values, your estimation of K would be exact only for the mean of the count (DN) over the panel, and would then drift in precision for counts larger or smaller, to the degree that irradiance is not perfectly linear in count.
[/quote]

@domenzain, in order to calculate the Sequoia irradiance, I am using this formula: count ∝ exposure · gain · SunIrradiance. This does take A' into account but models B' as zero, i.e. the black level of the sunshine sensor is assumed to be zero (which in practice it most probably is not).

I took the part of the reflectance panel near the middle of the image. Therefore, I assumed that vignetting correction would not make much difference. However, now I think it would be a good idea to apply it.


#29

@seanmcleod, very nice share. Does this indicate that the shadowy region was covered by clouds?


#30

Nice share, but there still seem to be strange values in the second output… we can still see square blocks inside the main field!


#31

@seanmcleod, how did you produce it? I mean, what did you use for image stitching? Can you process the same data with Pix4D and upload a comparison?


#32

No, I’m pretty sure it doesn’t. If you take a look at the report I generate below, which we use for ‘debugging’ survey flights, you’ll notice that the sun was roughly just past north-east, and by looking at the photo numbers you can see that the fixed-wing drone flew roughly from east to west, then turned and flew west to east, repeating this cycle twice for this field.

And so I think the banding effect you see is a side-effect of exposure differences etc. in flying towards and away from the sun.

In the yellow annotation blocks the first number is the photo number, followed by the exposure value (EV) and then the CH0 count from the sunshine sensor (L). In this case for the NIR channel.

And here is a plot of the exposure values for each channel during the flight for this field.

And lastly a plot of the CH0 values from the sunshine sensor.

We use AgiSoft for the stitching. I don’t have a license for Pix4D, hence the reason for me to perform the corrections etc. before feeding the images to AgiSoft.


#33

Although the example I shared above showed a good, though not perfect, improvement when applying vignetting correction, exposure normalization (SensorModel) and luminance normalization, here is an example where applying the same set of corrections/normalizations didn’t result in an improvement.

In fact it didn’t seem to make any difference at all, with the same banding/blockiness compared to when we processed the flight without applying any of the corrections/normalizations.

The blue dots indicate the location of the individual photos taken during the flight.

Any thoughts on what could be causing this?


#34

@Domenzain any comments/suggestions?


#35

Can someone explain how the “pixel value” is calculated and what it refers to?


#36

Hi Sean,

Sorry for the delay. I can think of two things currently that could be at fault.

Firstly, bidirectional reflectance distribution function (BRDF) effects combined with insufficient overlap.

  • Do you see a noticeable solar reflection in the source images (actual BRDF of a homogeneous canopy)?
  • Or a lower reflectance at nadir (differently mixed soil and plant pixels depending on angle of observation)?

Both these effects are reduced by adding an extra perpendicular pass to the original flight plan (homogenization of the observation angles). And the first one is also helped by flying under diffuse light if possible (mitigation of the directionality by having an effectively isotropic light source).

It is not always clear what the desired result is when imaging a highly directional field.
What agronomists usually mean is an equivalent diffuse reflectance. The more angles of observation, the more the resulting orthomosaic will approach such a thing. The exact computation will depend on the photogrammetry software, and will usually be a weighted average…

Knowing the poses of the camera after photogrammetry allows estimation and correction of the BRDF, but this certainly requires you to do the projection yourself and to define what you mean by correction.

Secondly, humans are very good at detecting contrast and hue variations. Are the differences at the frontiers significant? That is, measure the variance on a homogeneous area on each side of the frontier, then on an identical area that includes the frontier, and see whether the banding affects your measurement beyond aesthetics at your desired uncertainty (three sigma?).

Personally I like removing soil at some arbitrary NDVI value, and then visualising vegetation with a perceptually uniform colour scale like matplotlib’s viridis.


#37

Hi Jule,

The pixel value definition depends on your photogrammetry software.

At its most basic it is an unweighted average of all imaged observations for a given surface element.
Usually it is at least weighted by proximity to the area, position in the image, angle of observation, etc.


#38

Hi, can you explain how you calculate the sunshine irradiance in detail?
From your words I gather that you are using the equation Irradiance = ch0 count / (gain(gain_index, reference_gain) · exposure time). From the image EXIF the gain_index is always 1, so how do you decide the value of reference_gain, as domenzain describes? [quote=“domenzain, post:4, topic:5261”]
Also note the choice of reference_gain is arbitrary as long as you keep the same reference everywhere.
[/quote] Can you explain it for me in detail? I have been blocked by this question for a long time and would really appreciate your kind help. Thanks very much!


#39

Does anyone have the data that lie behind these sensitivity plots? The figures themselves are of little use for my purposes; namely, I’m looking to convolve the Sequoia values with satellite sensor responses.


#40

Dear @lupinthief
Here is the spectral transmissivity of the filters themselves. Unfortunately we can’t give you the quantum efficiency of the detector itself, as it is regarded as confidential information by the manufacturer.
Nevertheless, I guess you can either try to interpolate the curve above or find a similar curve for a CMOS sensor.

Spectral transmissivity Monocam.zip (72.4 KB)