Reflectance Estimation

@domenzain let’s take the simplest case.

My drone is flying above a farmer’s field. Let’s assume that the sun is at zenith and that both the sunshine sensor and the Sequoia cameras have pitch and roll angles of 0 degrees, so the cameras are pointing at nadir.

And assume there are no clouds.

And the Sequoia takes two images (Image1 and Image2) 1 s apart, with a large overlap between the two images, e.g. 70%.

So now I have the following data:

(Image1_pixel_value, Image1_iso, Image1_aperture, Image1_exposuretime, Image1_ch0, Image1_gainindex, Image1_integrationtime)

(Image2_pixel_value, Image2_iso, Image2_aperture, Image2_exposuretime, Image2_ch0, Image2_gainindex, Image2_integrationtime)

So how do I calculate the reflectance value for the Image1 pixel?

And if we assume that Image1_pixel_value and Image2_pixel_value overlap and refer to the same ground point, can we use that to get a better estimate of the reflectance value?

Thanks

Hi @seanmcleod,

Following the recommendations in SEQ-AN-01 and with your example above, here are the details for a single pixel of the Sequoia CMOS (the imager).

Firstly, the assumptions:

  1. We are imaging a flat surface parallel to the imager.
  2. The imaged surface is perfectly matte (Lambertian).
  3. On each of the images, the imaged surface is larger than a single pixel (no mixed pixels).
  4. The sun is the only light source and its rays are all parallel; and perpendicular to the imaged surface.
  5. There are no surrounding objects that reflect Sunlight.
  6. The atmosphere has perfect transmission.
  7. The imaged surface is sufficiently close to the optical centre of the images, such that vignetting and distortion can be neglected.
  8. The imaged surface has reflectance R, which is constant over the entire spectral band of interest.
  9. The Sunshine diffuser behaves like a perfectly matte surface (Lambertian).
  10. The light source is constant over a spectral band.
  11. The light source intensity changes in time, but only negligibly during an acquisition.

For drone telemetry, these are fair assumptions or simplifications that can be taken into account separately.

First, estimate the irradiance measured by both the Sunshine sensor and the Sequoia imager over the same spectral band (see SEQ-AN-01). Note that there are worked examples there for obtaining irradiance in arbitrary units for both sensors.

Let’s call these sunshine and sequoia respectively. Both have units of Wm^-2 and correspond to the spectral angular irradiance integrated over each sensor’s subtended solid angle and spectral band.
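
As a minimal sketch in Python (the sensor-model form below is my reading of SEQ-AN-01, with A, B, C taken from the SensorModel EXIF tag; treat it as illustrative rather than authoritative):

```python
def sequoia_irradiance(p, f_number, exposure_s, iso, A, B, C):
    # Assumed SEQ-AN-01 form: I = f^2 * (p - B) / (A * exposure * ISO + C),
    # in arbitrary units; p is the raw pixel value.
    return f_number**2 * (p - B) / (A * exposure_s * iso + C)

def sunshine_irradiance(ch0_count, gain, integration_s):
    # Sunshine sensor: photon count normalised by gain and integration time,
    # also in arbitrary units.
    return ch0_count / (gain * integration_s)
```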

At the time of the first image, the sun illuminates the imaged surface perpendicularly with some arbitrary irradiance sun (in Wm^-2, integrated over the given spectral band). The surface then reflects this irradiance with an attenuation of R cos(theta), where theta is the angle of observation with respect to the normal of the surface.

Since our observation is at nadir and along the optical centre of the image (to a good enough approximation that vignetting and distortion do not matter), theta is 0 and we have sequoia ∝ R sun and sunshine ∝ sun, or explicitly sequoia = k R sun and sunshine = k' sun. The ratio k/k' represents the different solid angles subtended by an imager pixel and by the Sunshine sensor. In general these are difficult to measure.

Solving both equations for sun, equating them, and solving for R, we find R = (k' sequoia) / (k sunshine), or R = K sequoia / sunshine with K = k'/k.

Following the same steps for a second image with sun replaced by sun' we find the same relationship. By scaling sequoia' with sunshine' you can put the same imaged surface in units common to both acquisitions.

You will notice that K is not explicitly given.

One way around that is to use a calibration image of a target of known reflectance Rref, often a Spectralon panel. This gives you K = Rref sunshine / sequoia. This is known as reflectance target/tarp/panel calibration.
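
As a minimal sketch (Python; `sequoia_cal` and `sunshine_cal` would be the irradiance estimates for the calibration shot, e.g. averaged over the panel pixels):

```python
def calibrate_K(R_ref, sequoia_cal, sunshine_cal):
    # From R = K * sequoia / sunshine, with R = Rref known on the panel pixels.
    return R_ref * sunshine_cal / sequoia_cal

def reflectance(K, sequoia, sunshine):
    # Apply the calibrated K to any later acquisition over the same band.
    return K * sequoia / sunshine
```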

In order to remove assumptions then you need to account for more and more details. A few examples are:

  1. Use photogrammetry to estimate the normal angle to the object at every point.
  2. Use photogrammetry to account for reflections.
  3. Account for vignetting and distortion with the real optical centre.
  4. Use a goniometer and measure exact angular response with a point light source.

Ok so now I understand why there is IMU data in exif …

@Domenzain thanks for the detailed response. A couple of initial questions on your response.

You mentioned: [quote=“domenzain, post:2, topic:5597”]
The ratio k/k’ would represent the different solid angles subtended by imager pixel and the Sunshine. In general these are difficult to measure.
[/quote]

Given the dimensions of the sensors and knowledge of the lenses etc., wouldn’t it be fairly easy to work out the ratio of solid angles between the image sensor and the sunshine sensor?

Also wouldn’t the k/k' ratio be influenced by the difference in sensitivity between the image sensor and the sunshine sensor as shown below?

But again, given knowledge of this difference in sensitivity, you could integrate the area under each band for the image sensor and the sunshine sensor to get the ratio of sensitivities on a per-band basis between the two, e.g. roughly:
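
(Python; the wavelength grid and response curves below are made-up placeholders, not Parrot’s published data.)

```python
import numpy as np

# Hypothetical spectral grid (nm) and relative response curves over one band.
wavelength = np.linspace(770.0, 810.0, 200)                        # e.g. a NIR band
imager_resp = np.exp(-((wavelength - 790.0) / 10.0) ** 2)          # placeholder curve
sunshine_resp = 0.8 * np.exp(-((wavelength - 790.0) / 12.0) ** 2)  # placeholder curve

# Per-band sensitivity ratio: areas under the two response curves.
ratio = np.trapz(imager_resp, wavelength) / np.trapz(sunshine_resp, wavelength)
```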

So if it is possible to work out the ratio of solid angles and the ratio of sensitivities, doesn’t that mean we could work out the k/k' ratio and therefore calculate the reflectance without using a calibration image?

You could estimate it within an order of magnitude easily.
A detailed geometrical model and calculation is less obvious.

Yes. Another way of seeing K is that it amalgamates every aspect of the optics that hasn’t already been taken into account.
In principle it applies on a per-pixel basis (in practice, correcting for vignetting and distortion is enough to render all pixels comparable to each other for a given Sequoia). A perfectly modelled system with everything in SI units would always have K = 1.

Yes. I would strongly suggest measuring it rather than working it out.

A further clarification: measuring solid angles precisely for each pixel, or operating a goniometer, is difficult; so is working out the exact values.

Setting up a repeatable environment with a known reflectance and managing the measurement uncertainty is easier, and by extension so is measuring this K with the last formula in my post. That is why I “strongly suggest measuring it”.

In terms of calculating K using a calibration image, i.e. K = Rref sunshine / sequoia wouldn’t you only need to do this once per Sequoia unit as opposed to doing it for each flight? Or is there significant ‘drift’ in the sensors over time?

If it only has to be done once then presumably Parrot could perform the calibration once the unit is manufactured and just before they ship the unit?

Doing it for each flight is likely overkill.
Indeed, many aspects are difficult to control in the field and render a target measurement counterproductive:

  • Casting shadows or reflecting light onto the target (note that the equations above assume sun is identical for both sensors in a given acquisition; a bright red shirt can screw up the measurement).
  • Not considering the angle of observation (this can be really bad when cos(theta) plays a role).
  • Blocking a substantial part of the hemisphere seen by the Sunshine sensor or the target (if someone stands quite close to the drone, the geometries are no longer comparable; this is more dramatic under purely diffuse light).

On the other hand, the only real way of ensuring that your data is consistent through time is measuring a known standard. This applies generally to any precision measurement device.
The frequency of calibration (measurement against a standard) will depend on the precision that you require.

I noticed there is an Irradiance Unit Scale EXIF tag, e.g. from one of our units:

Irradiance Unit Scale : 1.7873713395012918

Is this possibly a factory-calculated value for K?

@seanmcleod, I also noticed that and discussed it in my post: Explanation of Exif Tags - Radiometric Calibration

Guys, I have tried to do radiometric calibration in Matlab and I am facing some issues. Let me share my procedure here.

I took the *0001_NIR.TIF image (Sequoia calibration image) as my reference image and used it to calculate K. I knew the reflectance Rref of my target. I calculated the sunshine irradiance as Ch0Count/(gain × exposuretime), and the sequoia irradiance using the formula shared in the Parrot application note. This allowed me to calculate my gain K using K = Rref*sunshine/sequoia.
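
In code terms (a Python sketch of the same steps; my actual implementation is in Matlab, and all numbers below are placeholders):

```python
import numpy as np

# Placeholder EXIF values and a stand-in raw image, for illustration only.
A, B, C = 1.0, 200.0, 0.0                   # SensorModel coefficients
f_number, exposure, iso = 2.2, 1e-3, 100.0  # aperture, exposure time (s), ISO
R_ref = 0.50                                # known target reflectance
ch0_count, gain, integration = 2.0e6, 1.0, 0.015

img = np.random.randint(0, 1024, (960, 1280)).astype(float)
panel_mask = np.zeros(img.shape, dtype=bool)
panel_mask[400:500, 600:700] = True         # pixels covering the panel (made up)

sunshine = ch0_count / (gain * integration)                   # Sunshine irradiance (a.u.)
sequoia = f_number**2 * (img - B) / (A * exposure * iso + C)  # per-pixel irradiance (a.u.)
K = R_ref * sunshine / sequoia[panel_mask].mean()             # calibration on panel pixels

R = K * sequoia / sunshine   # per-pixel reflectance; some values fall outside ]0, 1[
```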

Once I know K, I can use R = K*sequoia/sunshine for any pixel value and thus for the whole image. All of this looks simple and straightforward, but it results in some problems (and thus questions in my mind).

  1. There are some pixels for which R goes above 1 (as high as 1.6). How do we explain this?
  2. Any pixel below the B value results in a negative reflectance. Do we force it to zero? This seems plausible only if B refers to black-level compensation.
  3. If B refers to black-level compensation, how can we assume it to be constant for a given camera? It should change with temperature, and it is definitely different from the value of the BlackLevel tag in the EXIF.

The tag is related to the procedure described here, but it is not possible to use it reliably at present and thus Parrot does not communicate further on that point. It is not K.

Sequoia has an InvalidPixels tag to identify pixels that behave abnormally (hot, cold, dead, stuck), although these represent a small fraction of the total number of pixels.

Any sensor has some inherent random error in its measurement, which is often normally distributed. This means that, relative to the mean value of the pixels corresponding to a reflectance target, some will be above and some below the correct value.

Photogrammetry will strongly mitigate this type of error, since many points of view contribute many reflectance measurements that are weighted to give a more accurate estimate for a given imaged area. The more samples are collected for a given area, the closer the estimate will be to the mean value.

The usual way of dealing with this on a single-image basis is to simply drop the non-physical pixels, that is, any pixels that give reflectances outside the interval ]0, 1[. For example:
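
A minimal numpy sketch (`R` here is just an example reflectance array):

```python
import numpy as np

R = np.array([[0.12, 0.35, 1.60],
              [0.48, -0.02, 0.91]])   # example per-pixel reflectances

# Mask non-physical estimates instead of clamping them, so that later
# statistics are not biased towards 0 or 1.
R_valid = np.where((R > 0.0) & (R < 1.0), R, np.nan)
mean_R = np.nanmean(R_valid)          # statistics over physical pixels only
```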

If you have values that are consistently and significantly over 1, then you might be dealing with a non-random measurement error. This could be introduced by an incorrect reference reflectance value, for example.

For any model, this question comes up at some point.
You can expect the mean variation of B with temperature to be around 200 counts over 30 K (note that the black level has a wider variance than that).

For an adequately exposed image acquired within 20 K of the working temperature, this represents less than 0.1% error in the black level due to temperature. Parrot considers this an acceptable error.

@domenzain, thanks a lot for the detailed response.

What do you mean by many points of view? Does that mean many pixels or many calibration shots?

Does that mean that I simply drop the pixels shown in white in the mask (see image below), since they produce a reflectance value greater than 1? What about forcing them to 1 instead?

For better situational awareness, please see the image below, which maps DN to reflectance. Approximately 5% of the pixels exceed a reflectance of 1. Does this look normal?

I’m pretty sure he’s referring to many pixels of the same ground point from multiple overlapping photos taken during the flight. I’d guess ‘many’ is on the order of about 4, depending on the percentage overlap and sidelap specified during your flight planning, e.g.:
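
As a back-of-the-envelope estimate, assuming a regular grid flight plan (the sidelap figure here is made up):

```python
forward_overlap = 0.70   # along-track overlap, as in my example above
side_overlap = 0.60      # across-track sidelap, hypothetical

# On a regular grid, each ground point appears in roughly:
along_track_views = 1.0 / (1.0 - forward_overlap)       # ~3.3 images per strip
total_views = along_track_views / (1.0 - side_overlap)  # ~8.3 images overall
```

So with sidelap included the count can be quite a bit higher than 4.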

@muzammil360 could you share your sample calibration image? And what was the reflectance value for the calibration target you used?

@muzammil360 was there a shadow covering most of the reflectance panel in the sample image you displayed above?

Indeed, I meant different images.
This matters more for the final orthomosaics than for the reflectance target images.

It is not a good idea to force the values to 0 or 1, for two reasons: perfectly reflective or perfectly absorptive media do not occur in agricultural applications, and all values beyond the threshold would get the same reflectance estimate.

The second reason is subtle. Clamping causes a loss of information similar to simply dropping the data, but it also biases any further processing by introducing probably-false values.

As @seanmcleod suggests, with one of those images and more details about the calibration target there’s more detailed analysis to be made.

The imaged surface and the Sunshine sensor diffuser should be parallel and under identical lighting. See my initial response for the hypotheses that are made (and which should be verified for the equations to stand).

Parallel, are you sure?
http://forum.developer.parrot.com/uploads/default/original/2X/c/c42f1387fee1cb993d4dde991c018e00ed6533a5.jpg
On the Disco Ag it doesn’t seem parallel!

Please, Parrot, give us information. As we have discussed, your competition is satellite data, and satellite providers deliver calibrated data or a well-documented reflectance computation (examples: https://landcover.usgs.gov/pdf/huang2.pdf https://landsat.usgs.gov/landsat-4-5-thematic-mapper-tm-calibration-notices https://landsat.usgs.gov/calibration).
Why is it not the same for Sequoia? It’s really tiresome and a waste of time…

@kikislater @Domenzain isn’t saying that the sunshine sensor and the imaging sensor have to be parallel during flight. Both the sunshine sensor and the imaging sensor have IMUs to record their respective attitude angles, and those angles are stored in the EXIF data.

So given the attitude information you can correct for the effects of the relative attitudes between the sensors, the sun, the ground etc.

However, in his simplified formulas for calculating K (so that you don’t have to deal with attitude differences) the assumption is that they are parallel.
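
To first order the correction is a cosine factor. A sketch (Python), assuming direct sunlight at zenith and the simple Lambertian model from @domenzain’s equations above; all values are illustrative:

```python
import math

sequoia, sunshine = 75.0, 50.0   # irradiance estimates for one acquisition (a.u.)

def level_equivalent(measured, tilt_deg):
    # Scale a measurement back to what a level sensor would have seen,
    # assuming a single direct light source at zenith (cosine law).
    return measured / math.cos(math.radians(tilt_deg))

sequoia_level = level_equivalent(sequoia, 5.0)      # camera tilt from its IMU (degrees)
sunshine_level = level_equivalent(sunshine, 12.0)   # Sunshine tilt from its IMU (degrees)
R = 1.0 * sequoia_level / sunshine_level            # with K = 1.0 as a placeholder
```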

Hi @kikislater,

Please refer to my original response above. It details all the hypotheses I’ve made and how many of them affect the result.

When the Sunshine sensor is set up at an angle, you clearly need to account for that.