Hi @seanmcleod,

Following the recommendations in SEQ-AN-01 and with your example above, here are the details for a single pixel of the Sequoia CMOS (the imager).

Firstly, the assumptions:

- We are imaging a flat surface parallel to the imager.
- The imaged surface is perfectly matte (Lambertian).
- On each of the images, the imaged surface is larger than a single pixel (no mixed pixels).
- The sun is the only light source, its rays are all parallel, and they are perpendicular to the imaged surface.
- There are no surrounding objects that reflect sunlight.
- The atmosphere has perfect transmission.
- The imaged surface is sufficiently close to the optical centre of the images, such that vignetting and distortion can be neglected.
- The imaged surface has reflectance R, which is constant over the entire spectral band of interest.
- The Sunshine diffuser behaves like a perfectly matte surface (Lambertian).
- The light source is constant over a spectral band.
- The light source intensity changes in time, but only negligibly during an acquisition.

For drone telemetry, these are fair assumptions or simplifications that can be taken into account separately.

Estimate the irradiance measured by both the Sunshine sensor and the Sequoia imager over the same spectral band. Please note that SEQ-AN-01 contains worked-out examples for obtaining irradiance in arbitrary units for both sensors.

Let’s call these `sunshine` and `sequoia` respectively. These are homogeneous to W m^-2 and correspond to the spectral angular irradiance integrated over each sensor’s subtended solid angle and spectral band.
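As a sketch of what "integrated over the spectral band" means numerically, here is a minimal example; the wavelengths, spectral irradiance, and response values below are made up purely for illustration.

```python
import numpy as np

# Hypothetical spectral samples over a band of interest (all values made up).
wavelengths = np.array([770.0, 780.0, 790.0, 800.0, 810.0])  # nm, e.g. an NIR band
spectral_irradiance = np.array([0.9, 1.0, 1.1, 1.0, 0.8])    # arbitrary units per nm
sensor_response = np.array([0.2, 0.8, 1.0, 0.8, 0.2])        # relative spectral response

# Band-integrated irradiance seen by the sensor (trapezoidal rule):
# homogeneous to W m^-2, but in arbitrary units.
weighted = spectral_irradiance * sensor_response
sequoia = float(np.sum(0.5 * (weighted[1:] + weighted[:-1]) * np.diff(wavelengths)))
```

The same integration, with the Sunshine sensor's own spectral response, gives `sunshine` over the same band.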

The sun illuminates the imaged surface, integrated over the given spectral band, with some arbitrary irradiance `sun` (in W m^-2), arriving perpendicularly at the time of the first image. The surface then reflects this irradiance with an attenuation of `R cos(theta)`, where `theta` is the angle of observation with respect to the normal of the surface.

Since our observation is at nadir and along the optical centre of the image (to a good enough approximation that vignetting and distortion do not matter), `theta` is `0` and we have `sequoia ∝ R sun` and `sunshine ∝ sun`, or explicitly `sequoia = k R sun` and `sunshine = k' sun`. The ratio `k/k'` represents the different solid angles subtended by the imager pixel and by the Sunshine sensor. In general these are difficult to measure.

Solving `sunshine = k' sun` for `sun`, substituting into `sequoia = k R sun`, and then solving for `R`, we find `R = (k' sequoia) / (k sunshine)`, or `R = K sequoia / sunshine` with `K = k'/k`.
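The algebra above can be checked numerically. All values below are hypothetical; `k` and `k'` are of course unknown in practice.

```python
# Hypothetical values for every quantity in the derivation.
sun = 1000.0           # irradiance on the surface, arbitrary units
R = 0.4                # true surface reflectance
k, k_prime = 2.0, 3.2  # proportionality constants (solid angles etc.)

sequoia = k * R * sun      # imager pixel reading
sunshine = k_prime * sun   # Sunshine sensor reading

# Recover R from the two readings using K = k'/k.
K = k_prime / k
R_est = K * sequoia / sunshine  # → 0.4
```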

Following the same steps for a second image with `sun` replaced by `sun'`, we find the same relationship. By scaling `sequoia'` with `sunshine'` you can put the same imaged surface in units common to both acquisitions.
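A minimal sketch of that cross-acquisition scaling, with made-up readings for the same surface under a sun that has dimmed between the two images:

```python
# Hypothetical readings for the same imaged surface in two acquisitions.
sequoia_1, sunshine_1 = 1250.0, 5000.0  # first image
sequoia_2, sunshine_2 = 900.0, 3600.0   # second image, dimmer sun

# Each Sunshine-scaled value equals R / K, so both acquisitions end up
# in the same (arbitrary) units and can be compared directly.
scaled_1 = sequoia_1 / sunshine_1
scaled_2 = sequoia_2 / sunshine_2  # equal to scaled_1 here
```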

You will notice that `K` is not explicitly given. One way around that is to use a calibration image with a target of known reflectance `Rref`, often a Spectralon panel, which gives you `K = Rref sunshine / sequoia`. This is known as reflectance target/tarp/panel calibration.
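Under these assumptions, panel calibration amounts to two lines of arithmetic. The readings below are hypothetical:

```python
# Hypothetical calibration shot over a panel of known reflectance Rref.
Rref = 0.5
sequoia_panel, sunshine_panel = 1562.5, 5000.0  # readings over the panel

K = Rref * sunshine_panel / sequoia_panel  # → 1.6

# Apply K to a survey image of an unknown surface.
sequoia, sunshine = 1250.0, 5000.0
R = K * sequoia / sunshine  # → 0.4
```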

In order to remove assumptions, you need to account for more and more details. A few examples are:

- Use photogrammetry to estimate the normal angle to the object at every point.
- Use photogrammetry to account for reflections.
- Account for vignetting and distortion with the real optical centre.
- Use a goniometer and measure exact angular response with a point light source.