@domenzain let’s take the simplest case.
My drone is flying above a farmer’s field. Let’s assume the sun is at zenith, and that both the sunshine sensor and the Sequoia cameras have pitch and roll angles of 0 degrees, so the cameras point at nadir.
And assume there are no clouds.
And the Sequoia takes two images (Image1 and Image2) 1 s apart, with a large overlap between the two images, e.g. 70%.
So now I have the following data:
(Image1_pixel_value, Image1_iso, Image1_aperture, Image1_exposuretime, Image1_ch0, Image1_gainindex, Image1_integrationtime)
(Image2_pixel_value, Image2_iso, Image2_aperture, Image2_exposuretime, Image2_ch0, Image2_gainindex, Image2_integrationtime)
So how do I calculate the reflectance value for the Image1 pixel?
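To make the question concrete, here is a sketch of what I imagine the calculation might look like. This is only a guess at the model, not something from Parrot’s documentation: I’m assuming the camera signal scales linearly with exposure time and ISO and inversely with the f-number squared, that the sunshine-sensor irradiance scales as ch0 / (gain × integration time), and `gain_from_index()` is a hypothetical lookup whose real mapping I don’t know.

```python
def gain_from_index(gain_index):
    # Hypothetical mapping from the sunshine sensor's gain index to a
    # linear gain factor; the real mapping would come from Parrot.
    return 2 ** gain_index

def relative_reflectance(pixel_value, iso, aperture, exposure_time,
                         ch0, gain_index, integration_time):
    # Normalize the pixel value by the camera's exposure settings to get
    # a quantity proportional to at-sensor radiance (assumed linear model).
    radiance = pixel_value * aperture ** 2 / (exposure_time * iso)
    # Normalize the sunshine-sensor count to a quantity proportional to
    # the downwelling irradiance at the time of capture.
    irradiance = ch0 / (gain_from_index(gain_index) * integration_time)
    # Their ratio is only *proportional* to reflectance; an absolute value
    # would still need a calibration constant, e.g. from a reflectance panel.
    return radiance / irradiance
```

Is something along these lines the intended use of the sunshine-sensor fields, or is the relationship between ch0, gain index and integration time different?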
And if we assume that Image1_pixel_value and Image2_pixel_value lie in the overlap and refer to the same ground point, can we use that to get a better estimate of the reflectance value?
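For the overlap question, what I have in mind is simply combining the two independent per-image reflectance estimates. A sketch, assuming the two estimates have independent noise (the variances here are placeholders I made up, not values from the camera):

```python
def combined_reflectance(r1, r2, var1=1.0, var2=1.0):
    # Inverse-variance weighted mean of two independent estimates of the
    # same ground point; with equal variances this is a plain average,
    # which halves the variance relative to a single estimate.
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * r1 + w2 * r2) / (w1 + w2)
```

Does averaging like this make sense here, or does the 1 s gap (slightly different viewing geometry and illumination) break the assumption that both pixels measure the same reflectance?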