@seanmcleod: fair enough, we’ll keep this discussion public.
I do have to warn you, though, that we may not be able to answer all questions regarding Sequoia, since some of the information you require may be subject to confidentiality agreements.
As I said previously, some companies have partnerships with Parrot and thus have access to more than we are allowed to share publicly.
@clement.fallet as a starting point, before you even produce the workflow document, I think Parrot needs to make very clear ASAP what users without a confidentiality agreement can expect to achieve with the Sequoia unit once the relevant documentation has been produced.
For example, without using Pix4D (or having a confidentiality agreement with Parrot), what sort of corrections/calibrations can users expect to perform on their multispectral data, with and without a ground reflectance target?
Given the marketing and advertising material for the multispectral sensors with an irradiance sensor, a lot of users may have quite different expectations compared to the warning you’re now giving in terms of “we may not answer all questions regarding Sequoia…”
Parrot should clearly indicate which corrections we would be able to perform and which we would not, due to Parrot’s confidentiality agreements. Here is a basic list of corrections:
- Dark Current Compensation
- Vignette Compensation
- Exposure Compensation (ISO + Shutter speed)
- Radiometric Calibration
4.1 Using Micasense Panel only
4.2 Using Micasense + Sunshine Sensor (Irradiance List)
4.3 Using Sunshine sensor only
4.4 Sun angle correction
- Fisheye Correction
- Image Registration (alignment of different bands)
I believe some information regarding the expected release of this documentation would help both Parrot and the Sequoia developer/research community.
@muzammil360: thank you for stating your requirements so clearly.
We’ll get back to you with an announcement in the coming days stating what to expect.
Here is a snippet of Python code to decode the IrradianceList tag, which is an array of the following structure:

```
uint64_t timestamp          (µs)
uint16_t CH0                (count)
uint16_t CH1                (count)
uint16_t gain index
uint16_t integration time   (ms)
float    yaw
float    pitch
float    roll
```
It looks like the sunshine sensor logs data at 5Hz.
```python
import os
import glob
import base64
import struct

import exiftool

irradiance_list_tag = 'XMP:IrradianceList'
irradiance_calibration_measurement_golden_tag = 'XMP:IrradianceCalibrationMeasurementGolden'
irradiance_calibration_measurement_tag = 'XMP:IrradianceCalibrationMeasurement'

tags = [irradiance_list_tag, irradiance_calibration_measurement_tag]

directory = 'test'
channels = ['RED', 'NIR']

for channel in channels:
    files = glob.glob(os.path.join(directory, '*' + channel + '*'))
    with exiftool.ExifTool() as et:
        metadata = et.get_tags_batch(tags, files)
    # reset the file index for each channel so files[index] stays in range
    index = 0
    for file_metadata in metadata:
        irradiance_list = file_metadata[irradiance_list_tag]
        irradiance_calibration_measurement = file_metadata[irradiance_calibration_measurement_tag]
        # the tag value is base64-encoded binary data
        irradiance_list_bytes = base64.b64decode(irradiance_list)
        print(files[index])
        index += 1
        # Q = uint64 timestamp; HHHH = CH0, CH1, gain index, integration time; fff = yaw, pitch, roll
        for irradiance_data in struct.iter_unpack("QHHHHfff", irradiance_list_bytes):
            print(irradiance_data)
```
Some sample output:
```
test\IMG_170228_084229_0134_RED.TIF
(305450198, 2712, 678, 1, 100, -176.65206909179688, 14.487373352050781, -2.3586034774780273)
(305649993, 2975, 743, 1, 100, -177.49917602539062, 12.91478157043457, -3.684767007827759)
(305853880, 3313, 828, 1, 100, -175.5269012451172, 11.912470817565918, -6.285056114196777)
(306056894, 3557, 889, 1, 100, -172.70018005371094, 9.969759941101074, -9.783087730407715)
(306252229, 3784, 945, 1, 100, -172.89474487304688, 9.146184921264648, -9.128634452819824)
(306449090, 3869, 967, 1, 100, -173.9973602294922, 5.676121234893799, -10.742831230163574)
(306649644, 3844, 960, 1, 100, -171.62594604492188, 1.8265786170959473, -12.721040725708008)
(306849148, 3845, 960, 1, 100, -171.8560791015625, 1.8103898763656616, -12.682068824768066)
(307047905, 3794, 948, 1, 100, -173.06724548339844, -1.9556128978729248, -12.612716674804688)
```
Have you calculated irradiance from the example above? If so, please share it; I am not getting the point here. My objective is to calculate the irradiance for each sensor.
Ch0 values in the example are proportional to irradiance.
Yes, but let’s say I want to calculate the irradiance; it would be something like:
Irradiance = count / gain(gain_index, reference_gain) * exposure
Concretely, Sequoia is designed to measure reflectance, not irradiance.
As mentioned above in response to @muzammil360 (here):
The units are not SI units.
Even field spectrometers (and most laboratory spectrometers) are not calibrated in that sense.
If they are, it will depend on whether it is through vacuum or air (and the air temperature, and its humidity and slight movements in the optics and…).
To be more precise:
Units are homogeneous to SI units, but have an arbitrary scaling factor.
I recommend the mentioned article, as it walks through the typical remote-sensing application.
If you have a spectral-radiance-calibrated light source, then with a very careful measurement you can figure out the exact scaling to obtain SI units. This is more than likely not what you intend.
@domenzain are you referring to this article?
@seanmcleod Yes. This gives solid advice on the hypotheses that are made and the formulas that should be used.
@domenzain let’s take the simplest case.
My drone is flying along above a farmer’s field and let’s assume that the sun is at zenith and that both the sunshine sensor and the Sequoia cameras have a pitch and roll angle of 0 degrees so the cameras are pointing at nadir.
And assume there are no clouds.
And the Sequoia takes two images (Image1 and Image2) 1 s apart, and assume there is a large overlap between the two images, e.g. 70%.
So now I have the following data:
(Image1_pixel_value, Image1_iso, Image1_aperture, Image1_exposuretime, Image1_ch0, Image1_gainindex, Image1_integrationtime)
(Image2_pixel_value, Image2_iso, Image2_aperture, Image2_exposuretime, Image2_ch0, Image2_gainindex, Image2_integrationtime)
So how do I calculate the reflectance value for the Image1 pixel?
And if we assume that the Image1_pixel_value and Image2_pixel_value overlap and refer to the same ground point can we use that to get a better estimate of the reflectance value?
@seanmcleod, I think we are missing some info in order to do what you are proposing. Even if we assumed that both of your images were taken at the same ISO and exposure time, I think you’d need to know the sensitivity of both the sunshine sensor and the Sequoia sensor to each of the 4 bands in order to calculate reflectance. For example, looking at the graph I posted earlier, it seems that the sunshine sensor is much less sensitive to the RedEdge band than to the other bands.
@pk123 yes but the sensitivity/quantum efficiency of the two sensors is known by Parrot and they’ve even published at least a fairly rough (resolution wise) version of it online in their marketing material.
They’ve used the same filters for each band on the sunshine sensor and the multi-spectral sensors and would know the lens information for each etc.
In terms of the RedEdge you’ll notice that the filter band is much narrower compared to the other bands.
I have read everything in this forum and I am still confused about this question.
@domenzain you said Sequoia is designed to measure reflectance rather than irradiance, but in the documents on converting irradiance to reflectance you pointed out that the key is getting K, i.e. the ratio between the irradiance at the sunshine sensor and the irradiance at the Sequoia sensor, which can be calculated from the tags in the EXIF. So I want to know how to get the precise value of the irradiance of the sunshine sensor; I already know how to calculate the irradiance value of the Sequoia sensor.
Please give me the solution directly rather than more links. Thanks very much!