Reflectance Estimation


#61

You can see the EXIF information in your images using exiftool from here. If you want to understand how the TIFF files are converted to reflectance, these two documents will be helpful:

https://1drv.ms/b/s!Au_f2CPYR0HD-DYszS5NTkXt-2kU

http://forum.developer.parrot.com/uploads/default/original/2X/3/383261d35e33f1f375ee49e9c7a9b10071d2bf9d.pdf


#62

Thanks, I will test the new camera update on Wednesday.
I'll keep you informed!


#63

Hi everybody,

I actually found the problem.
Neither IrfanView nor the Python module “exifread” shows all the tags that are hidden in the image.

In the end, all tags could be extracted with exiftool on the command line.
Unfortunately, that does not work in Python.

Anyway, the problem was not the firmware version.

Thanks for your help


#64

Workaround here:

Clone or download the archive and install with:
python setup.py install


#65

Hi @kikislater,

For reflectance calculation, we must obtain some information which is embedded in the image file as EXIF metadata. While some of it is fixed (the A, B, C coefficients), other values change from image to image (exposure time, …). As you know, we need the exposure time to calculate reflectance, but nearly every image has a different value. So how can we obtain this value automatically, or must we extract these values manually to process our images?

Best…


#66

Hi,
Simply write a Python script; the Python exiftool module helps. You could parse every file in a batch and compute it for every pixel.
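For example, here is a minimal sketch of such a batch extraction, assuming the exiftool command-line tool is installed and on the PATH and that the images are TIFFs in a single folder (the folder name and the chosen tags are placeholders; adjust them to your data):

    import json
    import subprocess
    from pathlib import Path

    def read_exposure_tags(image_dir):
        """Batch-read exposure-related EXIF tags using the exiftool CLI.
        -j = JSON output, -n = numeric values instead of print-converted strings."""
        files = sorted(str(p) for p in Path(image_dir).glob("*.TIF"))
        out = subprocess.check_output(
            ["exiftool", "-j", "-n", "-ExposureTime", "-ISO", "-FNumber"] + files
        )
        # exiftool returns one dict per image, keyed by tag name, plus "SourceFile"
        return {d["SourceFile"]: d for d in json.loads(out)}

    if __name__ == "__main__":
        for path, tags in read_exposure_tags("flight_0001").items():
            print(path, tags.get("ExposureTime"), tags.get("ISO"), tags.get("FNumber"))

From there you can feed the per-image exposure time and ISO straight into the irradiance formula from the documents linked above.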


#67

The answer will probably be given by Parrot, something like this :slight_smile:


#68

Did you notice? Unsurprisingly, they hid my comment… Let’s take it to Twitter now; that should be more relevant!


#69

Dear @kikislater
Part of your comment has been edited because it was against the rules of this forum.
The rest has been left untouched, as you can see.
Despite several warnings, you decided to carry on posting content that goes against the guidelines of this forum.
We will be happy to discuss it by phone or through Skype; you have our contact information.


#70

Since the Sequoia captures irradiance, why is there a solid-angle concern? Shouldn’t solid angle be applicable only to radiance? Correct me if I’m wrong. Isn’t the ratio k/k’ a combination of many things, such as sensitivity, etc., but not solid angle specifically?


#71

Hi to all, thanks for all your help.

Is the following the correct processing order?

1- Calculate pixel irradiance
2- Calculate sunshine sensor irradiance
3- Calculate reflectance (R=Ips/I’ss) (I don’t have a calibration panel)
4- Apply vignetting correction
5- Create an orthomosaic per band
6- Calculate vegetation indices


#72

Dear @muzammil360,

Can I use an online decoder to decode the Base64? Just copy the text from “Irradiance List : .TbdeCAAAAAC5ENYDAQBkACGdCcOASi5A4jHfQArAYQgA …” until the end. Is that OK?

After I decode it, what is the next step to calculate the sunshine sensor irradiance?


#73

@jpdelped,

  1. You might want to apply vignette compensation at the start.
  2. You can’t really use that formula (R=Ips/I’ss) unless you have a calibration panel. You can look here for more info on that (see the sketch after this list).
  3. I am not sure about computing an orthomosaic from reflectance images.
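To make steps 1–3 concrete, here is a rough sketch that assumes the sensor model from the application note linked earlier, I = f²·(p − B)/(A·ε·γ + C), where p is the raw pixel value, ε the exposure time, γ the ISO and A, B, C the SensorModel coefficients from the image XMP (double-check the exact form and units against that note). The function names and the handling of K, which must come from a panel of known reflectance, are illustrative only:

    import numpy as np

    def pixel_irradiance(p, f_number, exposure_time, iso, A, B, C):
        """Sensor-model irradiance (arbitrary units) for an array of raw pixel values p.
        A, B, C are the SensorModel coefficients read from the image XMP;
        exposure_time and iso come from the EXIF of the same image."""
        return f_number ** 2 * (np.asarray(p, dtype=float) - B) / (A * exposure_time * iso + C)

    def reflectance(p, sunshine_irradiance, K, **sensor_args):
        """R = K * I_pixel / I_sunshine. K is derived from a calibration-panel image of
        known reflectance; without a panel you only get values up to an unknown scale."""
        return K * pixel_irradiance(p, **sensor_args) / sunshine_irradiance

Vignette compensation should be applied to p before this step, as noted in point 1.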

#74

Using online decoders won’t really help you much. You will still need a programming language to make something out of the data. Try looking at the relevant sections of this document.
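For what it’s worth, here is a minimal Python sketch of the first step: pull the XMP tag with the exiftool CLI, Base64-decode it, and split it into fixed-size records. The tag name, the 28-byte record size, and the `<QHHHHfff` layout are assumptions taken from community scripts, so verify the field meanings against the document mentioned above before relying on them:

    import base64
    import struct
    import subprocess

    def decode_irradiance_list(image_path):
        """Decode the Base64-encoded IrradianceList XMP tag into raw records."""
        # -b prints the bare tag value; the tag name "IrradianceList" is assumed here
        raw = subprocess.check_output(["exiftool", "-b", "-IrradianceList", image_path])
        blob = base64.b64decode(raw)
        record = struct.Struct("<QHHHHfff")  # 28 bytes per measurement -- assumed layout
        return [record.unpack_from(blob, i)
                for i in range(0, len(blob) - record.size + 1, record.size)]

    # Each tuple would then be something like (timestamp, count0, count1, gain, exposure,
    # x, y, z); confirm the exact field meanings from Parrot's documentation.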


#75

Hi woo, sorry for the delayed reply.

It is an artefact of the calibration technique.

You can see it as a scale factor to convert the arbitrary units of one sensor to those of the other (which are also arbitrary but different).
Equivalently, you can interpret it as a solid angle mismatch between sensors of identical arbitrary units.

The interpretation is of no consequence to the reflectance estimation.


#76

Perhaps you could share the complete process you have followed. I am reading your comments and I would like to try this process too, although my programming knowledge is nil…

I see that you calculate or extract the encoded data from the image… do you do this from a single image, for example the image that contains the photo of the calibration card, or do you use the script to decode the IrradianceList from all the images?

Greetings, and thank you in advance if you can share the process in a more detailed way for less advanced users.

Regards


#77

Hello @marlonfgj1,

The coefficient K is calculated only from the images of a calibration card. This same coefficient is then applied directly to the other images of the same flight to obtain the reflectance of the scene.

Sadly, this type of calculation requires a deep understanding of the subject (to make sure the computation is meaningful) as well as of programming (to ensure that the implementation is correct and efficient).

The link to the NASA article above sums up the process quite well without programming, but assumes an unavoidable level of physics.
Otherwise, applications such as Pix4D are needed to exploit and interpret the data in a more abstract way.

Cordially,


#78

Thank you very much for your answer. I will start this calculation by extracting the EXIF data, which seems to be the first thing to do… It seems necessary to try it this way, since acquiring Pix4D is still too expensive.
Greetings and many thanks… I would also like to know what others are doing with the camera in terms of research projects; if you could share, that would be great.


#79

Is the Sequoia a direct replacement for the Multispec 4C? I have been able to calculate reflectance for Sequoia data using: https://1drv.ms/b/s!Au_f2CPYR0HD-DYszS5NTkXt-2kU and http://forum.developer.parrot.com/uploads/default/original/2X/3/383261d35e33f1f375ee49e9c7a9b10071d2bf9d.pdf.
However, I also want to process Multispec 4C data. I was told the process should be the same for the Sequoia and the Multispec 4C; however, some of the EXIF and XMP tags differ between the two cameras. Can you give me some brief info on how the processing of Multispec 4C reflectance differs from that of the Sequoia?


#80

@jeromeoc, unfortunately I have never worked with the Multispec 4C, so I really can’t comment on its workflow. But it is my understanding that if you have been able to understand the Sequoia workflow and procedure, understanding the Multispec 4C or any other camera should not be too difficult.