10-bit capture for 16-bit TIFF



The Parrot Sequoia sensor has a maximum bit depth of 10 bits; however, all images captured at 10 bits are saved as 16-bit TIFF files.

Could someone please explain the implications of this? How does this affect the raw 10-bit data when it is converted to a 16-bit file?

How does this change the Digital Numbers of what the sensor is capturing?

In terms of radiometric calibration, what formula needs to be applied to account for the change from 10 bit to 16 bit?

Thank you,



Hi @Jman841,

As is common with sensors whose bit depth does not fit exactly into a whole number of bytes, the data is simply multiplied (left-shifted) by 2^d, where d is the difference between the container size in bits and the data size in bits.

Here, d = 16 - 10 = 6, so the transformation is new = original * 2^6.
Since the original data ranges from 0 to 2^10 - 1, the new data ranges from 0 to 2^6 * (2^10 - 1) = 65472, which fits within the 16-bit container.
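A minimal sketch of this scaling in Python with NumPy (the array values are illustrative, not actual Sequoia data):

```python
import numpy as np

BIT_DEPTH = 10                  # native sensor bit depth
CONTAINER = 16                  # TIFF container bit depth
SHIFT = CONTAINER - BIT_DEPTH   # d = 16 - 10 = 6

# Simulated 10-bit digital numbers (DNs) as the sensor would produce them.
dn_10bit = np.array([0, 1, 512, 1023], dtype=np.uint16)

# Values as stored in the 16-bit TIFF: left-shift by d,
# equivalent to multiplying by 2**d = 64.
dn_16bit = dn_10bit << SHIFT

# Recover the original 10-bit DNs: right-shift by d (divide by 64).
recovered = dn_16bit >> SHIFT

print(dn_16bit.tolist())   # [0, 64, 32768, 65472]
print(recovered.tolist())  # [0, 1, 512, 1023]
```

So for radiometric calibration expressed in terms of the original 10-bit DNs, you can divide the stored 16-bit values by 64 (or right-shift by 6) before applying the calibration formula; the scaling is exact and loses no information.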