Sorry for the delay. I can think of two things currently that could be at fault.
First, bidirectional reflectance distribution function (BRDF) effects combined with insufficient overlap.
- Do you see a noticeable solar reflection in the source images (actual BRDF of a homogeneous canopy)?
- Or a lower reflectance at nadir (differently mixed soil and plant pixels depending on angle of observation)?
Both effects are reduced by adding an extra perpendicular pass to the original flight plan (homogenizing the observation angles). The first is also mitigated by flying under diffuse light when possible, since an effectively isotropic light source removes the directionality of the illumination.
It is not always clear what the desired result is when imaging a highly directional field.
What agronomists usually mean is an equivalent diffuse reflectance. The more angles of observation you have, the closer the resulting orthomosaic will come to that. The exact computation depends on the photogrammetry software, and is usually some form of weighted average.
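To make the weighted-average idea concrete, here is a minimal sketch in which one ground point is seen from several view angles; the observation values and the cosine weighting are assumptions for illustration, not what any particular photogrammetry package does:

```python
import numpy as np

# Hypothetical example: one ground point observed from several view
# zenith angles (degrees), each with a different measured reflectance
# because of the BRDF. All numbers are assumed for illustration.
view_zenith = np.array([0.0, 10.0, 20.0, 30.0])
reflectance = np.array([0.32, 0.30, 0.27, 0.25])

# One simple weighting choice: favour near-nadir views via the cosine
# of the view zenith angle. Actual software uses its own (often
# undocumented) blending scheme.
weights = np.cos(np.radians(view_zenith))
blended = np.average(reflectance, weights=weights)
print(blended)
```

With more observation angles the blended value becomes less sensitive to any single viewing geometry, which is why extra flight passes push the mosaic toward an equivalent diffuse reflectance.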
Knowing the poses of the camera after photogrammetry allows estimation and correction of the BRDF, but this certainly requires you to do the projection yourself and to define what you mean by correction.
Second, humans are very good at detecting contrast and hue variations. Are the differences at the seams actually significant? That is, measure the variance on a homogeneous area on each side of the seam, then on an identical area straddling it, and check whether the banding affects your measurement beyond aesthetics at your desired uncertainty (three sigma?).
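The seam test above can be sketched as follows; the NDVI-like values, noise level, and seam offset are all simulated assumptions, standing in for pixels you would sample from your own orthomosaic:

```python
import numpy as np

# Sketch of the seam-significance test; all numbers are assumed.
rng = np.random.default_rng(42)

# Homogeneous patch on one side of the seam (NDVI-like values).
side = rng.normal(0.70, 0.02, 2000)

# Identical-sized patch straddling the seam: half of it shifted by a
# hypothetical radiometric step of 0.02 caused by the banding.
straddle = np.concatenate([rng.normal(0.70, 0.02, 1000),
                           rng.normal(0.72, 0.02, 1000)])

# The seam inflates the variance of the straddling patch relative to
# the homogeneous baseline; compare the two standard deviations.
print(side.std(), straddle.std())

# Is the offset itself significant at three sigma of the mean?
print(abs(straddle.mean() - side.mean()) > 3 * side.std() / np.sqrt(side.size))
```

If the extra variance and mean offset fall within your measurement uncertainty, the banding is an aesthetic problem rather than a quantitative one.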
Personally I like removing soil at some arbitrary NDVI value, and then visualising vegetation with a perceptually uniform colour scale like matplotlib's viridis.
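A minimal sketch of that workflow with synthetic bands; the 0.3 NDVI threshold and the random red/NIR values are assumptions for illustration, not recommended settings:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for scripted use
import matplotlib.pyplot as plt

# Synthetic red and near-infrared reflectance bands (assumed values).
rng = np.random.default_rng(1)
red = rng.uniform(0.05, 0.30, (100, 100))
nir = rng.uniform(0.10, 0.60, (100, 100))

ndvi = (nir - red) / (nir + red)

# Mask soil below an arbitrary NDVI threshold (0.3 here, tune to taste).
veg = np.ma.masked_where(ndvi < 0.3, ndvi)

# Visualise the remaining vegetation with a perceptually uniform scale.
plt.imshow(veg, cmap="viridis")
plt.colorbar(label="NDVI (vegetation only)")
plt.savefig("ndvi_vegetation.png")
```

Masked pixels are left blank by `imshow`, so soil drops out of the figure entirely and viridis keeps equal NDVI steps looking equally different to the eye.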