Question: "Test images" for multispectral image processing

warren is asking a question about near-infrared-camera

by warren | June 21, 2017 20:48 | #14565


Hi, all - talking with some collaborators, I'm interested in collecting a set of test images to validate different software for infrared/visible analysis, such as the #infragram project, the Photo Monitoring Plugin, and various other approaches, like Photoshop- or GIMP-based methods:

https://publiclab.org/wiki/ndvi#Activities

Basically, what are some good "before and after" images that could show an image before and after NDVI compositing?

With these, we can "run" NDVI using any of the above software, and compare what the program makes to the expected "after" image to confirm that it works. These would also make for really good examples of what NDVI and other composites actually do!

This is particularly useful for @ccpandhare's project image-sequencer -- his summer project, which will perform NDVI among other things.

Thanks!



6 Comments

Hey @ccpandhare - just a heads up on this!



Hi Jeff - I think I might be able to help with this, but I'm not sure what you are looking for. NDVI is simply an algorithm applied to two bands, so you could create a test program that looks at a specific pixel in each band and checks whether the output pixel has the correct value. The output NDVI value will be accurate as long as the input bands are calibrated. If the input bands are not calibrated, you have to wing it (a subjective process not really related to the software package), and you could test the output only if the image contains some reference features with known reflectance. I have a feeling I'm missing the gist of your question.
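A minimal sketch of that per-pixel check, using the standard NDVI formula (NIR - Red) / (NIR + Red); the function name and the band values below are made-up illustration data, not from any of the tools mentioned:

```python
def ndvi(nir, red):
    """Compute NDVI for a single pixel from two band values."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on pixels that are dark in both bands
    return (nir - red) / (nir + red)

# A test program could check a known pixel in each band against the
# expected value, e.g. NIR = 0.6 and Red = 0.2 should give NDVI = 0.5:
nir_pixel, red_pixel = 0.6, 0.2
assert abs(ndvi(nir_pixel, red_pixel) - 0.5) < 1e-9
```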

Hi, Ned - I'm not asking about calibration at this point, just for a way to empirically test (in the sense of automated testing -- https://en.wikipedia.org/wiki/Test_automation) that various pieces of software correctly implement NDVI and output a correct image for a given source image. By confirming this programmatically (literally running the software and then doing a computed diff against a known correct output image), we have an empirical way to ensure that a given piece of software produces an agreed-upon "correct" output, without having to specify a particular methodology. For NDVI, we really have to consider more than just the mathematical expression, because we also have to consider how it's presented as R, G, B pixel values from 0-255. Different ways of achieving this (per-pixel math vs. GPU-based 'shader' approaches) may mean the implementation varies, but the output, of course, should not.

So what we're looking for is an ideal "before and after" image pair showing an input image and a "correct" output NDVI result image.
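The "computed diff" idea could be sketched roughly as below; the `images_match` helper, the tolerance value, and the sample pixel data are all hypothetical, standing in for whatever image library a real test harness would use:

```python
def images_match(generated, expected, tolerance=1):
    """Return True if every RGB channel differs by at most `tolerance`.

    A small tolerance allows for rounding differences between
    implementations (e.g. per-pixel math vs. GPU shader output)
    when mapping NDVI values onto 0-255 channel values.
    """
    if len(generated) != len(expected):
        return False
    for (r1, g1, b1), (r2, g2, b2) in zip(generated, expected):
        if (abs(r1 - r2) > tolerance or
                abs(g1 - g2) > tolerance or
                abs(b1 - b2) > tolerance):
            return False
    return True

# Flat lists of RGB tuples stand in for decoded image data:
out = [(255, 0, 0), (128, 64, 0)]  # image produced by the software under test
ref = [(255, 0, 1), (128, 64, 0)]  # the agreed-upon "correct" output image
assert images_match(out, ref)  # passes: every channel is within 1 unit
```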

@ccpandhare is approaching this here: https://publiclab.org/questions/ccpandhare/07-08-2017/how-to-verify-if-my-programmatically-generated-ndvi-version-of-an-image-is-correct

and here: https://github.com/publiclab/image-sequencer/issues/34


Thanks for the amazing explanation and reference, @warren! I'll be grateful for your help @nedhorning and @warren :)


How about testing it on subsets of satellite imagery? You can pick the 3 bands and the band order you want for testing, and compare the result to "actual" NDVI values that are either available as products, or you could calculate NDVI on your own to use as a reference. I'd be happy to help if that approach seems sensible.





Thanks for that @warren!


