Question: How can one Infragram photo produce different NDVI results?

cfastie is asking a question about infragram

by cfastie | December 05, 2015 15:59 | #12482


Rutvij bought an infrared Mobius camera (Infragram Point & Shoot) from the Public Lab store. He took a photo of a tree and processed it three ways: at infragram.org, with Ned's Photo Monitoring plugin for Fiji, and with Python code that predates infragram.org. The results from the three methods are quite different, and he asked me why. I thought this was a good question. Which method is best, and are the others unreliable? If so, should they be removed as options for users who can find links to them at the Public Lab site? Should the process of getting reliable NDVI results be made less confusing for first-time users?
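For context, all three methods compute the same underlying index. A minimal sketch, assuming the usual red-filter channel assignment (with a Rosco Fire red filter, the blue channel records mostly NIR and the red channel records visible red; that assignment is my assumption about this camera setup, not something verified from Rutvij's photo):

```python
# Minimal NDVI sketch for a red-filter Infragram photo.
# Assumption: blue channel ~ NIR, red channel ~ visible red.
import numpy as np

def ndvi_red_filter(rgb):
    """rgb: float array of shape (H, W, 3), values in 0..1."""
    nir = rgb[..., 2].astype(float)   # blue channel, treated as NIR
    vis = rgb[..., 0].astype(float)   # red channel, treated as visible
    return (nir - vis) / (nir + vis + 1e-9)  # small epsilon avoids divide-by-zero

# Two illustrative pixels: a bright-NIR "vegetation" pixel and a balanced one.
img = np.array([[[0.2, 0.1, 0.8], [0.5, 0.4, 0.5]]])
print(ndvi_red_filter(img).round(2))
```

The differences between the three tools come after this step: which channels they treat as NIR and VIS, whether they rescale the channels first, and which color map they apply to the result.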

Mobius__camera__actual_image.jpg
Above: Rutvij's photo with the Infragram Point & Shoot. This is a Mobius ActionCam with the internal IR block filter replaced with a Rosco Fire red filter.

Converted_from_public_lab_website.png
Above: NDVI image processed at infragram.org.

Fiji_plugin_defaults.JPG
Above: NDVI image processed with Ned Horning's Photo Monitoring plugin for Fiji.

Converted_from_python.jpg
Above: NDVI image processed with python code from https://github.com/p-v-o-s/infragram-js.



7 Comments

Great framing, Chris, thank you. I have the same question.



Are those three NDVI images using the same color map? Perhaps an option to create NDVI without the colormap might clear things up a bit.




Thanks wmaiouiru, that's a good point. Each of the three NDVI images uses a different color map. I don't know what color map the Python code uses, but that method is more or less deprecated, and I assume few people use it. The infragram.org result above uses a different color map than the Fiji result, but infragram.org does allow you to use a color map very similar to the one used in the Fiji result. At infragram.org/sandbox/, you must choose "Fastie colormap" under "3. COLOR." The other difference between the Fiji result and the infragram.org result is that the Fiji plugin allows you to "stretch the histograms" of the VIS and/or NIR channels. This somehow magically makes the NDVI results much more "realistic." It is not possible to do this trick at infragram.org.
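The "histogram stretch" idea can be sketched as rescaling each band so that most of its pixel values span the full 0..1 range before NDVI is computed. The percentile cutoffs below are my own choice for illustration, not the Fiji plugin's documented behavior:

```python
# Sketch of a percentile-based histogram stretch applied to each band
# before computing NDVI. Cutoff percentiles (2/98) are illustrative.
import numpy as np

def stretch(band, lo_pct=2, hi_pct=98):
    lo, hi = np.percentile(band, [lo_pct, hi_pct])
    return np.clip((band - lo) / (hi - lo + 1e-9), 0, 1)

nir = np.array([0.40, 0.45, 0.50, 0.55, 0.60])  # a dull, low-contrast NIR band
vis = np.array([0.30, 0.32, 0.35, 0.38, 0.40])  # a dull visible band

raw = (nir - vis) / (nir + vis)
stretched = (stretch(nir) - stretch(vis)) / (stretch(nir) + stretch(vis) + 1e-9)
print(raw.round(3), stretched.round(3))
```

The stretched version spreads the NDVI values over a wider range, which is why the result looks more "realistic," but as noted below the absolute values are no longer comparable across photos.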

So if the infragram.org result has the Fastie colormap applied (and is reshaped to its original aspect ratio), and the Fiji result does not have the histogram stretch applied, the results look like this:

Fiji_plugin_noStretch.JPG InfragramFstieShpe.jpg
First image is from Fiji with no histogram stretch and the NDVI_VGYRM.lut color map applied. Second image is from infragram.org with the red filter preset, the Fastie colormap, and the aspect ratio restored.

It might be possible to use Infragrammar in the infragram sandbox to simulate a histogram stretch, but I couldn't figure out how to do that. Most of the people who have the facility to write Infragrammar expressions might be more likely to just use Fiji (which is free and awesome). I assume the histogram stretch trick is on the list of features to add to infragram.org. However, this trick is just that -- it makes the result look more like real NDVI, but the NDVI values are not necessarily comparable to legacy NDVI or NDVI from other photos or other cameras.

So a more important upgrade to infragram.org might be to add the ability to calibrate the NDVI result using targets in the photographed scene. But the histogram stretch might still be handy for when there are no targets (and finding targets might be an obstacle).
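Target-based calibration could be sketched as fitting a linear mapping from camera pixel values to known reflectance for each band, then computing NDVI from the calibrated values. The target reflectances and pixel values below are made-up numbers for illustration only:

```python
# Sketch of target-based calibration: fit reflectance = gain * pixel + offset
# from two targets of known reflectance, then apply it to the whole band.
# All numbers here are hypothetical.
import numpy as np

pixel_vals = np.array([0.20, 0.80])   # mean pixel values of dark and bright targets
reflectance = np.array([0.05, 0.90])  # their known reflectances in this band

# Fit a straight line through the target measurements.
gain, offset = np.polyfit(pixel_vals, reflectance, 1)

def calibrate(band):
    return gain * band + offset

cal = calibrate(np.array([0.20, 0.50, 0.80]))
print(cal.round(3))
```

With a separate fit for the VIS and NIR bands, NDVI computed from the calibrated reflectances would be comparable across photos and cameras in a way that a histogram stretch cannot guarantee.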

Chris



The link above to the Python code for converting photos to NDVI images is not a link at Public Lab. However, a version of this Python code is available at the Public Lab GitHub repository: https://github.com/publiclab/infrapix/blob/master/README.md. It appears that the code is intended for blue-filtered cameras only and has a single color-mapping scheme. Rutvij found this code and used it to make the NDVI image below.

Python_code.JPG
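The blue-filter case swaps the channel roles relative to the red-filter sketch earlier in this thread. A minimal sketch, assuming the usual blue-filter assignment (red channel records mostly NIR, blue channel records visible blue; I have not verified that this is exactly what the infrapix code does):

```python
# Sketch of NDVI for a blue-filter camera.
# Assumption: red channel ~ NIR, blue channel ~ visible blue.
import numpy as np

def ndvi_blue_filter(rgb):
    """rgb: float array of shape (H, W, 3), values in 0..1."""
    nir = rgb[..., 0].astype(float)   # red channel, treated as NIR
    vis = rgb[..., 2].astype(float)   # blue channel, treated as visible
    return (nir - vis) / (nir + vis + 1e-9)

img = np.array([[[0.8, 0.1, 0.2]]])  # one bright-NIR pixel
print(ndvi_blue_filter(img).round(2))
```

Running blue-filter code on a red-filter photo (or vice versa) flips the sign of the result, which is one plausible source of the confusing differences between tools.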

The two GitHub repositories with code to convert Infragram photos to NDVI are nice resources for those who know what to do with them. They are not intended as resources for beginners and don't have comprehensive user manuals. If people find them and want to use them they should be able to contact the authors to ask questions, but Public Lab probably does not have any obligation to make these resources more user-friendly. The fact that this Python code produces different results from infragram.org or the Fiji plugin is confusing but not surprising.

The path of least resistance for new users of infragram cameras is probably infragram.org. I don't know what the user experience is like there for beginners. People seem to be doing lots of interesting things there, but it's hard to tell whether they are getting the results they are after.

If I were selling Infragram cameras so that customers could learn something from NDVI images, I would hesitate to send them to infragram.org to process their photos without supplying an explanation of the limitations and alternatives.

I guess it's not possible to know how many people use the Photo Monitoring Fiji plugin or what success people have with it.



I agree with Chris. Infragram.org doesn't reflect current best practices or our research interests either, as Ned continues to advance calibration in Fiji.

I've received staff push-back on directing new users to Fiji, but I want to deprecate the Infragram.org workflow and have the project focus back on Fiji.



A couple of comments ago I included NDVI images from infragram.org and Fiji that are essentially the same. This sort of confirms that infragram.org can be used to produce results that are similar to results from Fiji and are reasonably meaningful. However:

  • There is no way for a typical user to know that the particular NDVI image that I included is probably the most meaningful NDVI image that you could get from that photo at infragram.org. There are lots of options at infragram.org, and Infragrammar allows infinite tweaking of variables. So I suspect that 90% of users have no idea whether any particular result they get is a good result or whether it could be made more meaningful.
  • The lack of histogram stretching means that the NDVI results from infragram.org will never have the dynamic range that can be derived from single camera infrared photos. So results from infragram.org cannot match those from Fiji for ease of interpretation.

The first point above could be addressed with a good user manual for infragram.org. However:

  • There are a lot of options at infragram.org (legacy vs. sandbox, HSV vs. RGB, etc.) so writing a good user manual is somewhat involved.
  • It is likely that an important source of unsatisfying results from infragram.org is terrible infragram photos (dull in, dull out). So an infragram.org user manual might not help many users who have not learned how to get the best photos from their infrared cameras and don't understand the severe limitations on photo quality imposed by certain cameras. A separate user manual is needed to describe using various kinds of cameras with different filters in different light environments to achieve different objectives.



Hi, Chris - we're (finally) hoping to address this in a few different ways, and thanks for your excellent and thoughtful suggestions. I've been breaking up the docs for some Infragram work into smaller Activities (starting with https://publiclab.org/wiki/infragram-point-shoot) and also posting new documentation including a video walkthrough. While long overdue, I think we can do a lot now (with support from NASA and Google) to put some capacity towards this.

Another parallel track is that we're making it easier (in the back-end coding sense) to modify and improve image processing with an eye towards a better workflow for Infragram.org, using Image Sequencer, which I showed you at LEAFFEST:

https://github.com/publiclab/image-sequencer/

The demo for doing NDVI is here, although it's just an early prototype:

https://publiclab.github.io/image-sequencer/examples/#steps=ndvi-red,segmented-colormap

But the sequencer architecture allows for new modules, and a histogram stretch could be one of them. More soon, and thanks again!


