cfastie asked on December 05, 2015 15:59
2,394 views | 0 answers | #12482
Rutvij bought an infrared Mobius camera (Infragram Point & Shoot) from the Public Lab store. He took a photo of a tree and processed it three ways: at infragram.org, with Ned's Photo Monitoring plugin for Fiji, and with Python code that predates infragram.org. The results from each method are quite different, and he asked me why. I thought this was a good question. Which method is best, and are the others unreliable? If so, should they be removed as options for users who can find links to them at the Public Lab site? Should the process of getting reliable NDVI results be made less confusing for first-time users?
Above: Rutvij's photo with the Infragram Point & Shoot. This is a Mobius ActionCam with the internal IR block filter replaced with a Rosco Fire red filter.
Above: NDVI image processed at infragram.org.
Above: NDVI image processed with Ned Horning's Photo Monitoring plugin for Fiji.
Above: NDVI image processed with python code from https://github.com/p-v-o-s/infragram-js.
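For reference, all three tools compute the same underlying index, NDVI = (NIR − VIS) / (NIR + VIS), from two channels of the photo. Here is a minimal NumPy sketch of that step; the channel assignments are my assumption for a red-filter camera like this one (blue channel recording mostly NIR, red channel mostly visible red — a blue-filter camera would swap them), and the tiny synthetic "photo" is purely illustrative:

```python
import numpy as np

def ndvi_red_filter(rgb):
    """NDVI for a red-filter infragram photo.

    Assumes (as with the Rosco Fire red filter) that the blue channel
    records mostly near-infrared and the red channel mostly visible
    red light; swap the channels for a blue-filter camera.
    """
    rgb = rgb.astype(float)
    nir = rgb[..., 2]                          # blue channel ~ NIR
    vis = rgb[..., 0]                          # red channel ~ visible red
    return (nir - vis) / (nir + vis + 1e-9)    # tiny epsilon avoids 0/0

# Synthetic two-pixel "photo": vegetation reflects much more NIR than
# visible red, so it should score high; bare soil the opposite.
photo = np.array([[[ 50, 0, 200],    # vegetation-like pixel, NDVI ~ 0.6
                   [200, 0,  50]]])  # soil-like pixel, NDVI ~ -0.6
print(ndvi_red_filter(photo).round(2))
```

The arithmetic is identical everywhere; the visible differences between the three results come from which channels are used, any preprocessing (like a histogram stretch), and the color map applied afterward.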
Great framing, Chris, thank you. I have the same question.
Are those three NDVI images using the same color map? Perhaps an option to create NDVI without the color map might clear things up a bit.
Thanks wmaiouiru, that's a good point. All three NDVI images use a different color map. I don't know what the Python code image uses for a color map, but that method is more or less deprecated, and I assume few people use it. The infragram.org result above uses a different color map than the Fiji result, but infragram.org does allow you to use a color map very similar to the one used in the Fiji result. At infragram.org/sandbox/, you must choose "Fastie colormap" under "3. COLOR." The other difference between the Fiji result and the infragram.org result is that the Fiji plugin allows you to "stretch the histograms" of the VIS and/or NIR channels. This somehow magically makes the NDVI results much more "realistic." It is not possible to do this trick at infragram.org.
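To see how much the color map alone changes the appearance, note that the NDVI value only picks an index into a lookup table; swap the table and the same numbers produce entirely different colors. A toy sketch (the two 4-entry LUTs here are made up; real maps like Fiji's NDVI_VGYRM.lut or the "Fastie colormap" use 256 entries, but the principle is the same):

```python
import numpy as np

# Two toy color maps: one runs blue->green->yellow->red, the other is
# just a grayscale ramp.  Both are illustrative, not the real LUTs.
lut_a = np.array([[0, 0, 255], [0, 255, 0], [255, 255, 0], [255, 0, 0]])
lut_b = np.array([[255, 255, 255], [170, 170, 170], [85, 85, 85], [0, 0, 0]])

def colorize(ndvi, lut):
    # Map NDVI from [-1, 1] onto LUT indices 0 .. len(lut)-1.
    idx = ((ndvi + 1) / 2 * (len(lut) - 1)).round().astype(int)
    return lut[np.clip(idx, 0, len(lut) - 1)]

ndvi = np.array([-1.0, 0.0, 1.0])
print(colorize(ndvi, lut_a))   # the same NDVI values...
print(colorize(ndvi, lut_b))   # ...rendered as completely different colors
```

This is why comparing the three results visually says little about the underlying numbers unless the same color map is applied to all of them.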
So if the infragram.org result has the Fastie colormap applied (and is reshaped to its original aspect ratio), and the Fiji result does not have the histogram stretch applied, the results look like this:
First image is from Fiji with no histogram stretch and the NDVI_VGYRM.lut color map applied. Second image is from infragram.org with the red filter preset, the Fastie colormap, and the aspect ratio restored.
It might be possible to use Infragrammar in the infragram sandbox to simulate a histogram stretch, but I couldn't figure out how to do that. People with the facility to write Infragrammar expressions are probably more likely to just use Fiji (which is free and awesome). I assume the histogram stretch trick is on the list of features to add to infragram.org. However, this trick is just that -- it makes the result look more like real NDVI, but the NDVI values are not necessarily comparable to legacy NDVI or NDVI from other photos or other cameras.
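For anyone curious what "stretching the histogram" does, a common form is a linear percentile stretch applied to each channel before computing NDVI: values are rescaled so the bulk of the channel spans the full 0–1 range. This sketch shows the idea only; the Photo Monitoring plugin's exact method may differ, and the 2/98 percentile bounds are illustrative choices, not the plugin's:

```python
import numpy as np

def stretch(channel, low_pct=2, high_pct=98):
    """Linear percentile stretch of one channel to the range [0, 1].

    A generic version of 'stretching the histogram'; the Fiji plugin's
    implementation may use different bounds or a different method.
    """
    lo, hi = np.percentile(channel, [low_pct, high_pct])
    return np.clip((channel - lo) / (hi - lo + 1e-9), 0, 1)

# A channel whose values are crowded into a narrow band...
raw = np.linspace(0.4, 0.6, 100)
stretched = stretch(raw)
print(raw.min(), raw.max())              # narrow range before
print(stretched.min(), stretched.max())  # full 0..1 range after
```

Because the stretch parameters depend on each photo's own histogram, two stretched photos are rescaled differently, which is exactly why the resulting NDVI values are not comparable across photos or cameras.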
So a more important upgrade to infragram.org might be to add the ability to calibrate the NDVI result using targets in the photographed scene. But the histogram stretch might still be handy for when there are no targets (and finding targets might be an obstacle).
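The calibration idea above can be sketched very simply: if targets of known reflectance appear in the photo, fit a linear map from raw pixel values to reflectance and apply it to each channel before computing NDVI. Everything below is hypothetical (the target reflectances, the pixel values, and the simple two-point linear fit); Ned Horning's actual calibration work in Fiji is more sophisticated than this:

```python
import numpy as np

# Hypothetical scene targets: a dark and a light gray card of known
# reflectance, and the raw pixel values measured at each in one channel.
known_reflectance = np.array([0.05, 0.80])
raw_values = np.array([30.0, 210.0])

# Least-squares linear fit (exact with two points): reflectance = gain*raw + offset
gain, offset = np.polyfit(raw_values, known_reflectance, 1)

def calibrate(channel):
    """Convert raw pixel values to estimated reflectance."""
    return gain * channel + offset

print(calibrate(30.0), calibrate(210.0))  # recovers the known reflectances
```

With both the VIS and NIR channels calibrated this way, the NDVI computed from them would be anchored to physical reflectance rather than to each photo's arbitrary exposure, which is what would make results comparable across photos and cameras.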
The link above to the Python code for converting photos to NDVI images is not a link at Public Lab. However, a version of this Python code is available at the Public Lab GitHub repository: https://github.com/publiclab/infrapix/blob/master/README.md. It appears that the code is intended for blue-filtered cameras only and has one color mapping scheme. Rutvij found this code and used it to make the NDVI image below.
The two GitHub repositories with code to convert Infragram photos to NDVI are nice resources for those who know what to do with them. They are not intended as resources for beginners and don't have comprehensive user manuals. If people find them and want to use them they should be able to contact the authors to ask questions, but Public Lab probably does not have any obligation to make this resource any more user-friendly. The fact that this Python code produces different results from infragram.org or the Fiji plugin is confusing but not surprising.
The path of least resistance for new users of infragram cameras is probably infragram.org. I don't know what the user experience is like there for beginners. People seem to be doing lots of interesting things there but it's hard to tell whether they get what they are trying to get.
If I were selling Infragram cameras so that customers could learn something from NDVI images, I would hesitate to send them to infragram.org to process their photos without supplying an explanation of the limitations and alternatives.
I guess it's not possible to know how many people use the Photo Monitoring Fiji plugin or what success people have with it.
I agree with Chris. infragram.org doesn't reflect current best practices or our research interests either, as Ned continues to advance calibration in Fiji.
I've received staff push-back on directing new users to Fiji, but I want to deprecate the infragram.org workflow and have the project focus back on Fiji.
A couple of comments ago I included NDVI images from infragram.org and Fiji that are essentially the same. This sort of confirms that infragram.org can be used to produce results that are similar to results from Fiji and are reasonably meaningful. However:
The first point above could be addressed with a good user manual for infragram.org. However:
Hi, Chris - we're (finally) hoping to address this in a few different ways, and thanks for your excellent and thoughtful suggestions. I've been breaking up the docs for some Infragram work into smaller Activities (starting with https://publiclab.org/wiki/infragram-point-shoot) and also posting new documentation including a video walkthrough. While long overdue, I think we can do a lot now (with support from NASA and Google) to put some capacity towards this.
Another parallel track is that we're making it easier (in the back-end coding sense) to modify and improve image processing with an eye towards a better workflow for Infragram.org, using Image Sequencer, which I showed you at LEAFFEST:
The demo for doing NDVI is here, although it's just an early prototype:
But the sequencer architecture allows for new modules, and a histogram stretch could be one of them. More soon, and thanks again!