Public Lab Research note

Mobius Action Cam Infragram tests

by warren | April 10, 2014 15:46 | #10291

What I want to do

Based on an ongoing discussion about whether the Mobius Action Cam (a ~70 gram 5-megapixel camera) can be used to take Infragram photos, Mathew Lippincott modified one and took some test photos, comparing each to a photo from the well-documented Infragram Webcam.

Early failure

The earlier discussion seemed to suggest that typical NDVI processing does not work, in part because the Mobius test image was brighter in the visible channel than the supposed infrared channel, due to infrared leakage in all three color bands.

An alternative approach

However, further investigation with the Infragram Sandbox tool seemed to show that although there was leakage, a custom index could be used to identify differentiation between the channels which, similarly to NDVI, correlates with the presence of plants.

My attempt and results

Mathew sent me his test photos, chosen for different conditions as well as examples of "non-vegetation on a background of vegetation" and "vegetation on a background of non-vegetation", similar to the comprehensive tests done at LEAFFEST last fall.

Here are the raw images, for three different scenes. I've chosen to use the Rosco 2007 filter, as it seemed to return better results. The order is webcam, then Mobius:

webcam_moss.png mobius_moss2007.png

webcam_table.png mobius2007_table.png

webcam_play.png mobius_play2007.png

I fiddled with the indices for a while before I found something that seemed consistent and clear between the two cameras. It's very close to NDVI, but has an additive and a multiplicative term which set where the scale begins and ends on the HSV color wheel. I'd like to standardize it, but the only term I really varied was the additive one, which should set the plant/non-plant "threshold".

A basic math error had caused trouble with this section! But with some modifications, the technique seems to still be valid. You can see the original erroneous images and equations here.

HSV example: H: (R-B)/(R+B)*-4, S: 1, V: 1

In the above example, there is no additive term, and the multiplicative term is the "*-4". The additive value shifts the scale around the color wheel, while the multiplicative value expands how much of the color wheel is used to represent the data. Once we improve color mapping, this will be a lot simpler... anyway:
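To make the additive/multiplicative idea concrete, here's a minimal sketch (the function name is mine, not an Infragram API) of how such an index could be mapped onto the HSV hue, with S and V fixed at 1:

```javascript
// Sketch: map an NDVI-like index onto the HSV hue channel.
// `additive` shifts the scale around the color wheel; `multiplicative`
// stretches how much of the wheel the data occupies. Names are illustrative.
function hueFromIndex(r, b, additive, multiplicative) {
  const index = (r - b) / (r + b);           // NDVI-like ratio of the two channels
  const h = additive + index * multiplicative; // shift, then stretch
  return ((h % 1) + 1) % 1;                  // wrap into [0, 1) to stay on the wheel
}

// The example above, with no additive term and a multiplicative term of -4:
// hueFromIndex(R, B, 0, -4)
```

The wrap step is just bookkeeping: hue is cyclic, so values outside [0, 1) come back around the wheel.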

Here are the processed images, again with webcam first, Mobius second:

H:0.3-(R-B)/(R+B)*3 for webcam, -(R-B)/(R+B)*4 for Mobius:

2014-04-10T21_02_07.892Z.jpg 2014-04-10T21_04_14.282Z.jpg

H:0.35-((R-B)/(R+B)*3) for webcam, -(R-B)/(R+B)*4 for Mobius:

2014-04-10T21_08_02.010Z.jpg 2014-04-10T21_09_43.822Z.jpg

H:0.3-((R-B)/(R+B)*2) for webcam, -((R-B)/(R+B)*3) for Mobius:

2014-04-10T21_14_15.360Z.jpg 2014-04-10T21_18_33.110Z.jpg

Note: After the corrections from Chris, the images aren't AS alike as before. Discuss.

Finally, here are all three Infragram Webcam images in "conventional" NDVI, unscaled:

2014-04-10T15_39_03.413Z.png 2014-04-10T15_39_21.324Z.png 2014-04-10T15_39_46.563Z.png
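For reference, "conventional" NDVI is just the normalized difference of the NIR and visible channels; with the blue-filter setup described in this note, the red channel is treated as NIR and the blue channel as visible. A minimal per-pixel sketch (function name mine):

```javascript
// Conventional NDVI for a blue-filter infrared photo: red channel ≈ NIR,
// blue channel ≈ visible light. Output ranges from -1 to 1; values above 0
// conventionally suggest vegetation.
function ndvi(nir, vis) {
  if (nir + vis === 0) return 0; // guard against division by zero on black pixels
  return (nir - vis) / (nir + vis);
}
```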

Questions and next steps

The differentiation I see here looks good to me, in that the things that "should" read as alive and productive do, with relatively few false positives (sky, book). I'm happy that we're back to using something very close to NDVI again; I wonder why that didn't work on earlier examples. Perhaps it's related to very bright sunlight causing overexposure or clipping?

To me, the biggest question remaining is how much we "give up" by allowing the NDVI to be brightness-corrected, which is essentially what I'm doing with these images. They still display useful information if they are not stretched, but you really see more detail if they are. The zero threshold that is supposed to distinguish living from non-living gets thrown out here, but is that simply a reflection of the fact that we can't control relative exposure between color channels anyway? Is this really much different from choosing a more "aggressive" and customized color lookup table (LUT)?
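The "brightness correction" I'm describing is essentially a levels stretch. As a minimal sketch (assuming NDVI values in a flat array; this is not actual Infragram code), linearly remapping the observed min/max onto the full [-1, 1] range looks like this:

```javascript
// Levels/histogram stretch: linearly remap an array of NDVI values so its
// min and max span the full [-1, 1] range. This boosts visible detail but
// discards the absolute zero threshold.
function stretch(values) {
  const min = Math.min(...values);
  const max = Math.max(...values);
  if (max === min) return values.slice(); // flat input: nothing to stretch
  return values.map(v => -1 + 2 * (v - min) / (max - min));
}
```

Note how a narrow cluster of values like [0.1, 0.2, 0.3] gets pulled out to span the whole range, which is exactly what makes the zero threshold meaningless afterward.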

Please pick all my assumptions apart and tell me what you think!

Update: here are all the links to the images:


I think it might be possible to work on a simple calibration process that could be used to get decent NDVI values. I don't think there is much a user can modify with these cameras regarding settings, and if that's the case, we could come up with a formula tuned for the Mobius Action Cam.

In the next couple weeks I hope to break out my cameras again to pick up calibration tests where I left off last year and would be happy to work with the Action Cam if someone has one they can send me for a week or two.


It looks like you've done a good job matching the NDVI index from the two cameras. It's hard to evaluate the meaningfulness of the NDVI images because 1) there is no RGB photo, so I can't tell which parts are healthy green plants and which are not, and 2) the color gradient used is not displayed.

I don't know whether finding a way to display plant health information with the HSV color wheel can be translated into applying a standard look up table for colors. Will each NDVI image created use a slightly different arc of the color wheel? That would make it hard to compare images.

I don't think the zero threshold between non-plant and plant NDVI values is important. Without a rigorous calibration procedure, consumer cameras will never put the NDVI values on the correct absolute numbers, and never scale them correctly. So stretching and sliding the values will usually be part of the process (that's part of what the custom white balance is doing). The key is having a device that allows quantifying the difference between NIR and visible light with some usable dynamic range. I can't tell if you have that or not.

What is the order of operations for JavaScript? I thought division happened before addition. If so, your formulas are not like NDVI. So addition must come first (this is why I have to use so many parentheses).
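For what it's worth, JavaScript follows standard arithmetic precedence: `*` and `/` bind tighter than `+` and `-`, so an expression like `0.3-(R-B)/(R+B)*3` evaluates the ratio and the scaling before the subtraction, exactly as if fully parenthesized:

```javascript
// JavaScript precedence demo: * and / bind tighter than + and -.
// 0.3 - (R-B)/(R+B)*3 parses as 0.3 - (((R - B) / (R + B)) * 3).
const R = 0.8, B = 0.2; // sample channel values, for illustration only
const implicit = 0.3 - (R - B) / (R + B) * 3;
const explicit = 0.3 - (((R - B) / (R + B)) * 3);
// implicit and explicit are identical
```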


Mathew only sent me one raw image:



Corrections made re: order of operations -- thank you, Chris, for a sharp eye and a clearer memory of how to do basic arithmetic! And you said you couldn't code.

It's not AS good as it was. But I think it still works, and that I could've worked harder to get the images to match by using different stretches. Perhaps if we build a really solid levels-stretching interface, this can work. Thoughts?

Regarding the HSV color wheel -- it's an imperfect way to do color mapping for a variety of reasons. Here, I used it because we can quickly get color within an Infragrammar approach. I think an Infragrammar version of linear color mapping, where you can map values to a lookup table, perhaps selected from a dropdown of different LUTs, would be much better, and we should code that up pronto.


Is the levels stretching you are referring to the same thing as histogram stretching? If so, it should be high priority.

If the look up table feature gets incorporated into Infragrammar, maybe we should revisit the lut I have been using and promoting. To make it possible to interpret and troubleshoot everybody's NDVI results, it would be good to have a single lut. Otherwise, the NDVI images have limited meaning, especially because few people post a key to their NDVI images.

If there will be a single lut, it should be a good one. One shortcoming of the one I have been using is that it is not friendly to those with protanopia. Green represents low plant vigor and red represents high plant vigor, so the images are pretty useless to those with red-green color blindness.


Also, I wonder if making the threshold between plant and non-plant, which is currently at NDVI=0, closer to NDVI=0.1 would be better. With the current lut, green can be low vigor plant (0.1 to 0.2) or it can be non-plant (0.0 to 0.1).

Another useful thing is to have distinct colors for the 0 and 255 values, so you know if your protocol is producing lots of them (which usually means a serious artifact is creeping in). Here is a lut similar to the one above, with white or black for the end members.
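A hedged sketch of that idea (the colors and thresholds here are placeholders, not Chris's actual lut): scale NDVI to a byte, reserve distinct colors for the exact end members, and put the plant/non-plant boundary at 0.1 as suggested above:

```javascript
// Illustrative lookup: map NDVI in [-1, 1] to a named color, with distinct
// colors for the clipped end members (0 and 255 after 8-bit scaling).
// Colors and thresholds are placeholders for a real, agreed-upon lut.
function ndviToColor(ndvi) {
  const byte = Math.round((ndvi + 1) / 2 * 255); // scale [-1, 1] to [0, 255]
  if (byte <= 0) return 'black';                 // clipped low end
  if (byte >= 255) return 'white';               // clipped high end
  if (ndvi < 0.1) return 'gray';                 // non-plant, per the 0.1 threshold idea
  return ndvi < 0.5 ? 'yellow' : 'blue';         // low vs. high vigor, avoiding red/green
}
```

Clipped pixels jump out immediately, and the low/high vigor colors avoid the red/green pairing that causes trouble for viewers with protanopia.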

