
# Mobius, stripped

by cfastie | 06 Sep 01:13

Image above: The three channels in part of an infrablue photo taken with a Mobius ActionCam modified with a Rosco filter. All three channels record lots of infrared light. (The images are dark because each has about a third of the brightness of the original photo.)

Carl just posted a great description of his first results with a Mobius ActionCam modified to take infrablue photos. The \$70 camera takes very nice normal photos with good color and sharpness. Carl did not say which infrablue filter he used, but it might be a Rosco #2007. The color of the infrablue photo is a little different from most other infrablue photos I have seen, and looks bright and crisp. The image above indicates that all three channels in this image have lots of infrared information in them -- tree leaves and grass look like they are covered in snow. As usual, the green channel is a little darker, suggesting that it is recording less infrared. But none of the channels looks like it would be a good source of information about the visual part of the spectrum that plants use.

Histograms for the grass area in the infrablue photo taken by the Mobius ActionCam. The blue channel for healthy grass is brighter than the red channel, suggesting that there is more IR recorded in the blue channel. The same is true for the tree foliage (not shown).

The histogram of the infrablue photo reveals something I have not seen before. The problem with many inexpensive CMOS cameras is that there is not much separation between the blue and red channels -- both have lots of IR in them. With the Mobius camera, there is separation between them, but the blue channel is brighter than (to the right of) the red, presumably because it has more IR in it. That is new.

Output from Infragram.org. All the NDVI values are below zero.

So when NDVI is calculated, the difference between R and B is negative, and all NDVI values are therefore negative. This could be adjusted, but there may not be much reason to do so. The NDVI values might appear to have meaning because healthy foliage has higher values than other things, but that seems to be an artifact. It appears that, as in most other small CMOS cameras tested, all the channels are contaminated with IR, and no channel is a good representative of the light used by plants for photosynthesis.
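To make the sign problem concrete, here is a minimal numpy sketch with made-up pixel values for foliage in an infrablue photo, where the red channel stands in for NIR and the blue channel for visible light. When B is brighter than R, every NDVI value comes out below zero:

```python
import numpy as np

# Hypothetical 8-bit pixel values for a foliage patch in an infrablue
# photo. Infrablue NDVI treats the red channel as NIR and the blue
# channel as visible light.
nir = np.array([180.0, 200.0, 190.0])  # red channel (stands in for NIR)
vis = np.array([210.0, 230.0, 215.0])  # blue channel, brighter than red here

ndvi = (nir - vis) / (nir + vis)
# Because B > R at every pixel, every NDVI value is negative.
```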

I wonder if, although the absolute NDVI values seem not to be helpful, the images may still be able to distinguish between healthy and unhealthy vegetation. It wouldn't be the same as NDVI, but if we're looking at, for example, IR+R, IR+G, and IR+B, might we still be able to extract meaningful information from the images -- just with different band arithmetic?


There might be some math magic that will help, but that is beyond me. It will be important to check that whatever math is done, the resulting index does not just highlight the high reflection of NIR from foliage. Any IR image will make leaves look bright, but that effect has limited ability to reveal plant health for the reasons discussed here: https://groups.google.com/forum/#!topic/plots-infrared/OJXWI2geBfw.

In your set (IR+R, IR+G, and IR+B), IR is probably different in each channel, so there are really six variables, and you don't know what any of them are. Good luck.

Hey, just bumping this back up -- Mathew L and I did a quick re-try on this using Infragram.org, with the expression ((R-B)/(R+B))*-1 as the hue in HSV mode:

http://infragram.org/i/5329dd53fc2b472433000c07?src=1395252442200_imag0015.jpg&mode=infragrammar_hsv&h=%28%28R-B%29/%28R+B%29%29*-1&s=1&v=1

It looks OK, so we tried a monochrome difference between B and R, boosted threefold: 1 - 3*(B-R):

http://infragram.org/i/5329de1afc2b472433000c08?src=1395252761947_imag0015.jpg&mode=infragrammar_mono&m=1-3*%28b-r%29
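The two Infragram expressions above can be sketched in numpy with made-up 0-1 channel values (b brighter than r over foliage, as in the histograms):

```python
import numpy as np

# Hypothetical normalized (0-1) values standing in for the photo's
# red and blue channels; b > r, as the histograms above show.
r = np.array([0.70, 0.20, 0.65])
b = np.array([0.90, 0.25, 0.85])

# Hue-mapped index from the first link: ((R-B)/(R+B))*-1
hue_index = ((r - b) / (r + b)) * -1

# Monochrome difference from the second link, boosted threefold
mono = 1 - 3 * (b - r)
```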

There's a false positive on the pavilion roof but it seems like there is measurable and fairly low-noise differentiation between R and B channels. We would of course be REALLY HAPPY if we could use a Mobius for the Point and Shoot Infragram... Chris, can you help battle-test this idea? Is our methodology flawed? Does this merit more tests?


More tests are definitely merited.

First, I was assuming that Carl was using a Rosco blue filter, and my original conclusions were based on that. That's one thing to check.

Second, your new images don't seem to add much new information. Carl's infrablue photo, from a camera with no IR block filter, shows very bright plant foliage in all three channels. That is almost certainly because there is lots of NIR in all three channels. So just about any image you make from that photo is going to show plants with bright foliage, and some non-foliage things that are not as bright. It looks like good discrimination between plant and non-plant, and it might be pretty good discrimination, but it is probably not any better than a pure NIR photo. There is much value in a pure NIR photo, but it is not NDVI. It is not even false color infrared, because that uses two visible light channels, and this camera apparently does not have even one.

A good area for further research is suggested by the histogram above showing that foliage in the blue channel is the brightest of the three channels. In an infrablue photo, the blue channel is supposed to be the one with mostly visible light. But that camera is apparently recording a LOT of NIR in the blue channel. Therefore, it might be that using a Wratten 25A filter instead of a blue filter would work better. A Wratten 25A filter allows zero blue or blue-green light to reach the sensor, so the only thing getting to the blue channel is NIR. The red channel will have a mix of red and NIR, and maybe have a high enough proportion of red to be usable as a VIS channel.

Putting a Wratten 25A in a Mobius is certainly what I would do if I had one, and if there were any green plants within 200 miles of me.

I am new to this field, and am busy trying to mod a Mobius by removing the IR filter and adding a blue (red-blocking) filter. I have made a side-by-side mount to allow two Mobii to take the same image.

Will the two images allow more processing? It seems that by working with the six channels there is more potential....

Also, if there are two cameras, is it necessary to add a filter (after removing the IR filter)? Could useful results be obtained by subtracting the filtered RGB channels from the unfiltered ones?


If you have two cameras, and your goal is NDVI or false color infrared, the best way to get what you need is one unmodified camera and one with the IR block filter replaced with a visible light block filter that passes only NIR. The Wratten 87 is a good choice, but there are lots of other long-pass infrared filters. Another option is a Wratten 25A which provides a pretty pure NIR channel in the camera's blue channel, but the Wratten 87 gives you all three channels of NIR, so three times more light (although that may be irrelevant).

Thanks Chris, but I'm not sure I agree that we're just seeing brightness due to infrared. I'm not saying this is NDVI -- it seems to work in the reverse, as you pointed out -- but that there is a systematic difference between the R and B channels. I'm not claiming I know the reason... we're in empirical-land here. But if it's consistent and can highlight vegetation, it could be a useful alternative to NDVI. Clearly more research is needed, and Mathew and I were thinking the same thing about using a red filter instead of a blue one.

There are green things here in Portland!

In short, while you've clearly shown there is IR in all 3 channels, there is still a reasonably good signal/noise ratio difference between 2 channels, and that difference does seem to correlate with the presence of plants... even if it does so in the reverse of how NDVI works. That all channels are reasonably bright doesn't change the fact that there's a difference, it only reduces the amount of difference we can measure (dynamic range).

I see what you mean when you say that the difference between the blue and red channels is correlated with the presence of plants. Your monochrome image illustrates that. When the difference is big (brighter), those pixels are usually plants. And that suggests that the difference is at least related to the difference between the reflection of NIR and visible light. We know the difference between NIR and visible should correlate with plants (plants absorb visible but not NIR).

On the other hand, if the difference between the R and B channels in Carl's infrablue photo were due mostly to NIR, i.e., channel B just happened to have more NIR than channel R did, then we might not expect the result you found that plants are associated with a greater B-R difference. Instead we might expect the greatest difference between channel B and channel R to be randomly associated with various things in the image. Do you think we can rule out that the difference between the R and B channels is due to channel B having more NIR than channel R?

If we can't, it might be that since plants will reflect much more NIR than most other surfaces will, and that NIR ends up in both the R and B channels, the plant pixels will have the biggest values for both R and B and the difference between them is therefore likely to be bigger. Or maybe that's just the Bulleit talking.


Thanks cfastie. I don't think I understand the need for the Wratten filter. My understanding is:

A camera with just the IR filter removed (i.e. no filter added thereafter) will give (N+R,N+G,N+B). If the images from the two cameras can be aligned pixel-perfect, then subtraction will yield (N,N,N) for each pixel. Then you have got (N,R,G,B) as desired.

To be more general, I don't know that each sensor type is equally sensitive to NIR, so strictly, the camera with no IR filter will yield: (R+N1,G+N2,B+N3)

The subtraction will then yield (N1,N2,N3). If N1 = N2 = N3, we are back to the previous case. If not, it's even more interesting, because that will give 5 or 6 channels (frequency bands) of data per pixel!
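The subtraction can be sketched in numpy with hypothetical values, assuming the two photos are aligned pixel-perfect and identically exposed (both big "ifs" discussed below):

```python
import numpy as np

# Hypothetical normalized pixel values for one pixel from each camera.
c1 = np.array([0.30, 0.45, 0.20])  # unmodified camera: (R, G, B)
c2 = np.array([0.55, 0.70, 0.48])  # IR filter removed: (R+N1, G+N2, B+N3)

nir = c2 - c1  # per-channel NIR estimate: (N1, N2, N3)
# If the sensor's NIR response were equal across channels, the three
# values would match; here the blue channel records a bit more NIR.
```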

So why add a filter in the two camera setup? What am I missing here?


That's a really clever idea, and I think it might work. However, it assumes that:

1. The photos must be simultaneous if anything in the scene is changing. That can be done.
2. The IR block filter cannot change the amount of visible light reaching the sensor. That assumption is probably not met, although most IR block filters probably block only very little R, G, or B.
3. The exposure has to be such that the same amount of RGB is captured by equivalent pixels in both cameras. I don't know how you would do that. You would have to know how much of the light reaching the sensor was visible versus NIR. If you knew that, you would already have the answer. But you can probably get a facsimile of NDVI, which is all we ever get anyway.

It's an easy thing to test if you have a camera with no IR block filter. Take some test photos with the unmodified and full spectrum camera, then put a long pass 720 nm filter in front of the full spectrum camera and take a third photo. Compute an NDVI image using both methods. Report back.

I too am interested to see whether a non-modified camera and a camera with the IR filter removed can be used to calculate NDVI. Could you please clarify point #2 and what you mean when you say that the IR block filter cannot change the amount of visible light reaching the sensor?

To compute NDVI, you have to know the relative amount of light in a visible and an NIR band being reflected from plant leaves. The difference between the two bands (for each pixel) is the basis for NDVI. The system that records how bright each band is should not have completely independent methods of recording them. Generally there is more NIR than red reflected from leaves, but if one camera has a different filter or a different exposure setting, the value for red in one camera could be larger than the value for NIR in the other. NDVI from such a system would be meaningless.

If you want to subtract the R value determined by camera #1 (unmodified) from the R+NIR value determined by camera #2 (modified) in order to get a value for NIR, the R values from both cameras have to be equivalent. So if the IR block filter in camera #1 blocks some R, that value for R will be lower than the R value from camera #2 which has no IR block filter.

But the more important obstacle is point #3. How do you set the exposure on the two cameras so that the amount of R reaching the sensor is the same in each camera? There is an unknown amount of NIR in the R channel of the modified camera, so setting the exposure of the two cameras the same will always reduce the amount of R being recorded compared to the unmodified camera. How can you know how much this reduction is?

This problem does not go away if you replace the IR block filter with a visible-blocking IR pass filter (passes only NIR). The standard protocol is to take well exposed photos of the same scene with each camera and use the R value from the unmodified camera as R and any channel from the IR modified camera as NIR. But the particular exposure of those two photos determines the relative brightness of the two channels of interest. Modifying the exposure settings in one camera will change the relative brightness of R and NIR and change the value of NDVI.

In fact, it's hard to understand how any of these two-camera systems give such good NDVI results.


Thanks cfastie. I understand now the exposure issue you raise.

But you seem to assume the different exposures are an unknown, which it strikes me is not the case.

So how about this. Given:

- C1: unmodified camera, yields (R,G,B) for each pixel
- C2: IR filter removed, yields (R2,G2,B2)

C1's exposure is e1 seconds. C2 will have more energy on the sensor, hence a shorter exposure e2, where e2 < e1.

So C2 will have, in the simple case: (N'+xR, N'+xG, N'+xB), where 0 < x < 1. The exposure will affect N as well, so taking N = N'/x, C2 will have x(N+R, N+G, N+B), assuming that the exposure time affects all frequencies equally (seems reasonable?).

The factor x is deducible from the relative exposure: x = e2/e1, assuming a linear relationship between exposure and sensor reading (is this also reasonable?).

So, one extracts the exposures from the exif data, and calculates x as above.

Then N can be calculated from R2/x - R = (xN + xR)/x - R = N + R - R = N

Putting this together with C1 yields: (N,R,G,B)

In the more complex case (N1,N2,N3) = C2/x - C1, where C1 and C2 are 3-tuples

f-stop is not relevant, because the Mobius has no iris.
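The whole exposure correction can be sketched with hypothetical numbers, assuming a linear sensor response and that exposure time (read from EXIF) is the only difference between the two captures:

```python
# Hypothetical exposures read from EXIF data.
e1 = 1 / 60   # C1 (unmodified camera) exposure, seconds
e2 = 1 / 120  # C2 (no IR filter), shorter because more light hits the sensor
x = e2 / e1   # scale factor between the two captures

n_true = 0.35            # NIR value we hope to recover (made up)
r1 = 0.40                # R recorded by C1 (made up)
r2 = x * (n_true + r1)   # C2's red channel: x*(N + R)

n = r2 / x - r1          # the derivation above: recovers n_true
```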

How does that look to you?


> ...the plant pixels will have the biggest values for both R and B and the difference between them is therefore likely to be bigger.

Yeah -- the idea is that since plants have large NIR reflectivity, which shows up in both R and B, and since we're just subtracting, the difference scales with brightness. So if B is 2x R, a pixel with R=2 gets B=4, compared with a pixel with R=5 which gets B=10. It's multiplicative.

How could we prove that hypothesis wrong? We'd have to divide rather than subtract, so that the differences are proportional. Then any differences remaining would have to be due to the materials in the scene, right?

Here's R/B -- which shows that the relationship is not linear:

http://infragram.org/i/5339ef2dfc2b4724330011f5?src=1396305553918_carl_large.jpg&mode=infragrammar_mono&m=R/B

I think that's not bad evidence. Please help poke a hole in this theory too, Chris!!! Mathew is also going to take a lot of test photos this week using the Mobius (with this new technique), an Infrablue Canon, and the Infragram Webcam.
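The divide-instead-of-subtract test can be sketched with made-up values: if B were just a constant multiple of R, the ratio R/B would be flat across the image, so any pixel-to-pixel variation in the ratio has to come from the scene itself:

```python
import numpy as np

# Hypothetical red-channel values for three pixels.
r = np.array([2.0, 5.0, 3.0])

b_scaled = 2 * r                        # purely multiplicative case
b_measured = np.array([4.0, 9.0, 7.5])  # what a real scene might give

flat = r / b_scaled       # constant 0.5 everywhere: no material information
varying = r / b_measured  # varies pixel to pixel: differences beyond a scale
```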
