Public Lab Research note



Calibrating DIY NIR cameras - part 3

by nedhorning |

This is the third and final note of a series on DIY NIR camera calibration. The first part is at: http://publiclab.org/notes/nedhorning/10-21-2013/calibrating-diy-nir-cameras-part-1 and the second part is at: http://publiclab.org/notes/nedhorning/10-23-2013/calibrating-diy-nir-cameras-part-2

In this note I show NDVI images created using the calibration procedure described in part 1 to calibrate photos acquired from a dual visible/near-infrared (NIR) camera setup. I also present an example NDVI image created by calibrating directly to NDVI using a photo from a single camera with a blue filter.

For the dual-camera setup both cameras were Canon A495s. One was unaltered and the other had its hot mirror replaced by unexposed developed film so that the camera recorded mostly near-infrared light. The first image in the sequence below is a false-color image from the dual-camera setup created using the Fiji plugin. I include it so you can assess how well the visible and NIR images are co-registered. The NDVI images were created using the red and NIR bands from this false-color image. The first NDVI image below was created using the Fiji plugin and the second was created by calibrating the red band from the visible camera and the blue band from the NIR camera as described in part 1. The third image is the same as the second but displayed with a gray gradient look-up table, since the color image is confusing: red represents both NDVI values less than or equal to 0 and values near 1. Both sets of NDVI images were created from photos with white balance set using a cinder block and no exposure compensation. The last NDVI image was created from a Canon A810 with a Rosco 2008 filter using the simple (single-band) regression method described in part 1.
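The NDVI computation itself is simple once the red and NIR bands are co-registered and calibrated to reflectance. Here is a minimal per-pixel sketch in Python with NumPy (this is not the Fiji plugin's actual code, and the array values are illustrative):

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - red) / (NIR + red), computed per pixel.

    Pixels where both bands are zero are left at 0 to avoid
    division by zero."""
    nir = nir.astype(float)
    red = red.astype(float)
    denom = nir + red
    out = np.zeros_like(denom)
    valid = denom > 0
    out[valid] = (nir[valid] - red[valid]) / denom[valid]
    return out

# Illustrative 2x2 reflectance patches: top row grass-like, bottom row soil-like
red = np.array([[0.05, 0.06], [0.30, 0.35]])
nir = np.array([[0.50, 0.45], [0.35, 0.40]])
print(ndvi(nir, red))
```

Grass-like pixels (low red, high NIR) come out near 0.8 while soil-like pixels land near zero, which is the contrast the images below are meant to show.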

LookUpTableSmall.jpg

A495Vis_0_NRG.jpg

A false color image from a Canon A495 dual-camera setup using the Fiji plugin

A495Vis_0_NDVI_FloatSpectrum.jpg

NDVI from a Canon A495 dual-camera setup using Fiji plugin with no stretch

A495Vis_0_NRG_NDVI1.jpg

NDVI from a Canon A495 dual-camera setup using single regression calibration

A495Vis_0_NRG_NDVI1Gray.jpg

Same as the above image but displayed with a gray look-up-table

A810Rosco2008_Block_0_NDVI1.jpg

NDVI from an A810 with a Rosco 2008 filter using simple regression calibration

One thing to notice when comparing the dual-camera images to the single-camera image is that non-photosynthetic targets have low NDVI values in the dual-camera photos and high NDVI values in the single-camera photos. The reason for this was discussed briefly in part 1. NDVI calculated using blue light instead of red light will produce high NDVI values when the reflectance of blue light is much lower than the reflectance of NIR light, which is quite often the case with non-photosynthetic material. That effect is clear with the wood and insulation but also with leaves in the grass. This suggests that a blue filter is not ideal for calculating NDVI, since differentiating between photosynthetic and non-photosynthetic material is not always possible.
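The blue-versus-red effect is easy to see with a little arithmetic. The reflectance values below are illustrative (not taken from the part 1 table): weathered wood reflects little blue light but a lot of red and NIR, so blue-based NDVI is inflated:

```python
def ndvi(nir, vis):
    # Standard normalized difference, with whichever visible band is passed in
    return (nir - vis) / (nir + vis)

# Illustrative reflectances for a non-photosynthetic target (weathered wood)
# and healthy grass; real values vary by material.
targets = {"wood":  {"blue": 0.08, "red": 0.35, "nir": 0.45},
           "grass": {"blue": 0.05, "red": 0.06, "nir": 0.50}}

for name, r in targets.items():
    print("%-5s red-NDVI=%.2f blue-NDVI=%.2f"
          % (name, ndvi(r["nir"], r["red"]), ndvi(r["nir"], r["blue"])))
```

With red, the wood and grass are well separated (about 0.13 vs 0.79); with blue, the wood jumps to about 0.70 and most of the contrast disappears.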

Another thing to note is that in the NDVI image from the dual-camera setup using simple regression calibration, the values for grass are higher than they should be. I expect this is primarily an artifact of how I collected sample pixels to compare with the reference NDVI values, so I am reasonably confident it can be ameliorated using better reference targets.

The last set of NDVI images in this discussion was created by calibrating directly to NDVI values instead of calibrating the individual bands and then creating an NDVI image. For independent (predictor) variables I used the blue and red bands from an image acquired using the Canon A810 with a Rosco 2008 filter, with white balance from a cinder block and no exposure compensation. The dependent (response) variable was NDVI calculated from the reference reflectance data using wavelengths of 650 nm (red) and 840 nm (NIR), which should be similar to NDVI derived from Landsat Thematic Mapper imagery. I also tried a multiple regression using all three bands.
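Calibrating directly to NDVI amounts to an ordinary least-squares fit with the band values as predictors and the reference NDVI as the response. A sketch of the idea using NumPy; the sample numbers are made up for illustration, not my actual calibration data:

```python
import numpy as np

# Hypothetical mean digital numbers sampled over each calibration target
blue_dn = np.array([210.0, 160.0, 95.0, 60.0, 30.0])
red_dn  = np.array([190.0, 130.0, 75.0, 45.0, 20.0])
# Reference NDVI for the same targets, computed from 650 nm and 840 nm
# reflectance measurements of each target material
ref_ndvi = np.array([0.02, 0.15, 0.45, 0.70, 0.85])

# Multiple regression with an intercept: NDVI ~ b0 + b1*blue + b2*red
X = np.column_stack([np.ones_like(blue_dn), blue_dn, red_dn])
coef, *_ = np.linalg.lstsq(X, ref_ndvi, rcond=None)

def predict_ndvi(blue_band, red_band):
    """Apply the fitted model to whole image bands (arrays of any shape)."""
    return coef[0] + coef[1] * blue_band + coef[2] * red_band
```

Adding the green band just means one more column in `X` and one more coefficient in `predict_ndvi`.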

LookUpTableSmall.jpg

A810Rosco2008_Block_0_NDVIReg.jpg

NDVI from Canon A810 with a Rosco 2008 filter using multiple (blue + red) regression to calibrate directly to NDVI

A810Rosco2008_Block_0_NDVIReg3Bands.jpg

NDVI from Canon A810 with a Rosco 2008 filter using multiple (blue + green + red) regression to calibrate directly to NDVI

The results from calibrating directly to NDVI using the blue and red bands seem quite improved, at least for the non-photosynthetic targets, when compared to the NDVI image created by first calibrating the individual bands to reflectance and then calculating NDVI. Adding the green band to the regression seems to produce a noisy image.

I will continue developing a calibration protocol with the hope of producing something easy to use that yields decent, reproducible NDVI imagery. The two primary tasks are creating a reference panel and writing processing software. I would also like to experiment with a red filter to see if it has advantages over a blue filter for calculating NDVI with a single-camera system.



near-infrared-camera ndvi calibration infrared infrablue calibrate-ndvi


17 Comments

A blue filter should be an acceptable alternative to a red filter when measuring vegetation if the spectral response of the blue filter is sensitive to chlorophyll absorption (~460nm). You may be able to determine the spectral sensitivity of a camera/filter pair if you have access to a monochromator or maybe just look at the filter's spectral response function.

Blue would be a better choice than red if you are calculating the NDVI of any vegetation that has yellow flowers. On the other hand, a major drawback to using blue light is that the atmosphere scatters it much more than red light (Rayleigh scattering) and so surface reflectance measurements in blue wavebands will be higher than they should unless you perform an atmospheric adjustment (i.e. dark object subtraction) or an atmospheric correction. This is an issue even for balloon mapping since the atmosphere gets thicker as you approach ground level.

It may be useful to consider using one white and one black reflectance target to calibrate an image to reflectance rather than directly calibrating to some reference NDVI value. Maybe use a teflon chip and a brushed panel coated with extra flat black paint. The white and black reference targets will span the entire range of reflectance values for surface features within an image, thereby improving the accuracy of subsequent NDVI calculations.
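The two-target approach described above is the classic empirical line method: assume a linear relationship between digital numbers and reflectance, pinned by the dark and bright panels. A sketch in Python, with made-up panel reflectances (a real panel would need to be characterized):

```python
import numpy as np

def empirical_line(dn, dn_dark, dn_white, refl_dark=0.03, refl_white=0.95):
    """Linearly map digital numbers to reflectance using two panels.

    dn_dark / dn_white: mean DN measured over the black and white panels
    in the same image. refl_dark / refl_white: their known reflectances
    (the defaults here are illustrative, not measured values)."""
    gain = (refl_white - refl_dark) / (dn_white - dn_dark)
    return refl_dark + gain * (np.asarray(dn, dtype=float) - dn_dark)

band = np.array([12.0, 120.0, 240.0])
print(empirical_line(band, dn_dark=12.0, dn_white=240.0))
```

Pixels at the panel DNs map exactly to the panel reflectances, and everything in between is interpolated linearly, which is the saturation concern raised below: the fit is only trustworthy where the sensor response is actually linear.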


Thanks for your comments. I am rethinking the idea that a blue filter is sensitive to chlorophyll absorption. Blue wavelengths have similar reflectance properties when compared to red for green healthy vegetation but as vegetation senesces the red response jumps up a lot and blue hardly changes. The NIR drops a bit but not enough to produce great contrast in NDVI between living and dead vegetation. I put a table at the bottom of part 1 of the calibration notes (http://publiclab.org/notes/nedhorning/10-21-2013/calibrating-diy-nir-cameras-part-1) that shows the difference in NDVI for a few materials when using blue and red in the NDVI equation. I was quite surprised to see so many materials that have spectral curves with low blue and high red and NIR reflectance properties. This could use some more investigation but that's my primary reason for looking into red filters. There are some practical issues related to using red filters but I think I'm on the way to sorting those out.

I am considering the use of a two-color reference panel but am concerned that the dynamic range of the cameras we are using might saturate well before it should with the black or bright white targets. That concern might be unfounded but I'd like to test it. I'd love to get access to a monochromator to test sensor sensitivity and figure out how to record radiance but haven't had luck so far. I'm using an array of LEDs at the moment with moderate success, but that only gives me relative differences. Using the spectral curves of the filters is useful to a point, but the filters over the sensor (the Bayer filter array) add complexity since they have very wide band-pass characteristics and, in addition to the portions of the visible spectrum they are designed to pass, they also pass lots of NIR light. The LEDs are a nice learning tool for reasoning about which wavelengths are recorded in each channel with different filters.


I am confused about the idea of using a red filter instead of blue in a one camera NDVI system. Is the idea to use the red channel for visible light and use the blue channel for NIR light? The reason the Canon Powershots work as well as they do as infrablue cameras is that the blue channel does not record too much NIR, presumably because the CFA (Bayer filters) over the blue pixels do not pass much NIR. So a red filter that passed red visible light but not blue could allow insufficient NIR to reach the blue channel. There might also be lots of NIR contaminating the red channel, but that needs to be tested. This approach might be a solution for inexpensive CMOS cameras that seem to have a very different CFA and let lots of NIR into the blue channel. But maybe you were thinking about a different approach.

For aerial photos from a few hundred feet up on a clear day, scattering of blue light (haze) should not be too much of a problem. If your goal is absolute values for NDVI after calibrating on reflectance targets, those targets should be in some of the aerial photos, so the scattering will be accounted for. If your goal is just relative NDVI values, scattering is not an issue as long as the altitude of the camera is constant. Scattering is certainly a problem for higher altitude photos, especially in hazy conditions. The effect is conspicuous in oblique aerial photos (http://publiclab.org/notes/cfastie/06-26-2013/infrablue-haze). It would be good to learn more about correcting for scattering. How would you do that? The amount of scattering will depend on the particle composition of the air, which would have to be measured. Another approach might be to use a polarizing filter, since scattered blue light is polarized. But that polarization depends on the angle at which the scattered light is emitted relative to its incident angle. In certain situations, e.g., when the camera is pointed 90° to the incoming sunlight, scattered light arriving at the camera is well polarized. Otherwise only some of it is polarized. For vertical kite or balloon photos in the middle of the day, most of the scattered light may not be polarized, but the haze could be partially reduced if the camera (and polarizing filter) orientation could be controlled.



Chris - It's interesting that you find that the Canon Powershots don't record much NIR in the blue channel. I found just the opposite - that the blue channel records (very) slightly more NIR than the red channel. When I take a photo of a NIR LED with a camera that has an 850nm NIR pass filter, the blue channel is somewhat brighter than the red channel if I set the white balance using a gray card. In the research note you posted about a similar test, the blue channel also recorded a lot of NIR light unless you set white balance using a blue card, which is not (at least I don't think it is) representative of what is actually being recorded by the sensor. In any case I'm working on a protocol to subtract the NIR influence from the red channel when a red filter is used, and it seems to be working. Also, the blue channel on my red-filter camera is very sensitive to NIR light, so for now I'm not worried about that.

As far as atmospheric correction goes, there are several different methods that are used and abused in remote sensing. The choice often depends on what ancillary data you have to work with. In the absence of actual atmospheric data there are some models/algorithms that let you set a coefficient based on what you think the atmospheric conditions were when the photo was taken, and there are others that use dark targets on the ground that should have very low or no reflectance, like water, to calibrate (basically just shifting the histograms until the dark target pixels are set to zero). I know of very few cases where atmospheric corrections were done on aerial photos. I expect it's more complex than it is for satellite imagery. From my limited experience polarization filters don't help much with haze in aerial photos, but maybe I'm not using them properly.
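That histogram-shifting idea can be sketched as a simple dark object subtraction: find the DN of pixels that should be black (deep water, dense shadow) and shift the band so they sit at zero. A minimal version below uses a low percentile rather than the absolute minimum to tolerate a few noisy pixels; that detail is my own simplification, not a specific published algorithm:

```python
import numpy as np

def dark_object_subtract(band, percentile=0.5):
    """Shift a band so its darkest pixels map to zero.

    percentile: how deep into the histogram to look for the 'dark
    object'; a small nonzero value is robust to stray low pixels."""
    band = band.astype(float)
    dark = np.percentile(band, percentile)
    # Clip so no pixel goes negative after the shift
    return np.clip(band - dark, 0.0, None)
```

In practice you would identify the dark-object pixels deliberately (e.g., a water body in the frame) rather than trusting the whole-image histogram.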


Oh yeah, I forgot I actually measured that, and you're right - a little more NIR gets into the blue channel (I used a 735nm cutoff filter). I need to start reading my own research notes. So, especially if you can correct for the NIR getting into the red channel, we need a filter like the Rosco #026:

Rosco_26.JPG


That would certainly improve the scattering problem. I wonder if it would work better with Jane?

What red filter are you using?



I got a 3" x 3" Wratten 25A gel filter on ebay for $10. There's a lot left over if you want to try it.


That's a much better filter for the job:

Wratten_25.JPG

It would be great to try some before all the greenery disappears.


If it's sunny tomorrow I'll be out with the wood, tar paper and other things. With a little luck I'll have a research note before the end of the week.


I noticed that you are capturing your test images from an oblique angle. Bidirectional reflectance at that angle may be creating "hot spots" in your image. It's just something to keep in mind when collecting & analyzing your data. http://www-modis.bu.edu/brdf/brdfexpl.html

Also, the use of bright and dark targets is common for image calibration in biophysical remote sensing. White panels coated with barium sulfate or Spectralon are ideal because they have reflectances near 1 and have very little specular reflection. I recommended teflon because it is an inexpensive alternative to Spectralon yet has similar spectral properties - unless you buy really cheap teflon. I would be more concerned about NDVI saturating.


That's a good observation. The hotspot in these test images would tend to be toward the bottom, around the shadow from Chris' head, since that's where the camera was. Having a hotspot in the image isn't necessarily related to the oblique angle - the same phenomenon can be seen when images are acquired looking straight down (nadir) if the sun is fairly high. In that case you can see the hotspot and the point of specular reflection in the same image at equal but opposite distances from the principal point. My experience with BRDF is mostly with pixel sizes of 30m and larger; it's quite different and apparently much more complex with close-range imaging where we're resolving individual blades of grass. It sure would be nice to deal with BRDF but I need to put thought into how best to do that in different situations. Perhaps the best we can do is, as you mention, keep it in mind. If you have thoughts about how to reduce BRDF effects or, better yet, use them to improve automated classification, I'd be interested in hearing them.


If you take pictures at higher angles (more perpendicular and less oblique) you may achieve a more Lambertian response. Dealing more directly with BRDF using balloon platforms may not be practical though. If you want to perform automated classification then you will need at least 3 bands, unless you are just trying to segment an image into veg and non-veg. What is your classification task?



I'm curious why you need at least 3 bands for automated classification. I don't have a specific classification task in mind. In general I'm working on identifying different features (predictor variables) from DIY aerial imagery for a range of classification tasks. I have a strong interest in this and reasonable experience. If you have some thoughts perhaps we can take this discussion to email since it's heading off topic? Then again, I don't mind continuing the discussion here.



You want to capture enough spectral variation between different surface features. If you only have 2 bands but have several classes then you may get lots of omission/commission errors due to spectral overlap. That's just a rule of thumb I was suggesting without first ascertaining your goal. 2-bands may work for classes with very distinct signatures across the wavebands you are using. Or you could use the time signature of a 2-band index based on a time-series of observations to sift out classes based on phenological characteristics. It all really depends on the application and I recommended 3 bands just to be safe.


I have been making some progress toward standardized calibration targets. Does anyone viewing this know of a source for inexpensive paints/coatings with a high, diffuse and fairly flat spectral response? The coatings I have used in the past (barium sulfate, halon/teflon) are quite pricey and I don't have easy access to those any more. Some common household paints have published reflectance curves but they tend to be limited to the visible spectrum. That's better than nothing but probably not as good as a recommendation based on experience.



Krylon extra flat black spray paint is diffuse and spectrally flat across the vis-nir. A small sheet of teflon isn't too expensive and works great (it isn't a coating but it's easy to clean), except for the cheap teflon sold on Amazon - it's not spectrally flat. I currently have access to a spectroradiometer and a calibrated spectralon panel so I can record reflectance curves from 350-2500nm for any inexpensive coating you think might work well.


Thanks for the suggestion. Maybe a teflon sheet is the way to go although I don't have any experience with these. Some of the thin sheets seem translucent. Do you know if this is the case? Do you have any sense for the minimum thickness for a calibration target?



Light is transmitted but it is diffusely scattered past a certain depth, which seems to be ~7mm. The minimum thickness increases with the size of the target. A 15mm wide target could be 1.6mm thick whereas a 30mm wide target should be closer to 3.2mm thick. I have a 2mm thick teflon sheet that I found to be too thin to lay directly on the ground because absorption features of soil/vegetation seeped through. So, if you have a thin sheet (i.e. 1.6mm-3.2mm) you should elevate it so that there is a large air interface between the ground and the teflon. The teflon sheets for sale on Amazon don't seem to be as high quality as the stuff you can get from http://www.mcmaster.com/#sheets-%28made-with-teflon-ptfe%29/=pjtydw

