Normalized difference vegetation index (NDVI) has been computed from satellite and airborne data for 45 years. The data used are measures of how much energy is reflected from vegetation as light in the red and near infrared (NIR) spectral bands. Healthy leaves reflect much more energy in the NIR than in the red band, so healthy plants have a big 'difference' which computes to a big value for NDVI. The instruments used to measure NDVI are not normal cameras; they are devices which measure radiant energy in particular spectral bands.
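The index itself is just the difference between the two bands divided by their sum. A minimal sketch in Python (the radiance values below are hypothetical, chosen only to illustrate the formula):

```python
import numpy as np

# Hypothetical radiance values for three pixels in the red and NIR bands.
red = np.array([0.08, 0.10, 0.30])
nir = np.array([0.50, 0.45, 0.35])

# NDVI = (NIR - Red) / (NIR + Red); healthy leaves give values near +1.
ndvi = (nir - red) / (nir + red)
print(ndvi.round(2))  # → [0.72 0.64 0.08]
```

The first two pixels behave like healthy foliage (big difference, high NDVI); the third behaves like soil or stressed leaves.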
To use a photo from a consumer digital camera to compute NDVI, the brightness values captured in each pixel must be related to the amount of energy (radiance) being reflected from things in the photographed scene. In a typical photo, if more energy was reflected from one surface than another, the pixel values for the first surface will generally be higher than for the second surface. Although the camera sensor captures information that is directly (mostly linearly) related to the incoming energy, producing the jpeg image files involves additional processing. So determining the radiance from the jpeg pixel values (digital numbers or DNs) can involve undoing multiple inscrutable processing steps. Camera RAW image files have had little or no processing done, so assigning radiance values to DNs is simpler.
NDVI requires data about both red and NIR radiance from each point (pixel) in the scene. Because NDVI is all about the difference between red and NIR, those two bands must be measured in the same way. Whether separate cameras or separate color channels from the same camera are used to capture red and NIR, it is critical that the values for red and NIR represent the actual relationship between the radiant energy in the incoming red and NIR bands. The lenses and filters used probably do not pass the same proportion of incoming red and NIR, and the camera's sensor is probably not equally sensitive to red and NIR. So even if camera RAW images are captured, the recorded "brightness" of red and NIR probably do not represent the actual difference in energy that was reflected from leaves in those two bands.
Figure 1. This is the relationship that we need in order to compute NDVI from a photo. For red light (shown) and also for NIR, we need points on the graph (which we hope will form a tidy line) so the DN for red in each pixel can be converted into a value for radiant energy (radiance) of incoming red light. We need a separate relationship for NIR.
To convert the DNs in a photo file to an estimate of radiance, we need to establish the relationship between the two (Figure 1). One way to do this is to include in the photo surfaces of known reflectivity in the particular red and NIR bands that the camera captures. These targets must be characterized so that the amount of red and NIR energy reflected from them is known (Figure 2). Some type of spectrometer is needed to characterize targets.
Figure 2. This is what we need to know for the targets that will be put in the scene we photograph. We need a graph like this for the red band we are capturing (shown) and also for the NIR band. So the red and NIR reflectivity of each target (five are shown but that is arbitrary) must be measured with a spectrometer.
The carefully characterized targets must then be included in some of the photos we take of plants to make NDVI images. In these photos, pixels which constitute the targets can be used to determine the DNs which result from each level of reflectivity of red or NIR (Figure 3).
Figure 3. This is what we learn when we include the characterized targets in the photos of vegetation. The area in the photo(s) which includes each target will have the pixel values (DNs) that we will use to represent the reflectivity of red (shown) or of NIR.
We now know two things about each target: 1) we know what percentage of the red and NIR energy that hits each target is reflected from it (Figure 2), and 2) for our photo of vegetation we know the DN in the pixels for each target (Figure 3, for both red and NIR). We can now establish a relationship between these two things. This relationship can be used to estimate how much red or NIR energy was coming from any point in the photo given only the DNs in each pixel. The relationship can be described mathematically with a regression of % reflectivity on DN. Percent reflectivity is assumed to be a good proxy for radiance or the amount of radiant energy coming from each point in the scene (in either the red or NIR spectral band).
Figure 4. To establish the relationship we have all been waiting for, we take the y axes from the two preceding figures and graph them against each other. A linear regression (the red line) based on those five points gives an average relationship which can be used to predict % reflectivity for any value (DN) of red (another regression is needed for NIR). Ideally, the regression line should pass through zero, but it might not and this could produce unrealistic NDVI values for low DNs. Similar artifacts can arise at the high end of the regression line.
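The regression in Figure 4 can be sketched with NumPy; the five target reflectivities and DNs below are hypothetical stand-ins for values that would come from a spectrometer (Figure 2) and from the target pixels in the photo (Figure 3):

```python
import numpy as np

# Hypothetical: measured % reflectivity of five targets (from a spectrometer)
# and the mean DN of each target's pixels in the red channel of our photo.
reflectivity = np.array([5.0, 20.0, 40.0, 60.0, 85.0])  # percent
dn = np.array([22, 61, 115, 160, 221], dtype=float)     # 8-bit digital numbers

# Fit % reflectivity as a linear function of DN (the red line in Figure 4).
slope, intercept = np.polyfit(dn, reflectivity, 1)

# Predict % reflectivity for any red-channel DN in the vegetation photo.
predicted = slope * 140 + intercept
```

Note that a nonzero intercept (as in this sketch) can push predictions below 0% or above 100% at the extremes of the DN range, which is exactly the source of the unrealistic NDVI values mentioned in the caption.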
This seems like a lot of work to get an NDVI image. Fortunately, most of the work can be done by free software that already exists. Ned Horning has written a plugin for Fiji that does all the math with all those DNs in all those pixels in all those photos. Although it is possible to make an NDVI-like image without doing this, it is not possible to know how your results compare with results from another day or another researcher. By tying your results to actual measures of reflectivity (the targets), all of your results are mathematically related to those of anyone else who does the same.
The remaining obstacle for most people is finding suitable targets. Commercial targets which have been spectrally characterized can be incredibly expensive. Characterizing your own targets is easy if you have access to a spectrometer which can measure the absolute brightness of both red and NIR. Public Lab type spectrometers do not allow this, although a simple upgrade and some readily available data about the red and NIR sensitivity of the camera used should make it possible.
Ta da. I haven't written the word calibration since the title.
excellent article Chris.
Based on your experience, can you advise us on some targets that are already spectrally characterized, so we can finally start using the MAPIR in a rigorous, scientific way?
Also, you suggest that we could use the Public Lab spectrometer (with some upgrades): what upgrade is that specifically? Could you give us some additional guidance?
Thanks so much
There is a discussion of common materials with spectral data in the comments here: https://publiclab.org/notes/LaPa/03-31-2016/raspberry-noir-cam-sensors-to-detect-water-stress-of-the-plants-during-their-growing
The sensors used in consumer cameras are sensitive to NIR, but not really very sensitive. Public Lab spectrometers use consumer digital cameras, and most use a webcam. If you know exactly how sensitive the camera is to NIR compared to red, you can use that camera to photograph the diffraction pattern of light reflected from a potential calibration target. Then you can apply a correction factor based on the known camera sensitivity in the red and NIR bands. I have seen very few spectral curves for small cameras that include the NIR region after the IR block filter has been removed, but here is one for a common Canon PowerShot used by some Public Lab folks.
Above: The upper panel is the spectral sensitivity of a PowerShot A2200. The lower panel is the same camera's sensitivity after the IR block filter was removed.
Typically, spectral scans of potential targets are made with a light source of precisely known spectral characteristics, which presents another potential obstacle. It might be possible to use the sun for the light source. But I'm not sure about that.
Hi, I have taken some images directly from a Canon SX280 with a red filter. Then I calibrated it using the reflectance from red paper in the shade, and then took some images. Then I used the Infragram NDVI index calculator. But the results were not at all close to the desired results. Can you guide me on what changes I need to make to get the desired results?
Your photo looks good. An NDVI image made with the Photomonitoring plugin for Fiji also looks good. That might be a better way to process your photos.
The NDVI map looks good, but is there anything that can be done to improve the results? The information required is totally visible here. Any suggestions or tips that could help get a better outcome?
You can try other colors for doing a custom white balance. See this note: https://publiclab.org/notes/Claytonb/08-13-2016/plant-health-ndvi-white-balance
Hi cfastie, I need a little help here. Following your suggestions I did all the calibrations required and the results were great. But in the last few days I have been trying to generate an NDVI image of a park with dead leaves and healthy leaves to show a contrast, and the results have been very haphazard. I have used QGIS to rasterize the NDVI results, and the two images were taken with different calibrations.
The lamp post, the wall, and the trunk are being detected as green healthy plants, whereas the NIR image doesn't show anything of that spectrum in that area. Please help me out here!
The lamp post, the wall, and parts of the tree trunk are all very dark. It is a common artifact that NDVI computed for dark areas of photos is not reliable and is often unrealistically high. Note that in your photos the NDVI is highest in the more shaded areas of the trees and shrubs. In darker areas Red+NIR will be smaller, and if the difference between Red and NIR (NIR-Red) stays constant then NDVI will be larger in shaded areas.
@cfastie This is a very informative article. However, it isn't clear what is to be done after obtaining the final graph as you've shown. Are we going to change the pixel values of the image to match the slope of the line we've obtained in the graph, or is the graph only for our reference, to give an idea of the spectral reflectance of different pixels in the image? Pardon me if this is a silly question, but I am not clear about this.
Yes, we are going to change the pixel values in our photo. For each pixel in the photo we have taken of vegetation, we use the DN in the red channel to compute a new value using the regression in Figure 4. This becomes the value in the red channel of a new image. We do the same for the channel used for NIR. These new values should be an approximation of the radiance (energy) in specific wavelength bands (red and NIR) coming from each point in the scene. Then the values in those two channels can be used to compute NDVI for each pixel.
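Put together, the whole procedure might look like this in Python, assuming the two calibration regressions (one for red, one for NIR) have already been fit as in Figure 4; the image arrays and regression coefficients below are hypothetical:

```python
import numpy as np

def calibrate(dn, slope, intercept):
    """Convert digital numbers to estimated % reflectivity via a regression."""
    return slope * dn.astype(float) + intercept

# Hypothetical 2x2 photo channels (8-bit DNs) and regression coefficients.
red_dn = np.array([[40, 200], [90, 30]])
nir_dn = np.array([[180, 120], [160, 60]])

red = calibrate(red_dn, slope=0.40, intercept=-2.0)  # red-band regression
nir = calibrate(nir_dn, slope=0.35, intercept=1.5)   # NIR-band regression

# Per-pixel NDVI from the calibrated (not raw) values.
ndvi = (nir - red) / (nir + red)
```

This is, in essence, what the Photomonitoring plugin automates across whole images.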
@cfastie This is very interesting. Thank you!