Public Lab Research note


Calibrating raw images - a step toward JPEG calibration?

by nedhorning | June 23, 2014 16:01 | #10607

Introduction

In this research note I'll go over some calibration tests I've been doing using RAW format images from my Canon A2200 with a red Wratten 25A filter in place of the hot mirror. A raw image represents the signal recorded by a camera's sensor before it has been processed into an aesthetically pleasing image, typically output in JPEG format. Working with raw images has a number of advantages over processed JPEG images. A primary advantage for calibration is that the digital number (DN) recorded at the sensor has a linear relation to the intensity of light hitting the sensor. In other words, if the intensity of light hitting the sensor doubles, the pixel value recorded in the raw image will also double. When an image is processed inside the camera to make a JPEG image that looks nice, we lose that linear relationship. As noted in a previous research note (http://publiclab.org/notes/nedhorning/05-01-2014/improved-diy-nir-camera-calibration), a gamma correction can be applied to the image so that a linear relationship is re-established. It might be good to read that previous note since I'll refer to it frequently throughout this one.

Methods

For this work I acquired a raw image of six targets with known reflectance values and then used the same process as in the earlier work to get an average pixel value for each calibration target [Figure 1].

IMG_0020.JPG Figure 1: Photo of the target layout

I then plotted the average raw value against the measured reflectance and calculated a linear regression [Figure 2]. As you can see in the plot, there is a very good linear fit, which is what I expected.

RedNIR_Plots.png Figure 2: Plot of relationship between red (650nm) reflectance vs red channel raw pixel values and near-infrared (850nm) vs blue channel raw pixel values.

Using the linear regression coefficients I calibrated the raw image bands into reflectance bands. The calibrated bands look the same as the uncalibrated bands but the pixel values are in percent reflectance. Calculating NDVI from the calibrated bands produced decent results, but the NDVI values seemed a bit low [Figure 3]. The grass was probably a little stressed since the image was taken after a few days with no rain and my lawn is not very productive, but even so the NDVI values seemed quite low.

LookUpTableSmall.jpg Look up table used for the color NDVI images

red1029NoSubtract_NDVI_1.jpg Figure 3: NDVI image after calibrating the red and blue channels for reflectance without pre-processing

I expected the results could be improved by removing the NIR “contamination” that was occurring in the red channel. With the red filter the blue channel should be recording nearly “pure” NIR light and the red channel a mixture of red and NIR light [Figure 4].

spectral-response-ccd.jpg Figure 4: Spectral response of a typical CCD camera sensor (downloaded from: http://www.astrosurf.com/luxorion/photo-ir-uv3.htm)

If roughly the same amount of NIR light was hitting the blue and red sensors, I should be able to remove the NIR effect from the red sensors by subtracting the blue pixel value from the red pixel values. Doing that before calibrating the images significantly improved the results. Since I wasn't sure if the amount of NIR light recorded by the blue and red sensors was the same, I added a multiplier to the formula so I could adjust the amount of NIR being subtracted from the red pixels. The equation was [red – blue * multiplier]. I think two issues are present that can be dealt with, at least to some extent, using the multiplier. One is that I'm not sure if the pixel values recorded by the different sensors (red, green, blue) represent the same intensity. In other words, I'm not sure if a certain intensity of red light would give the same pixel value as an identical intensity of blue light. Also, I do not know the sensor response for my camera, so I don't know how much NIR light passes the red sensors vs. the blue sensors.
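
In code this correction is a single array operation. Here is a minimal sketch in R with placeholder values; in practice "red" and "blue" would be the debayered raw bands:

    # Placeholder debayered raw values -- real data comes from the raw image
    red  <- matrix(c(1200, 1800, 2400, 3000), nrow = 2)
    blue <- matrix(c(1000, 1500, 2100, 2600), nrow = 2)

    multiplier <- 0.9  # adjustable; see the LED test below for how I estimated a value
    red_visible <- pmax(red - blue * multiplier, 0)  # subtract estimated NIR, clamp at zero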

To get a rough idea of what the multiplier value should be, I did some quick tests photographing NIR LEDs emitting at ~850nm and ~950nm, and in those tests the pixel values in the blue channel were about 1.1 times the values in the red channel. This would mean a multiplier of about 0.9 (1/1.1 ≈ 0.9) should work. Using that multiplier I got the following result [Figure 5], which seems to be an improvement.

red1029Mult0_9_NDVI_1.jpg Figure 5: NDVI image after subtracting (blue * 0.9) from the red channel and then calibrating for reflectance

The next step will be to test the results with a field spectrometer to see if the NDVI values are in the “correct” range. I don't have a field spectrometer, so maybe someone else would be up for doing field testing? It's possible the multiplier will need to be adjusted, but without a spectrometer providing reference data it's difficult to tell for sure. In any case this continues to look promising.

Raw processing in ImageJ/Fiji

To process raw images I installed dcraw, a command line program for processing raw images: http://en.wikipedia.org/wiki/Dcraw. The command I use is: dcraw -D -W -4 /AMNH/PhotoMonitoring/Red25AFilter/BlockLED/Raw/CRW_0979.DNG (the -D flag outputs the raw sensor data without demosaicing, -W disables automatic brightening, and -4 writes linear 16-bit output).

That creates a “pgm” file that I can open in ImageJ. The “pgm” file is an actual image of what the sensor records. If you look at Figure 6 below you'll see the raw pixels in the Bayer pattern (R=red, G=green, B=blue), with rows alternating G-R-G-R-G-R... and B-G-B-G-B-G....

CRW_1029_subsetRaw.png Figure 6: Subset of a raw image displaying the pixels with a Bayer pattern from a camera sensor - black pixels are dead detectors on the sensor.

There is a “DCRaw Reader” plugin for ImageJ but I prefer the command-line utility. Once the image is open in ImageJ I run the “Debayer image” plugin (http://www.umanitoba.ca/faculties/science/astronomy/jwest/plugins.html) to take the DNG image with the Bayer pattern and convert it into red, green, and blue layers. Since the CHDK raw images I get have a 12-bit pixel range (0 to 4095), I save each band as an integer TIFF image to keep the full range of pixel values; many common image formats are limited to values from 0 - 255. I then put the images in a stack, extract pixels for a rectangular region of interest, and calculate the mean value using the ImageJ histogram function as described in the previous note.
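
For anyone working outside ImageJ, the same Bayer split can be done with simple array subsetting. Here is a rough sketch in R (the file name is a placeholder), which assumes the pgm file can be read through the raster package's GDAL support; unlike the Debayer plugin it produces half-resolution bands with no interpolation:

    library(raster)  # reads the pgm file via GDAL

    bayer <- as.matrix(raster("CRW_0979.pgm"))

    # Rows alternate G-R-G-R... and B-G-B-G..., so each channel is an
    # every-other-pixel subset of the Bayer mosaic
    red   <- bayer[seq(1, nrow(bayer), 2), seq(2, ncol(bayer), 2)]
    blue  <- bayer[seq(2, nrow(bayer), 2), seq(1, ncol(bayer), 2)]
    green <- bayer[seq(1, nrow(bayer), 2), seq(1, ncol(bayer), 2)]  # one of the two green sets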

I wrote a script in R to calculate the regression, calibrate the image bands, and create an NDVI image. This could be done in ImageJ or almost any other software that does image processing, but it's easier for me to use R.
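
I haven't included that script here, but a minimal sketch of the same workflow in R might look like the following; the reflectance values, mean pixel values, and file names are placeholders rather than my actual measurements:

    library(raster)  # for reading and doing math on the band TIFFs

    # Known target reflectance (%) and the mean raw pixel values measured
    # in ImageJ for each of the six targets -- placeholder numbers
    reflectance <- c(2, 11, 22, 34, 52, 87)
    red_means   <- c(150, 600, 1100, 1700, 2500, 3900)
    nir_means   <- c(120, 500, 950, 1500, 2300, 3700)

    # Fit a linear regression for each band (raw DN -> reflectance)
    red_fit <- lm(reflectance ~ red_means)
    nir_fit <- lm(reflectance ~ nir_means)

    # Load the debayered bands saved from ImageJ (hypothetical file names);
    # with the 25A filter the blue channel serves as the NIR band
    red <- raster("red_band.tif")
    nir <- raster("blue_band.tif")

    # The NIR subtraction described above could be applied to "red" here

    # Apply the regression coefficients: reflectance = intercept + slope * DN
    red_cal <- coef(red_fit)[1] + coef(red_fit)[2] * red
    nir_cal <- coef(nir_fit)[1] + coef(nir_fit)[2] * nir

    # Calculate NDVI and write the result
    ndvi <- (nir_cal - red_cal) / (nir_cal + red_cal)
    writeRaster(ndvi, "ndvi.tif", overwrite = TRUE)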

Next steps for JPEG processing

Although this note focused on calibrating raw images, my goal is still to develop a reasonably simple method for calibrating JPEG images. At this point there are several options:

1. Use multiple calibration targets that provide reflectance values for low, medium, and high reflectance for the bands you want to calibrate, and then use those to estimate the gamma correction as was done in the previous research note.

2. Calibrate a raw image using bright and dark calibration targets, then use the calibrated bands to calibrate the JPEG image. This might be my next research note.

3. Use bright and dark calibration targets and adjust the gamma correction iteratively using a visual assessment of the result. This would be a bit complicated and arbitrary since the red and NIR bands could have different gamma correction factors.

4. Figure out how to convert the RGB color space JPEG image back to raw values and then use bright and dark calibration targets for the calibration (see the sketch after this list). This seems to be the ideal option but I'm not certain it's possible. To accomplish this I (or someone else) needs to determine how to invert the processing that goes on in the camera. That will take some more research.
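
As a rough illustration of option 4: if the camera's JPEG processing were close to a standard sRGB encoding (a big assumption, since real cameras apply their own tone curves and color processing on top of that), the gamma step could be inverted like this in R:

    # Invert the standard sRGB transfer curve to approximate linear values.
    # This only works to the extent that the camera's JPEG pipeline
    # resembles sRGB encoding, which is untested here.
    srgb_to_linear <- function(dn) {
      s <- dn / 255  # scale 8-bit JPEG values to 0-1
      ifelse(s <= 0.04045, s / 12.92, ((s + 0.055) / 1.055)^2.4)
    }

    jpeg_red <- c(30, 128, 220)  # example 8-bit JPEG red-channel values
    linear_red <- srgb_to_linear(jpeg_red)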


11 Comments

Thanks for all the work you are doing to improve this approach.



Those regressions look really encouraging. I guess if I took some photos with my own A2200 and Wratten 25A filter (the same model and filter type you used) and captured RAW images, you would be able to process them into believable NDVI with values that were comparable to your NDVI. And I would not even have to take photos of your spectral targets. I guess we are still not at the point where I would want to do the processing myself unless maybe there was a Fiji plugin just for this camera.

If you can get this to work for jpegs, do you think there is a chance that a Fiji plugin (or Infragram.org routine) could take photos from any camera model and filter combination that had been calibrated and, knowing only which camera and filter had been used, make calibrated NDVI? Would it matter what white balance had been used on those cameras? Would those photos have to include targets?

The possibilities are exciting.



Chris - I want to test to see how well calibration transfers between cameras and also between different camera settings and incident light (clear vs overcast). I expect NDVI will deal with those variations somewhat but testing will give a sense of how well. For best results you would want to take a photo of a calibration target before each mission so you could calibrate for a specific camera, settings, and incident light. I think each white balance setting will require its own calibration. I'm trying to work out logic to make that a simple process but progress is slow.



Ned, I have been working in the spectrometry field for many years and have been involved in calibrating various hyperspectral sensors. I plan on using cameras to look at veg so I searched to find what folks were doing with regard to calibration. That's how I came across your work.

As I have read through your posts I noted the progression I expected - from going from different types of objects to cards of various gray levels, and also increasing consideration of what wavelengths each channel is actually sensing. The fact that the visible channels are also sampling NIR is a bummer. What you are doing is what we call the Empirical Line Method of calibration in the remote sensing world. By the way - good job on getting Mary Martin to take reflectance measurements for you. I'm fortunate to have a field spectrometer for my business, so I plan on using it before each collection if possible.

One thing I will mention is that healthy vegetation (with good leaf structure) is a very efficient scatterer of NIR wavelengths in particular (more so than red, green, and blue wavelengths), so any object near vegetation will have extra NIR scattered onto it. In my experience with NDVI values from Landsat etc., a typical value between veg and non-veg is often about 0.3. Cheers, Joe



Hi Ned, This is really nice work.

I have a couple of questions about the conversion between the DNs of JPEG images and those of raw images. Figure 2 of this note shows a near-perfect linear relationship between raw DNs and reflectance, which makes sense since the more reflective the target, the more radiance received by the camera. Therefore, Reflectance = K * DN_raw. A figure in your previous note shows a nonlinear relationship between reflectance and the DNs of JPEG images due to the gamma correction, i.e., DN_jpeg = Gamma(Reflectance). Thus, I guess there will be a nonlinear relationship (similar to the Gamma function) between DN_raw and DN_jpeg, i.e., DN_jpeg = Gamma(DN_raw).

To validate this idea, I used a Canon EOS 650D NDVI camera to take raw (CR2) and JPEG format images of ground vegetation. This camera captures three spectral bands: NIR (R channel), Green (G channel) and Blue (B channel). For the raw image, the Bayer image was extracted using the Adobe DNG Converter, and then its R, G and B channel values were extracted. I then did a pixel-based comparison between the DNs of the raw image and the DNs of the JPEG image. For example, on the Bayer image the R value appears at the positions (1,1), (3,1) and so on. These values were compared with the pixels at the same positions in the JPEG image. However, no correlation was found between them. So I am wondering if there is something wrong with my logic? Do you have any suggestions? Thanks in advance. Nanfeng



Also, I put up three figures showing the raw (i.e., Bayer) image, the JPEG image, and the JPEG-raw DN correlation. 1.jpg

3.jpg

2.jpg



Hi Nanfeng,

I'm not certain why the results look the way they do, but I have a feeling that it could be a result of doing a pixel-by-pixel comparison. The JPEG image is degraded significantly through anti-aliasing and that would almost certainly give you some spurious results. Would it be possible for you to compare a few small areas in which you calculate the average pixel value over an area with several (50 or more) pixels? If you could take a few relatively homogeneous areas to start with that represent light through dark patches in the image, that would give you an idea if the results are better correlated. For example you could take a sample on the white and black parts of the bar, a rock, some vegetation like a bigger leaf.... You would need to figure out how to convert the Bayer image into individual bands without holes. In ImageJ/Fiji there is a plugin I use for that. You can use nearest neighbor resampling or something like bilinear; I expect that for this test either would be fine. I'm traveling at the moment but will try to read this when I have a chance to see if you are making progress.



Cannot log in with my old account. Thank you very much for your reply and suggestions. I think you explained well what I was confused about. Yes, I selected some homogeneous areas from the image and found the gamma relationship between JPEG DN and raw DN values. Thank you again. Nanfeng



http://www.int-arch-photogramm-remote-sens-spatial-inf-sci.net/XL-1-W4/207/2015/isprsarchives-XL-1-W4-207-2015.pdf

This paper has a method for determining the spectral response of a digital camera. They use ImageJ and provide a precise list of scripts. The researchers use a monochromator, but it might be possible to create a similar setup using a variety of LEDs of known wavelength and a shroud.



Several years ago I was thinking of how to build a DIY integrating sphere/monochromator setup but never pursued it. You would want something other than just a shroud. I was thinking of an old basketball with the inside coated with a matte white paint or perhaps even an inexpensive teflon paint. You need a surface that will reflect nearly 100% very diffusely. You also need some way to very accurately measure the intensity of the light (from a stable light source) in the integrating sphere. Most LEDs emit a fairly broad spectrum and the cut-offs are not very abrupt, so they aren't ideal. It would be neat to see what someone could put together and then it could be compared with a lab setup. In the meantime, a calibration protocol using targets seems to be the next best thing.



@nedhorning Today I rescued an old light source and integrating sphere from the scrap heap on campus. I don't think the light source is a monochromator but I will see how this integrating sphere can be put to use. I will post up my results.


