Public Lab Research note


A little background work for Trapa classification from color

by ttaylor | August 16, 2013 01:14 | #9019

Here Chris Fastie was speculating that Trapa natans should be detectable from balloon mapping data on the basis of its blue-green color and/or leaf size. Classification from color can be problematic, since color depends on a number of factors including the spectral content of the illumination (which depends on the weather, the season, the time of day, etc.), but all of these images were taken at about the same time on the same day, so I wanted to see what I could do. What follows can be considered a crude color-based classifier. The good thing about it is that I understand what my decision criteria are in terms of simple statements about color. Much better can probably be done with a more sophisticated classification scheme.

This image from Chris has a nice large patch of Trapa in the large circle at the bottom center. To sum up what I did (more technical details on request), I cut out the smallest rectangle of that image that contains that circle,

chip.jpg

and measured the chromaticity (color) and luminance (brightness) of the pixels in this chip. You could do this by hand in ImageJ by selecting a rectangle from the dense Trapa in the circle, e.g. this one

colorchip.jpg

pasting it into a new RGB color image, using Color->Split Channels to separate out the R, G, B grayscale images, and then saving each as a text image to get the 8-bit R, G, B values for each pixel. Chromaticity can then be computed for each pixel in an Excel spreadsheet as {R/(R+G+B), G/(R+G+B)}. The rectangle has average chromaticity values {0.311, 0.478} with standard deviations {.007, .008}, while a pixel I picked at random just outside the circle had chromaticity values more than four standard deviations from the rectangle average.
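
If you would rather script this than work through ImageJ and Excel, a minimal Python sketch of the same calculation (assuming numpy and Pillow; chip.jpg is just a placeholder name for the cropped rectangle) might look like:

```python
import numpy as np
from PIL import Image

# Load the cropped training rectangle (file name is a placeholder).
chip = np.asarray(Image.open("chip.jpg"), dtype=float)

R, G, B = chip[..., 0], chip[..., 1], chip[..., 2]
total = R + G + B + 1e-9              # guard against division by zero

# Chromaticity {r, g} = {R/(R+G+B), G/(R+G+B)} for every pixel
r = R / total
g = G / total

print("mean chromaticity:", r.mean(), g.mean())
print("std deviations:   ", r.std(), g.std())
```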

So how good is chromaticity at detecting Trapa? Good, not perfect, is my non-technical summation. To visualize how good it is and isn't, below are two animated GIFs with three images each. The first image in each is one of Chris Fastie's. For the second image, I set what I thought were reasonable thresholds for saying that the chromaticity of a pixel was close enough to Trapa's, and then zeroed to black all pixels that didn't meet that criterion. For the third image I refined the criterion to say in addition that the luminance had to be close to the luminance of Trapa as well--and this seems to have improved the results. (I wanted to keep Chris' circles, so some odd white pixels also show up--the boat, flowers & glare, I think.)
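
Here is a rough sketch of that thresholding step in Python. The chromaticity mean and standard deviation are the values quoted above; the luminance statistics and the cutoff of four standard deviations are placeholders, since my actual thresholds were tuned by eye:

```python
import numpy as np
from PIL import Image

img = np.asarray(Image.open("photo.jpg"), dtype=float)   # placeholder file name
R, G, B = img[..., 0], img[..., 1], img[..., 2]
total = R + G + B + 1e-9

r, g = R / total, G / total
lum = total / 3.0                     # simple brightness estimate

# Training statistics from the Trapa chip; the luminance mean/std are placeholders.
r0, g0, sr, sg = 0.311, 0.478, 0.007, 0.008
lum0, slum = 120.0, 15.0
k = 4.0                               # "close enough" = within k standard deviations

chroma_ok = (np.abs(r - r0) < k * sr) & (np.abs(g - g0) < k * sg)
lum_ok = np.abs(lum - lum0) < k * slum

# Image #2 uses chromaticity alone; image #3 adds the luminance condition.
for mask, name in [(chroma_ok, "chroma_only.jpg"),
                   (chroma_ok & lum_ok, "chroma_and_lum.jpg")]:
    out = img.copy()
    out[~mask] = 0                    # zero non-matching pixels to black
    Image.fromarray(out.astype(np.uint8)).save(name)
```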

ColorClassifiers_.GIF

In the above image #2 we can see that chromaticity finds the patch that I used to grab the color, and finds something of about the same chromaticity in the other circles. It also finds stuff of about the same chromaticity in many other places. In particular it finds some stuff below the lower right lawn chair and to the right of the lower right small circle that is clearly not Trapa and is appreciably darker (smaller luminance), even though the chromaticity is correct. Image #3 above selects for pixels matching luminance as well as chromaticity. Some of the circles that were identified as having Trapa do not have pixels that match in both chromaticity and luminance. Since I'm not sure what the ground truth is, I'm not sure if I've thrown out the baby with the bathwater. It would make sense, though, that the false negative rate would go up as I increase the number of conditions to satisfy.

ClassifierTests2.GIF

The above higher resolution image provides a test of color-based classification. Again, the second image is chromaticity-based, and the third is based on both chromaticity and luminance. The good news is that the circles where we can clearly visually distinguish Trapa in the first image, based on color and leaf shape, are clearly and correctly identified as having Trapa in the second and third images. Some of the circles where there is more going on (e.g. the upper right circle, where there may be some larger-leafed plant competing with the Trapa) may not be in agreement--these may be false negatives. In the second image there are some pixels that are clearly not Trapa, with the same chromaticity but not the same luminance as Trapa, that are eliminated from the third image. In both images #2 and #3 there are many other locations that pass the chromaticity (or chromaticity and luminance) requirements and so are identified as being Trapa--these may be false positives.


9 Comments

This is great information. In the first GIF, the second largest circle, near the top, has a nice large patch of pure Trapa that appears to be similar to the bigger patch you used for training. I guess it's a bad sign that the upper patch was not flagged very well. In the second GIF, it looks like a lot of the white specks in the chromaticity plus luminance image are the flowers of white pond lily. I guess that's a good sign.

Except for the big pure Trapa patches, when I drew all of those circles my brain had to focus on the repeating pattern of Trapa leaves, which are smaller than the leaves of other plants there. Maybe an algorithm is going to have to do the same thing.

Thanks for explaining this approach. This is really good progress.



Chris, that's interesting. That patch is significantly less blue than the training patch. I wonder why. Maybe different specularity? From your EXIF data and this website I can get the angle of the sun. Do you know how to figure out the orientation of the camera?



Camera orientation and position (relative to the photo scene) is reconstructed when a structure-from-motion model is made using lots of photos taken from multiple angles. This model would have to be georeferenced so it knew which way was north. Then you could assign a sun angle to each pixel in each photo. Unfortunately, the sun was not out consistently during this balloon photography session, so photos vary a lot in how much sun reflections are an issue. The photos also vary a lot in how much they were illuminated by direct sun, cloudy sky, and blue sky, so color balance is inconsistent among photos. I guess it is not so surprising that color alone does not distinguish Trapa well. The second GIF above suffers a lot from direct sun reflections, but it is one of the nicer photos of Trapa.

A crude index of specularity could be generated if compass orientation was determined for each photo and we assumed the camera was pointed vertically each time. I'm not sure how helpful that would be.



Hi Tom, Chris,

(Tom, I'm Charlie, the UMass Amherst faculty member who was out in the canoe with Chris. Thanks for doing this work!)

Chris - just to be clear, the image that Tom was working with was one that had been "stitched together" from several photos taken over the morning, right? If so, my hunch too is that the less blue patch might be due to illumination differences across images.

Is there one, single high altitude image that has both patches in it?

It would be interesting to hear what Ned Horning thinks about this.

-- cheers, Charlie Schweik



Hi all,

I thought I made a comment on this earlier today but apparently I did something wrong (again). It's great to see more people digging into the classification. It's possible that some patches of water chestnut look different than others due to "contamination" from nearby pixels. This might be exacerbated by the JPEG compression, but in any case scattering on or near the surface and within the camera optics will have some influence; I still don't have a good sense of how significant it is. As far as trying to normalize the image by correcting for illumination differences, I'm not sure that will be possible since we don't have information about the orientation of the features (leaves) being imaged. There is often a hot-spot in aerial images that can be reduced if you have sun angle and camera orientation data, but since most of the vegetation in these photos is on the surface I don't expect the hot-spot has much influence.



Thanks Ned for your ongoing insight and expertise.

So one thought I had related to Tom's workflow would be to see if we have a broad scale image that contains both the large "training" patch and the smaller patch in the top left, and to see if the technique Tom implemented still shows differences. This would control for lighting/timing/angle differences (although at a higher altitude we may be losing spectral information?).

Trying to help problem solve -- hope it helps a little!

Charlie



Hi Charlie,

The better we can represent the variability of the feature we are trying to classify, the more accurate the result should be. In general, the more training data the better. I forgot to mention in my previous post that running a smoothing filter (I've been using a Gaussian filter) over the image before classifying it seems to help a bit. Now that we have a few (at least three?) people playing with classification approaches it would be good to set up a way to coordinate the work. I requested a web share from the Museum, although that's probably not ideal since it seems people generally have an aversion to using them. The idea is to have a set of images that we can all work with, and then we can post our results and methods to compare and build upon. Does anyone know of a better platform for doing this sort of shared/open research? I think a shared research platform has a lot of potential to produce great work.
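
For reference, a minimal sketch of that pre-classification smoothing (assuming scipy is available; the sigma value is only illustrative):

```python
import numpy as np
from PIL import Image
from scipy.ndimage import gaussian_filter

img = np.asarray(Image.open("photo.jpg"), dtype=float)   # placeholder file name

# Smooth each color channel before running the classifier; sigma is illustrative.
smoothed = np.stack(
    [gaussian_filter(img[..., c], sigma=2) for c in range(3)], axis=-1
)
```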



The large scale image is here, 2736x3648 pixels. The patch of Trapa that I "trained" on and the patch in the upper center really do have significantly different colors. (I made another graphic to illustrate this, but it looks like I can't post it here.) On the right side of that image, on the clear water, I seem to be seeing reflections of the clouds--do you agree? Could it be that the cloud cover was so non-uniform that the patch of Trapa in the upper part of the image and my training patch had different illumination?



Charlie: The images everyone is working on are single photos from the cameras (or multispectral versions). The problematic color variations are within individual images. Tom: There are definitely clouds reflected in the water on the left side of photo 7075, and in some other photos. Most of the leaves of Trapa and other plants in these photos are wet, so light can reflect off their surfaces when the angle is right. Even in a single photo, the northern sky can be reflected off the surface in the northern part of the photo, and the southern sky with sun or bright clouds can reflect off the southern part. I think to embed an image in a comment the image has to be online already, so you can insert a Flickr image with code from there.


