Public Lab Research note


Analytics on aerial images -- using filesize

by warren | 2,329 views | 6 comments

Read more: publiclab.org/n/2589


Update, June 2014: This has now been implemented as part of MapKnitter, known as MapKnitter Sorter. Please give it a try there.


I'm looking for an easy way to sort blurry images from crisp ones (and good from bad) across a whole batch of images. My first thought was to run a quick blurriness analysis on each image with a script. But for 2,000-4,000 images, that takes too long!

OK, how about this: just look at the filesize! I'd already noticed that the all-black or all-blurry images at the beginning and end of a flight are much smaller JPEGs, since less detailed photos compress better. I tried graphing this to see if there's a pattern. My graph is here:

http://archive.publiclaboratory.org/aerial-image-analytics/

Open source code (messy): https://github.com/jywarren/aerial-image-analytics
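The filesize idea can be sketched in a few lines of Ruby. This is a hypothetical standalone version, not the repo code above; the function names are made up, and in practice the size table would come from something like `Dir.glob("*.jpg")`:

```ruby
# Hypothetical sketch: rank images by filesize and flag the smallest
# third as likely blurry. In real use, build the hash from disk with:
#   sizes = Dir.glob("*.jpg").map { |f| [f, File.size(f)] }.to_h

# Return [name, size] pairs sorted largest-first.
def rank_by_filesize(sizes_by_name)
  sizes_by_name.sort_by { |_name, size| -size }
end

# The smallest third of the set -- candidates for discarding.
def likely_blurry(sizes_by_name)
  ranked = rank_by_filesize(sizes_by_name)
  ranked.last(ranked.length / 3).map(&:first)
end

sizes = { "a.jpg" => 900, "b.jpg" => 300, "c.jpg" => 800,
          "d.jpg" => 850, "e.jpg" => 200, "f.jpg" => 950 }
puts likely_blurry(sizes).inspect
```

This treats filesize as a proxy for detail, which is exactly the compression argument above: flat or blurred frames compress better and land at the bottom of the ranking.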

My other thought (also graphed) was to use the camera temperature stored in the EXIF tags -- an idea explored in an older post -- to roughly estimate altitude and trim out all images below a certain threshold, say a hundred feet. The above graph suggests the temperature approach is no good: the reading just kept climbing, and I think that's because the camera heats up in the sun and from running continuously. Jessi even said "the camera's hot" when we took it down after the flight at UMBC's test forest.
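For reference, the temperature idea relies on reading an EXIF tag. Here's a hedged Ruby sketch that parses `exiftool`-style text output; the `Camera Temperature` tag and its exact output format vary by camera make (this assumes a Canon-style line), so the parsing is an assumption:

```ruby
# Hedged sketch: pull per-image camera temperatures out of exiftool's
# human-readable output. In practice you'd capture the output of
# something like `exiftool -CameraTemperature *.jpg` -- tag names and
# formats differ by camera, so treat this regex as an assumption.
def parse_temperatures(exiftool_output)
  temps = []
  exiftool_output.each_line do |line|
    if line =~ /Camera Temperature\s*:\s*(-?\d+)\s*C/
      temps << $1.to_i
    end
  end
  temps
end

sample = "Camera Temperature : 23 C\nCamera Temperature : 41 C\n"
puts parse_temperatures(sample).inspect
```

A steadily rising series like the one in the graph above (rather than one that tracks altitude up and down) would confirm the heat-soak explanation.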

So these are just a few ideas, but it'd be neat to have a script that quickly dumps the 1/3 of your images that are blurry -- for example:

Blurry kite photo from the ground

Update

I never followed up with another post about this, but the tool was later refined quite a bit, and a JavaScript 'drag and drop' interface made it very easy and effective. To use it, visit:

https://mapknitter.org/sorter/ (updated address)

Here are some screenshots of the process:

  1. Drag all the images into the drop zone:

Screen_Shot_2013-09-03_at_8.43.18_AM.png

  2. They'll be graphed by filename order (x) and file size (y). Note how turbulence can create wildly varying filesizes -- but also consider that a visually complicated feature entering the scene can raise filesize too, since the image won't compress as well:

Screen_Shot_2013-09-03_at_8.43.24_AM.png

  3. Enter a "cutoff size" in megabytes and press "Apply" to highlight the filenames of images above that size:

Screen_Shot_2013-09-03_at_8.45.02_AM.png
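The cutoff step amounts to a simple size filter. A minimal Ruby sketch (the real sorter does this in JavaScript in the browser; the function and variable names here are illustrative):

```ruby
# Sketch of the "cutoff size" step: keep only images whose filesize
# exceeds a cutoff given in megabytes.
MEGABYTE = 1024 * 1024

def above_cutoff(sizes_by_name, cutoff_mb)
  sizes_by_name.select { |_name, size| size > cutoff_mb * MEGABYTE }.keys
end

sizes = { "a.jpg" => 3 * MEGABYTE, "b.jpg" => 1 * MEGABYTE }
puts above_cutoff(sizes, 2).inspect
```

A fixed cutoff is the simplest possible rule; it works when the whole flight covers similar terrain, but (as the comments below note) scenes that compress unusually well or badly can defeat it.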




6 Comments

I have found this to be quite true! It's a neat phenomenon. I don't think it can be used as a golden rule, though. I can recall images with one dominant feature like water, cement, or a white roof that, while sharp, fall back in the size ranking. But it's mostly true, and very useful if you want to take a quick glance at a large set, or if you're waiting for a set to transfer and want to see the best imagery first.


I really like the idea of a script that can rank each image for motion blur or focus blur. Figuring out which of your photos are blurry before you include them in a map (MapKnitter), stitched image (ICE), or structure from motion (Hypr3D) requires examining each one closely. A script could substantially improve this type of workflow and the quality of the final product. Ideally, the script could distinguish between motion blur and poor focus.

Maybe Ned knows if ImageJ can do this.
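As a rough illustration of what such a script could compute -- this is not ImageJ's method, just one common focus measure -- here's the variance of a Laplacian response over grayscale pixel values, in a pure-Ruby sketch that assumes the image has already been decoded into a 2D array:

```ruby
# Hedged sketch of a focus metric: variance of the 4-neighbor Laplacian
# over a grayscale pixel grid. Sharp images have strong edges, so their
# Laplacian responses vary widely; a low score suggests blur. Decoding a
# JPEG into the `pixels` array is left to an image library.
def laplacian_variance(pixels)
  h = pixels.length
  w = pixels[0].length
  responses = []
  (1...h - 1).each do |y|
    (1...w - 1).each do |x|
      responses << pixels[y - 1][x] + pixels[y + 1][x] +
                   pixels[y][x - 1] + pixels[y][x + 1] -
                   4 * pixels[y][x]
    end
  end
  mean = responses.sum.to_f / responses.length
  responses.sum { |r| (r - mean)**2 } / responses.length
end

flat    = Array.new(5) { Array.new(5, 128) }                    # featureless
checker = Array.new(4) { |y| Array.new(4) { |x| (x + y).even? ? 0 : 255 } }
puts laplacian_variance(flat)    # near zero for a flat scene
puts laplacian_variance(checker) # large for a high-contrast scene
```

On its own this metric can't separate motion blur from focus blur, which is the harder distinction raised above; directional filters would be needed for that.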


So I'd like to try this technique to select a subset of images to make a Hypr3D model with, since it only allows a few hundred images to be uploaded. I'm going to futz with the Ruby script to see if I can automate the whole process, but first I'd like to do some tests to see if it actually works.

My plan is to test 2 scenarios:

  1. Copy every 50th photo (should be about 40 pictures) into a subset, upload them to MapMill, and ask people to rank them by visual inspection. Then compare with their ranking by filesize.

  2. Do the same thing, but for 2 separate shorter runs -- say, 20 consecutive images each. My idea is that although Stewart is right that images of a rooftop or water will have lower filesizes in general, over a short run there will still be variability, and within a set of 20 consecutive images the crisp ones will still be larger than the local average. Repeat the same test in MapMill.

How does that sound as a test? I can go to the mailing list for help in the manual sorting.
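A minimal Ruby sketch of both tests (function names, the window size, and the sample filenames are all assumptions):

```ruby
# Test 1: sample every nth filename for the MapMill subset.
def every_nth(filenames, n)
  filenames.each_slice(n).map(&:first)
end

# Test 2: within a sliding window of consecutive shots, flag images
# larger than the local average filesize, so a uniformly dark or
# uniform-texture scene doesn't hide its few crisp frames. `sizes` is
# an ordered array of [name, size] pairs.
def above_local_average(sizes, window = 20)
  sizes.each_with_index.select do |(_name, size), i|
    lo = [0, i - window / 2].max
    hi = [sizes.length - 1, i + window / 2].min
    local = sizes[lo..hi]
    avg = local.sum { |_n, s| s }.to_f / local.length
    size > avg
  end.map { |(name, _size), _i| name }
end

files = (1..100).map { |i| "img#{i}.jpg" }
puts every_nth(files, 50).inspect
```

Comparing against a local average rather than a global cutoff is what makes the second test robust to Stewart's water-and-rooftop objection.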




Nice plan. Looking at the set at http://archive.publiclaboratory.org/maryland/2012-06-13-maryland-baltimore-umbc-kite/raw/vis/?C=S;O=D it seems pretty typical in terms of this size-to-crispness association.



Update - I moved this tool to: https://mapknitter.org/sorter/

