Public Lab Research note

Automating ImageJ for particle image analysis

by SimonPyle

What I want to do

Measuring air quality with passive particle collectors requires an accurate count of the size and number of particles collected. ImageJ (or Fiji) has tools that speed up this analysis, but a series of steps must be applied to each image for analysis. A single slide can result in dozens of photographs to cover the entire sample area, so we want to automate this process as much as possible.

ImageJ has a macro language somewhat akin to a simplified version of Java that includes a built-in recorder to automate actions. I wrote a script to automate the process of analyzing images.

These results then need to be entered into the spreadsheet of calculations developed by @mathew.

My attempt and results

The macro (version 0.1) can be downloaded here.

Macro Workflow

This roughly follows @mathew's process of using ImageJ to process passive particle monitor samples.

  • Prompt for a directory with images
    • All images from one sample must be in their own directory, with no other images present. The macro will attempt to ignore files without image-type file suffixes.
  • Prompt for flatfield image for correction, if available
  • Calibrate image scale
    • prompt for image with calibration ruler,
    • draw a line between two points of known distance,
    • enter known distance and units
    • store scale to be applied to each image
  • Open first image to manually apply cropping to cut out vignetting
    • store crop setting to be applied to all images
  • Create new directory to save for analysis results and images
  • For each file in the directory with a valid image extension ('tif', 'tiff', 'jpg', 'jpeg', 'bmp', 'fits', 'pgm', 'ppm', 'pbm', 'gif', 'png', 'jp2', 'psd'):
    • Apply flat field calibration
    • Apply cropping
    • Convert to 8-bit image
    • Apply auto threshold
    • Make binary
    • Fill holes
    • Analyze particles (and superimpose a particle count number on each particle in the saved image)
      • (each of these steps saves a separate image as an intermediate step)
  • Save the results table as a csv ("results.csv")
  • Save the area measured and image count as a separate csv ("area.csv")
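To make the per-file loop above concrete, here is a rough Python sketch of the file-filtering step. (The actual macro is written in ImageJ's macro language; this is only an illustration, with the extension list copied from above and the function name my own.)

```python
from pathlib import Path

# Extensions the macro treats as images (copied from the workflow above)
VALID_EXTENSIONS = {".tif", ".tiff", ".jpg", ".jpeg", ".bmp", ".fits",
                    ".pgm", ".ppm", ".pbm", ".gif", ".png", ".jp2", ".psd"}

def image_files(directory):
    """Return the image files in `directory`, skipping anything without
    an image-type suffix (notes, CSVs, and so on)."""
    return sorted(p for p in Path(directory).iterdir()
                  if p.suffix.lower() in VALID_EXTENSIONS)
```

The macro then applies the same flat-field, crop, threshold, and particle-analysis steps to each file this filter passes, saving an intermediate image after each step.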



I included some image-correction options to handle calibration issues.

Vignetting is handled by selecting a region of interest in one image; that crop area is then applied to all of the images in the sample.

Flat-field image correction is available. If the sample includes an image taken with an empty slide (a "flat-field"), that image can be used to remove uneven illumination from the photos.

If there is no existing flat-field image, the macro can generate a pseudo flat-field. The macro asks for a "radius value" to generate a Gaussian-blurred version of the image. This radius needs to be large enough that individual particles are no longer visible in the blurred image.
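To illustrate the arithmetic (this is not the macro's actual code), here is a small Python sketch of pseudo flat-field correction, using a crude box blur as a stand-in for ImageJ's Gaussian blur and plain lists of pixel values in place of an image:

```python
from statistics import mean

def box_blur(img, radius):
    """Crude stand-in for a Gaussian blur: average each pixel over a
    (2*radius+1)^2 neighborhood, clamped at the image edges."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(h):
        row = []
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - radius), min(h, y + radius + 1))
                    for i in range(max(0, x - radius), min(w, x + radius + 1))]
            row.append(mean(vals))
        out.append(row)
    return out

def pseudo_flat_field(img, radius):
    """Divide the image by its own blurred copy to flatten the
    illumination, then rescale by the blur's mean so overall
    brightness is preserved."""
    blurred = box_blur(img, radius)
    scale = mean(v for row in blurred for v in row)
    return [[img[y][x] / blurred[y][x] * scale for x in range(len(img[0]))]
            for y in range(len(img))]
```

The key point is the division: any large-scale brightness pattern appears in both the image and its blurred copy, so it cancels out, while small features like particles survive.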

@mathew found that extracting the blue channel before processing images helped, so that is an option as well.

There are two options for calibrating the image scale. You can either measure a scale in an image or enter a known value of pixels per inch. (The latter option may be useful in ensuring consistency across runs of the macro, especially as the needed values are exported to the "area.csv" file at the end of each run.)
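The scale calculation itself is simple. Here is a Python sketch of the two calibration routes (the function names are mine, not ImageJ's):

```python
import math

def pixels_per_unit(x1, y1, x2, y2, known_distance):
    """Scale from a line drawn between two points a known real-world
    distance apart (e.g. ruler ticks): line length in pixels divided
    by the known distance."""
    return math.hypot(x2 - x1, y2 - y1) / known_distance

def area_in_units(pixel_area, scale):
    """Convert a measured pixel area to real-world units, given the
    scale in pixels per unit (this is what setting the scale in
    ImageJ accomplishes for its measurements)."""
    return pixel_area / scale ** 2
```

Either route yields the same pixels-per-unit number; entering it directly just skips the line-drawing step, which is why recording it in "area.csv" makes runs easy to reproduce.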

Questions and next steps

An approach that might be worth incorporating is Rolling-Ball background correction. This could help reduce issues with uneven illumination.

A big question is how to remove potential false positives from the image. Dust in the microscope or on the slide can cause dark blotches, and uneven illumination can affect the results of image processing. Flat-field correction is a start, but it only addresses false positives that are consistent across images.

Here is an image of what I think are potential false positives (dust?). Flat-field correction will handle the dust that stays put, but not the dust that "moves" across images.


It might be good for the macro to save the pseudo flat-field when it's generated, for future reference. When using a pseudo flat-field, I'm running the "Mean..." command with various radius values until I find one that sufficiently blurs the image, and then using that value when running the macro.

Thanks to @mathew & @AmberWise for their input and guidance!


image-processing dust silica pm microscope passive-particle-monitors passive-pm microscopes



That's a great writeup! thanks Simon.

If anyone needs an image set to play with, please e-mail me, and I can share the files. They're too large to host here.

Although we are hopeful that Simon's strategies listed here can get us to a particle count, if they aren't sufficient, Rongjun Qin recommended we look at a more involved method not currently in ImageJ:

Rongjun Qin, Xin Huang, Armin Gruen and Gerhard Schmitt (2015). Object-based 3-D Building Change Detection on Multitemporal Stereo Images. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing. 5 (8), 2125-2137

Hi! Is Simon's macro still at the URL listed above?

Or is there a more up-to-date version, potentially open sourced? Thanks!

Are there any sample input/output sets from going through this process, manually or automatically?

I was thinking that a good test/spec might include at least one sample set with:

Input: an image set that's been manually processed

Output: a results.csv and area.csv based on a manual processing of the input image

One of the sample sets could be from the original cited research, ideally -- but we could also include "difficult" image sets to ensure the method developed robustly replicates the manual method.

It'd also be helpful to run a new set through as well, where the outputs are not known by the developer of the automated process, to see if it can generate close enough outputs without knowing them in advance.

Then, a good test of the success of an automated system would be based on how close the output "results.csv" and "area.csv" are, in comparison to the supplied ones.
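For example, a hypothetical acceptance check along those lines might look like this in Python; the column names and tolerance here are placeholders, since the actual columns depend on the macro's output:

```python
import csv

def results_match(path_a, path_b, tolerance=0.05):
    """Compare two results CSVs row by row: numeric fields must agree
    within a relative tolerance, other fields must match exactly.
    (Hypothetical acceptance test; not part of the macro.)"""
    with open(path_a, newline="") as fa, open(path_b, newline="") as fb:
        rows_a, rows_b = list(csv.DictReader(fa)), list(csv.DictReader(fb))
    if len(rows_a) != len(rows_b):
        return False
    for ra, rb in zip(rows_a, rows_b):
        for key in ra:
            va, vb = ra[key], rb.get(key)
            try:
                na, nb = float(va), float(vb)
                if abs(na - nb) > tolerance * max(abs(na), abs(nb), 1.0):
                    return False
            except (TypeError, ValueError):
                if va != vb:
                    return False
    return True
```

A per-column tolerance would probably be better in practice, since a particle count should match exactly while measured areas can reasonably drift a few percent.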

Great detail on the stepwise work here, thanks!

There is a large folder of sample image sets available. I put a smaller set together here:

@warren: The link above was the most recent version, though I just added a GNU GPLv3 license and created a github repository:

Let me know if you have questions or just hack away!

(@mathew, does Public Lab have preferred best practices for software development and licensing?)
