Following the Jan 9th air quality open call, I wanted to see what can be done with a microscope slide image of airborne particles. Inspired by the work started by Mathew and Stevie a couple of years ago, I set out to get a similar process running in Python using OpenCV and scikit-image (skimage). Starting from an old coin-detection tutorial, I repurposed it for a single slide example. Once more example images are obtained this process could be made more robust; it could also be a great candidate for deep learning!
Below is a code walkthrough (a rough code sketch of these steps follows the list):
1. Load the image and crop out the area with the scale on it.
2. Use Sobel edge detection to find particles.
3. Use a simple threshold to binarize the edge image.
4. Label the binarized features.
5. Show the area distribution of the particles.
6. Sort and select only features which are larger than sizeTh (4 px).
- NOTE: I made a lazy assumption that each pixel is 1 um; this parameter can be adapted in the code to show the real scale. This is a pixel scale, NOT um!
Largest:
Smallest:
7. TBC: use fractal dimension to detect whether a particle is round or jagged; find a way to avoid falsely detecting two nearby particles as one, and to avoid sections of large particles being re-identified by the label function.
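Here is a minimal sketch of steps 1–6 in Python with scikit-image; the file name, crop window, and threshold values below are placeholders rather than the ones used for this slide:

```python
# Rough sketch of the walkthrough; file name, crop window and thresholds are placeholders.
import matplotlib.pyplot as plt
import numpy as np
from skimage import io, color, filters, measure

# 1. Load the image and crop out the area containing the scale bar.
img = io.imread("slide_example.png")                    # placeholder file name
gray = color.rgb2gray(img) if img.ndim == 3 else img
gray = gray[0:900, 0:1200]                              # placeholder crop window

# 2. Sobel edge detection.
edges = filters.sobel(gray)

# 3. Simple threshold to binarize the edge image.
binary = edges > 0.05                                   # placeholder threshold

# 4. Label the binarized features.
labels = measure.label(binary)
props = measure.regionprops(labels)

# 5. Area distribution of the detected particles (pixel units).
areas = np.array([p.area for p in props])
plt.hist(areas, bins=50)
plt.xlabel("area (px)")
plt.ylabel("count")
plt.show()

# 6. Keep only features larger than sizeTh (4 px, as in the note above).
sizeTh = 4
kept = sorted((p for p in props if p.area > sizeTh), key=lambda p: p.area)
if kept:
    print("smallest kept:", kept[0].area, "px   largest kept:", kept[-1].area, "px")
```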
Hi, some days ago I talked about this topic with a friend of mine... the use case is ongoing pollen analysis. Do you know whether there are pollen microscope images which can be used to classify a particle as pollen... and even better, also to classify which tree/plant it comes from?
This sounds very interesting, but I don't have any raw data. Maybe join the Tuesday open call? I think this can be a great study case for ML/DL. Googling a bit got me to this 1939 post from Popular Science: https://books.google.co.il/books?id=7SwDAAAAMBAJ&pg=PA188&as_brr=1&cd=2&redir_esc=y#v=onepage&q&f=false
Hi, I'm trying to think how we can support and connect this work to the microscope team more -- could we post a call for images to try to run through this workflow? Both reference images and test images from DIY microscopes?
Maybe as a follow-on to this question: https://publiclab.org/questions/gretchengehrke/09-21-2017/is-it-possible-to-discern-jagged-from-rounded-particles-using-a-diy-microscope
In the comment section of the post linked below there is a link to a large cohort of images, though I'm not sure if they represent airborne dust particles. Having looked at the posts from the last couple of years by many Public Lab collaborators, I feel that this issue has been dealt with rather well, so I'm not sure if the small code snippet I wrote is that relevant. That being said, I will be happy to upload it somewhere; is there a section of the Git repository where it should go?
https://publiclab.org/notes/SimonPyle/05-13-2016/automating-imagej-for-particle-image-analysis
Oh yes please -- maybe make a new repository for now? Or use http://gist.github.com if it's just a few files.
Could you link to it from here? I think having a second implementation -- and a more standalone one -- could be quite valuable.
For now, my code and image excerpt can be found here, just until I figure out how to have two Git users on my machine: https://drive.google.com/drive/folders/1X0IK45IXxDBqPuxKQUqOQp-5BeLDu9zw?usp=sharing
@amirberAgain - a friend just posted this question, and I'm curious if you think any of the steps you wrote out are applicable to that kind of analysis as well -- https://publiclab.org/questions/jlev/02-01-2018/how-can-i-identify-bits-of-plastic-from-the-beach-in-an-image
@dakoller it'd be great to hear more about your interest in pollen too -- would you like to post it as a related question using the link under your comment?
In general, I'm trying to think through how these steps could be laid out one by one, for potentially solving in a modular way, like using ImageSequencer, in parallel to what @amirberAgain has done in Python -- with an eye to reusability across different similar challenges -- pollen, microplastics, dust. What do you think?
@amirberAgain as @warren suggested, I am trying to implement this system in JS for Image Sequencer. We already have modules for Canny edge detection (which already includes thresholding). Can you please provide a little insight into what metadata we want to associate with the image, and how we should go about this? Thanks a ton.
@tech4gt note that the Canny edge detection is only an important initial part of the process. The actual locations and sizes are extracted using the label function of scikit-image's morphology module: http://scikit-image.org/docs/dev/auto_examples/segmentation/plot_label.html
I don't know how to apply scikit-image in JS.
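In Python the labelling step looks roughly like this (toy binary array here, not one of the slide images):

```python
import numpy as np
from skimage import measure

# Toy binary image with three separate blobs.
binary = np.zeros((8, 8), dtype=bool)
binary[1:3, 1:3] = True
binary[5:7, 2:5] = True
binary[3:5, 6:8] = True

# Each connected component gets its own integer label,
# and regionprops reports per-particle location and size.
labels = measure.label(binary)
for region in measure.regionprops(labels):
    print(region.label, region.area, region.centroid)
```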
@amirberAgain What metadata are we trying to get from the image, like area or perimeter? Also, will the scale already be attached to the image? That part is not clear to me.
@amirberAgain I found this library; can it be of help? https://github.com/PorkShoulderHolder/morph
Sorry, but I don't think this will provide an "out of the box" solution. I noticed that the reference I gave you was not the right one; have a look at this one: http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.label
@amirberAgain can you please also tell me what data we want to extract from the image, like area or number of particles? Also, will the input image already have a scale on it?
@tech4gt I think @stevie or one of the Microscopy group participants will be able to better explain what they are looking for. I believe a list of particles, their sizes (in pixels), and possibly a measure of round vs. jagged would be great, but they should be the ones to address this.
Oh, thanks a ton :D
@stevie can you please guide me a little here: what exact data are we looking for from the image, and is the scale already attached to the input image? Thanks :)
Hi! Yes! Particle size is of interest; shape (round vs. jagged), as mentioned, is interesting as well.
Adding in @dswenson and @partsandcrafts as well.
@stevie is the scale already attached to the input image?
For the microscope images, I don't think all the slides have a scale on them. It's not part of the setup to get it up and running, but you could use a calibration slide.
@stevie so I can start by assuming a scale, and later we can modify the code to incorporate the calibration slide, maybe?
I think that sounds good!
BTW FYI: I started looking into contour operations on selected particles as a way to evaluate how round or jagged each particle is; more here: http://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_contours/py_contour_features/py_contour_features.html
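For example, something along these lines (synthetic test shape; 4πA/P² is just one common roundness measure, not necessarily the final metric):

```python
import cv2
import numpy as np

# Synthetic binary image with a single round blob.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.circle(img, (100, 100), 40, 255, -1)

# findContours returns (contours, hierarchy) in OpenCV 4 and
# (image, contours, hierarchy) in OpenCV 3, so take the second-to-last item.
contours = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
for cnt in contours:
    area = cv2.contourArea(cnt)
    perimeter = cv2.arcLength(cnt, True)
    # 4*pi*A/P^2 is ~1 for a circle and drops toward 0 for jagged outlines.
    circularity = 4 * np.pi * area / perimeter ** 2 if perimeter > 0 else 0
    print(f"area={area:.0f} px^2, perimeter={perimeter:.1f} px, circularity={circularity:.2f}")
```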
@amirberAgain I have come up with a system for our JS project to achieve basic particle detection. This is my first attempt; please provide your review.
First we apply edge detection to the image to binarize it. We can have foreground and background pixel values associated with the output (since we decided that the image can be on a black background as well).
We can then apply connected-component labelling to the edge-detected image (I'll write a new module for this, which will take in the foreground and background pixel values and a binary image).
We can then count the number of pixels in each component, which will give us the perimeter in pixels.
For area I have devised a method (though it will not be exact if particles are concave): we run it on each component to fill the component, then count the number of foreground pixels in each filled component to get the area in pixels² (squared).
Area and perimeter can then be converted to actual dimensions using a given scale (see the Python sketch below).
Algorithm:
Thanks :)
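For reference, here is a rough Python equivalent of the steps described above, using scipy/scikit-image (file name and threshold are placeholders; the actual Image Sequencer implementation in JS would of course look different):

```python
from scipy import ndimage
from skimage import io, color, filters, measure

img = io.imread("slide_example.png")                    # placeholder file name
gray = color.rgb2gray(img) if img.ndim == 3 else img

# Edge detection + threshold -> binary edge image.
edges = filters.sobel(gray) > 0.05                      # placeholder threshold

# Connected-component labelling of the binary image.
labels = measure.label(edges)

for region in measure.regionprops(labels):
    # Fill the component so interior pixels count toward the area (px^2).
    filled = ndimage.binary_fill_holes(region.image)
    area_px = int(filled.sum())
    # Estimated perimeter in px; a given scale converts both to real units.
    perimeter_px = region.perimeter
    print(region.label, area_px, perimeter_px)
```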
Hi @tech4gt, sorry for taking so long to get back to you. It sounds like you have a good idea of how to approach this problem, but I think that writing a full end-to-end implementation might not be the best use of your time (unless you want to dive into JavaScript image analysis). If you look at the documentation of the scikit-image Python package, specifically under morphology, you should be able to see how this was solved previously. Usually these packages are based on one or more algorithms published in papers, and rely on other, more basic work as well. Have a look here for documentation on the way the label function works: http://scikit-image.org/docs/dev/api/skimage.morphology.html#skimage.morphology.label
I don't mean to discourage you, but to suggest that this might have been solved previously in an easy-to-implement way.
As for the step where you count the number of pixels defining the perimeter: if you compare that with the area of the circle enclosing the shape, you should get a good measure of how jagged the particle is.
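To make that concrete, here is one possible way to compare a particle with its enclosing circle using OpenCV (a sketch only, with a synthetic jagged test shape; the exact metric is open to discussion):

```python
import cv2
import numpy as np

# Synthetic star-like (jagged) test shape.
img = np.zeros((200, 200), dtype=np.uint8)
pts = np.array([[100, 30], [115, 85], [170, 85], [125, 115], [145, 170],
                [100, 135], [55, 170], [75, 115], [30, 85], [85, 85]], dtype=np.int32)
cv2.fillPoly(img, [pts], 255)

contours = cv2.findContours(img, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
cnt = contours[0]
area = cv2.contourArea(cnt)
(_, _), radius = cv2.minEnclosingCircle(cnt)
circle_area = np.pi * radius ** 2

# A ratio near 1 means the particle fills its enclosing circle (round-ish);
# a much smaller ratio suggests a spiky/jagged outline.
print(f"particle area / enclosing circle area = {area / circle_area:.2f}")
```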
This example can now be run as a Colab IPython notebook! https://colab.research.google.com/drive/1yC5VNCi2gfwMr57jMXkK5CN51nosKONk
@amirberAgain that sounds fair; actually I got a little carried away. I'll go through the algorithms you mentioned for calculating area. That means the general workflow is fine, right? I'll implement up to connected-component labelling and then go through the algorithms you mentioned; would that be alright? CC @warren
I'd recommend you try and see how to interface with OpenCV; if it works, great, otherwise implement the lower-level algorithms. As for the order, and whether we will discover that something additional is needed, it's hard for me to promise anything.
@amirberAgain I'll try that and share results here as soon as possible :)
@tech4gt I recently saw that OpenCV has a JS implementation, definitely worth checking this out: https://docs.opencv.org/3.4.1/d5/d10/tutorial_js_root.html
Wow, this is very impressive. It may also be interesting because @maggpi will be using some OpenCV, I believe, and potentially the work @maggpi does could be reproduced in Image Sequencer via an OpenCV module or set of modules.
Thanks @amirberAgain!!! 😃
Thanks @amirberAgain @warren, so should I stop working on the ground-up algorithm?
@warren, I don't know JS at all and am not versed enough in using the RPi, but there are some good sources for the RPi which I pointed @maggpi to on her page. I hope that once she gets up and running with it I'll be able to learn from her, and that we will write up some summary/comparison.
No, not at all! I think that for each step you described above, it's possible to look at OpenCV.js to see if there's an available module or feature which could be applied to just that step. That way, approaching the overall project could generate a set of useful OpenCV-based modules for use in other projects too -- how does that sound?
But there may be cases where integrating OpenCV.js is not feasible -- we'll just have to find out! I think your list of steps above @tech4gt could make for a good planning issue, with ref. to the OpenCV.js library as a possible implementation of some of the steps.
@warren I get it!! We try to use OpenCV as far as possible and base our modules on it, and if that's not possible we build it from the ground up!! Super excited :)
This is getting more and more interesting!! Can't wait to get started!!!
Great conversation. I don't really have any experience, but I have tried to research the topic and follow through on @amirberAgain's comments.
My comments:
-Python seems to offer several options that support computer vision processing. These include Pillow/PIL (Python Imaging Library) (http://python-pillow.org/), Matplotlib (https://matplotlib.org/), NumPy (http://www.numpy.org/), SciPy (https://www.scipy.org/), OpenCV (https://opencv.org/) and picamera (http://picamera.readthedocs.io/en/release-1.12/index.html#). Since these different packages can work together and have overlapping features, there seem to be multiple ways to analyze/process images. I assume the same is true for JavaScript, but I have not found equivalent code that ranges from camera control to scientific processing. For example, I have not been able to find JS code that controls camera functions the way picamera does.
-What may be more important than JS vs. Python is the ability to maintain image control/integrity. By image integrity I mean the ability to understand all the preprocessing steps (everything from white balance to image compression) that occur when an image is recorded. I have learned from Adobe Photoshop processing that post-production flexibility works best when there is an unprocessed image. I suspect the same is true when analyzing images for scientific data.
-One of the goals of my GSoC project is to understand the benefits of using unprocessed images. Since "OpenCV uses numpy arrays as images and defaults to colors in planar BGR" (http://picamera.readthedocs.io/en/release-1.12/recipes2.html), collecting BGR seems like the best initial image collection approach. It's also likely that different formats may work best for different applications, for example high-resolution compressed JPG for microscope images, BGR for Infragram (https://publiclab.org/wiki/raspberry-pi-infragram), and Bayer images for measuring the color smears produced by spectrometers (https://publiclab.org/w/lego-spectrometer).
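As a small illustration of the BGR point (the file name below is a placeholder): OpenCV reads images as NumPy arrays in BGR order, so they need converting before handing them to matplotlib or scikit-image, which expect RGB.

```python
import cv2
import matplotlib.pyplot as plt

# OpenCV loads images as NumPy arrays in BGR channel order.
bgr = cv2.imread("capture.jpg")                 # placeholder file name
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# matplotlib (and scikit-image) expect RGB, so convert before displaying.
plt.imshow(rgb)
plt.axis("off")
plt.show()
```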
That's a great review; a few points though:
- The list of packages is missing scikit-image (http://scikit-image.org/), which is great for a bunch of high-complexity algorithms.
- I think that each of these packages has capabilities where it is better than the others, and the overlap between them is not that large.
- I agree that being able to control image parameters is of great importance, but it's not at all simple (or even possible in some cases).
@maggpi - perhaps start a wiki page with a list of image processing packages for later use? It would be worthwhile to have some information regarding which OS/distro can run them, what programming language they are written in, etc.