Dear community members,
I am currently involved in developing a hydroponics prototype kit for assessing plant growth in the Netherlands. It is an open science initiative in collaboration with ESA, aimed at measuring the optimum environmental conditions for plants. We use a Raspberry Pi to measure all kinds of environmental data with the kit. I have a couple of challenges:
- I'm new to the field of hyperspectral imaging and found the Infragram initiative very exciting. I use a Pi NoIR camera with a blue filter, but I haven't managed to get the kind of images that allow for assessing photosynthesis in plants. Is anyone else working with the Pi NoIR camera? I would prefer to read some documentation if it is available, or to get in touch with a contact person to discuss further details. In our case we have a controlled environment and can construct a specific background if needed for optimum pictures. Below is a picture taken with the Pi NoIR camera (taken outside of the kit); a minimal sketch of the NDVI calculation we are aiming for follows this list.

- We would like to add water quality testing, mainly electrical conductivity (EC) and pH. Most sensors that can interface with a Raspberry Pi are very expensive, and we are looking for cost-effective solutions.
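To make the imaging question concrete, here is a minimal sketch of the Infragram-style NDVI calculation we are aiming for. It assumes a blue-filtered Pi NoIR photo in which the red channel records mostly NIR and the blue channel records mostly visible light; the file names are just placeholders.

```python
import numpy as np
from PIL import Image

# Load a blue-filtered Pi NoIR photo (placeholder file name).
# With a blue filter, the red channel is dominated by NIR and the
# blue channel by visible light -- an assumption, not a given.
img = np.asarray(Image.open("noir_blue_filter.jpg"), dtype=float)
nir = img[:, :, 0]   # red channel ~ NIR
vis = img[:, :, 2]   # blue channel ~ visible

# NDVI = (NIR - VIS) / (NIR + VIS), avoiding division by zero.
ndvi = (nir - vis) / np.maximum(nir + vis, 1e-6)

# Scale from [-1, 1] to [0, 255] for a quick grayscale preview.
preview = ((ndvi + 1.0) / 2.0 * 255).astype(np.uint8)
Image.fromarray(preview).save("ndvi_preview.png")
```

If the filter and channel assumptions hold, healthy foliage should come out bright in the preview; so far our results do not look like that, which is why we are asking.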
Thanks in advance!
Kind regards,
Sidney (on behalf of AstroPlant)
Using two cameras solves one of the two problems I described (mixed NIR and VIS). That problem can also be solved by measuring or making assumptions about the proportion of NIR and VIS in the mixed channel(s). This requires some data about the sensitivity of individual channels to particular wavelength bands. Some cameras have been characterized in this way. The sensor in the RPi camera has been characterized in the visible part of the spectrum, but I have not seen results for the NIR portion. A haphazard subset of consumer cameras has been characterized in both the VIS and NIR. You could apply those results to other cameras and this might get you as close as you need to be.
The other problem, that cameras are not as sensitive to NIR as to VIS, can be solved using the same data that solves the first problem. You need to know how much to inflate the NIR values to compensate for the sensor's lower sensitivity in that range.
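To illustrate both points, here is a rough sketch of what using channel-sensitivity data could look like. The sensitivity numbers and the NIR gain below are made-up placeholders, not measurements of the Pi camera; the point is only the arithmetic: treat each channel as a weighted mix of VIS and NIR, solve for the two components, then scale NIR up to compensate for the sensor's weaker response there.

```python
import numpy as np

# Hypothetical per-channel sensitivities (what fraction of each
# channel's signal comes from VIS vs. NIR); placeholder values only.
#   red  = 0.10 * VIS + 0.90 * NIR
#   blue = 0.80 * VIS + 0.20 * NIR
A = np.array([[0.10, 0.90],
              [0.80, 0.20]])

def unmix(red, blue, nir_gain=2.0):
    """Estimate pure VIS and NIR from two mixed channels.

    red, blue : float arrays of the same shape (raw channel values)
    nir_gain  : assumed factor compensating for lower NIR sensitivity
    """
    mixed = np.stack([red.ravel(), blue.ravel()])   # 2 x N system
    vis, nir = np.linalg.solve(A, mixed)            # invert the mixing
    nir = nir * nir_gain                            # boost the weak NIR signal
    return vis.reshape(red.shape), nir.reshape(red.shape)

# Single-pixel example:
vis, nir = unmix(np.array([120.0]), np.array([90.0]))
ndvi = (nir - vis) / (nir + vis)
```

With real sensitivity data for the camera in question, the matrix and gain would come from the characterization rather than being guessed.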
Ned Horning has been working on a workaround for both of these problems that involves placing targets of known spectral reflectance in the bands of interest in each photo. You then figure out how much the brightness of pixels in the targets differs from what it should be, and use that information to adjust all the other pixels in the photo. This is more straightforward if the photos are captured in raw format, so you avoid the heavy processing (gamma correction, color balance, etc.) that happens when the camera makes JPEGs. I think this method can be derailed by the issue of mixed NIR and VIS in each channel, because different parts of the photographed scene can have different proportions of NIR and VIS, so you never really know what that proportion is for any pixel. But maybe it works when applied only to the foliage parts of an image, where the NIR:VIS ratio is more predictable.
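As a rough illustration of the target-based idea (not Ned's actual code), here is a sketch: sample the pixels covering each target, fit a linear mapping from observed brightness to known reflectance, and apply that mapping to the whole band. The target reflectances and pixel regions below are invented placeholders.

```python
import numpy as np

def calibrate_band(band, target_regions, known_reflectance):
    """Fit observed target brightness -> known reflectance, apply to band.

    band              : 2-D float array for one channel (e.g. NIR)
    target_regions    : list of (row_slice, col_slice) covering each target
    known_reflectance : reflectance values for those targets, same order
    """
    observed = [band[r, c].mean() for r, c in target_regions]
    # Linear fit: reflectance = gain * observed_brightness + offset
    gain, offset = np.polyfit(observed, known_reflectance, 1)
    return gain * band + offset

# Invented example: a synthetic band with two gray targets of known
# 10% and 60% reflectance placed in opposite corners.
band = np.full((100, 100), 120.0)
band[0:10, 0:10] = 40.0      # dark target pixels
band[0:10, 90:100] = 200.0   # bright target pixels
regions = [(slice(0, 10), slice(0, 10)), (slice(0, 10), slice(90, 100))]
calibrated = calibrate_band(band, regions, [0.10, 0.60])
```

As noted above, this works best on raw images, and the mixed NIR/VIS issue means the correction is most trustworthy on parts of the scene, like foliage, where the NIR:VIS mix is roughly constant.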
Just wanted to say this looks really cool!