Public Lab Research note

GSoC Proposal SpectralWorkbench

by PascalW | March 11, 2014 19:50 | #10160

GSoC Proposal

Name: Pascal Winkelmann

E-Mail: (anti spam)

Affiliation: LMU Munich (Chemistry and Biochemistry)

Location: Munich, Germany

Projects I want to work on: SpectralWorkbench

Why I'm interested

As a chemist I deal with spectroscopy every day and know its importance! Before I went to university I often experienced the lack of spectroscopy tools for amateurs, which was frustrating to say the least. I was working on my own spectrometers and spectrometer software when I came across this project a while back, and I've stopped the development of my software for now since I think my time might be better spent contributing to SpectralWorkbench.

Commitment: I do not have any other major commitments for the coming summer and can devote my full attention to this great project.

Abstract: SpectralWorkbench provides a simple-to-use capture and analysis tool for visible-light spectroscopy. But the software hasn't reached the limits of what is possible in amateur spectroscopy. There are some basic features and many advanced features missing that could make SpectralWorkbench a great choice over commercial software. Some of those features may be:

- Auto calibration of spectra
- Auto detection of the sampling row
- HDR support
- DSLR raw support
- Proper processing of uploaded images
- …


I have good experience with many programming languages, primarily Mathematica, Matlab, Java, JavaScript, Perl, and Bash. I'm currently learning Ruby on Rails while I work through the code of SW to get a good overview, and I've found it to be very intuitive.

I worked on my own spectrometer software a while back and thus have a good understanding of the problems and solutions associated with the development of such software. Attached to this proposal are some examples of possible improvements to SW which should show that I have the necessary experience, and often even code ready, to improve on the current implementation of SW. I'm pretty flexible with the topic, and since I have code developed for many of the topics I could get things done quickly. I would appreciate your input on the importance of all the projects.

I would consider myself an expert in image processing which is a major advantage for the work on this project.

I have worked through the code on the repository for the past weeks so I'm already pretty familiar with the current status and ready to start programming.

Teamwork: I have good experience programming in small groups of about 3 people and with the GitHub workflow. I'm comfortable reading and reviewing other people's code.


This greatly depends on the project you want me to work on. As said, I'm very flexible and would like to work on the topics that are considered to be most important.

Particular Projects I would like to work on

Since I've been working on my own spectrometer software, I have plenty of code to address some of the open problems of SW, and I would like to present some of it in the following sections.

HDR Spectrum from Multiple Exposures:

The Problem: The dynamic range of consumer cameras is fairly limited and often not sufficient to capture emission spectra without considerable loss in detail. Very strong features get clipped while weak features aren't integrated long enough to be detected.

What I want to do: To overcome this issue I want to implement an algorithm to generate an HDR spectrum from multiple single exposures.

My attempt and results: A set of images at different, known exposures forms the basis for the calculation of an HDR image. I used the neon spectrum as an example where a high-dynamic-range approach can greatly improve the quality of the final spectrum.


I started by recording a set of neon spectra at different exposures. Five spectra are more than enough for the purpose, and three would probably have been OK as well. It is easy to see that the red and yellow lines are properly exposed in the first images while they are blown out in the images at the bottom. There is no single image where all parts of the spectrum are properly exposed.

Constructing an HDR image from a set of images is equivalent to recovering the real radiance at each pixel of the sensor plane. This information cannot be recovered from a single image since the response of the CCD sensor isn't linear over the available dynamic range. This behaviour is not entirely due to the sensor itself but also due to the post-processing of the captured raw data inside the camera.

The non-linear response function of the sensor can be recovered from a set of aligned images which were taken at different, known exposures. With this function, the radiance map and thus the HDR image can be recovered as well.
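To make the merging step concrete, here is a minimal sketch in JavaScript, assuming the pixel values have already been linearized with the recovered response function. The function name, the hat-shaped weighting, and the clipping thresholds are illustrative choices of mine, not SpectralWorkbench code:

```javascript
// Merge several exposures of the same scene into one radiance estimate.
// frames: array of equal-length arrays of 8-bit pixel values (0..255)
// exposures: relative exposure time of each frame
function mergeHDR(frames, exposures) {
  const n = frames[0].length;
  const radiance = new Array(n).fill(0);
  const weightSum = new Array(n).fill(0);
  // Hat weighting: trust mid-range pixels, ignore clipped or near-black ones
  const weight = v => (v <= 2 || v >= 253) ? 0 : 1 - Math.abs(v - 127.5) / 127.5;
  for (let f = 0; f < frames.length; f++) {
    for (let i = 0; i < n; i++) {
      const v = frames[f][i];
      const w = weight(v);
      radiance[i] += w * (v / exposures[f]); // scale back to radiance units
      weightSum[i] += w;
    }
  }
  return radiance.map((r, i) => (weightSum[i] > 0 ? r / weightSum[i] : 0));
}
```

A pixel that is clipped in the long exposure simply receives zero weight there and is recovered from the shorter exposures, which is exactly the behaviour needed for the blown-out red and yellow neon lines.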

The following two images show the resulting HDR spectrum (bottom, logarithmically tone-mapped) compared to the middle exposure of the captured set (top).

(Images: img2_comp.jpg, hdrGraph2.jpg)

Questions and next steps: Jeff and Dave did some tests with photo-printed stepped slits in a previous research note. Since photo printing also allows varying the print transparency, a slit could be designed which eliminates the need to take several exposures separately.

This approach can not only provide spectra with a much higher dynamic range but also greatly reduce the general problems with overexposure that many people seem to have.

Processing of uploaded Images

What I want to do: I noticed that uploaded images which were not captured with the SW interface simply get averaged to produce the corresponding spectrum. This often leads to worse spectra than necessary, especially because many people use curved CD gratings. I would like to introduce some post-processing for uploaded images to improve the quality of the extracted spectra.

My attempt and results: To illustrate the idea I put a little demo together: the following images show the comparison between a simply averaged spectrum and the spectrum of the post-processed image. The implemented algorithm is simple, fast, and based on autocorrelation. A serious implementation could improve on the quality shown in the demo!



(Input Spectrum blue, Output Spectrum red)
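The correlation-based alignment behind the demo can be sketched as follows; the function names and the integer-shift cross-correlation are my own simplification of the idea, not the demo's actual implementation:

```javascript
// Find the integer shift that best aligns a row with a reference row,
// scored by cross-correlation.
function bestShift(row, ref, maxShift) {
  let best = 0, bestScore = -Infinity;
  for (let s = -maxShift; s <= maxShift; s++) {
    let score = 0;
    for (let i = 0; i < ref.length; i++) {
      const j = i + s;
      if (j >= 0 && j < row.length) score += row[j] * ref[i];
    }
    if (score > bestScore) { bestScore = score; best = s; }
  }
  return best;
}

// Average the rows of a (possibly curved) spectral band after aligning
// each row to the middle row, instead of averaging them blindly.
function alignedAverage(rows, maxShift) {
  const ref = rows[Math.floor(rows.length / 2)];
  const out = new Array(ref.length).fill(0);
  for (const row of rows) {
    const s = bestShift(row, ref, maxShift);
    for (let i = 0; i < out.length; i++) {
      const j = i + s;
      out[i] += (j >= 0 && j < row.length) ? row[j] : 0;
    }
  }
  return out.map(v => v / rows.length);
}
```

With a curved CD grating the spectral lines drift horizontally from row to row; aligning before averaging keeps the peaks sharp instead of smearing them out.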

Questions and next steps: There are many more improvements that can be made, such as cropping.

Auto Detection of Sample Row

While studying the code I came across the unimplemented function to detect the sample row automatically and I played around with it. Here is the result:


The algorithm tries to find the best sample row based on local contrast and vertical frequency content in every sample line. The algorithm resembles sharpness detection methods used by digital cameras.
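A minimal sketch of such a contrast-based score, assuming a brightness-only image; this is a simplification of the idea, not the code from the repository:

```javascript
// Score each image row by its high-frequency content (sum of squared
// horizontal differences) and pick the highest-scoring row as the sample
// row -- similar in spirit to contrast-based autofocus in digital cameras.
function findSampleRow(image) {
  // image: array of rows, each an array of brightness values
  let bestRow = 0, bestScore = -Infinity;
  for (let r = 0; r < image.length; r++) {
    const row = image[r];
    let score = 0;
    for (let i = 1; i < row.length; i++) {
      const d = row[i] - row[i - 1];
      score += d * d; // sharp spectral lines produce large local differences
    }
    if (score > bestScore) { bestScore = score; bestRow = r; }
  }
  return bestRow;
}
```

Rows outside the spectral band are nearly uniform (dark), so they score close to zero, while a row crossing well-focused emission lines scores highest.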

Auto calibration

Many, if not the majority, of the spectra archived on the Public Lab website aren't properly calibrated. This is a big problem since uncalibrated spectra can't be compared reliably.

What I want to do: I've worked on projects where feature detection and pattern matching played an important role. The development of appropriate calibration functions shouldn't be too difficult considering the one-dimensional nature of the problem. I don't have examples for this topic yet but might get around to adding some here in the future.

The most common calibration spectra could be included in this method (CFL, neon, laser lines, …).
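As a sketch of the final fitting step, assuming the peak positions have already been matched to known reference lines (for example the CFL mercury lines near 435.8 nm and 546.1 nm), a least-squares linear pixel-to-wavelength fit could look like this; the names are illustrative, and the peak detection and matching steps are deliberately omitted:

```javascript
// Fit wavelength = a * pixel + b by ordinary least squares from matched
// (pixel position, known wavelength) pairs, and return the mapping.
function fitCalibration(pixels, wavelengths) {
  const n = pixels.length;
  const mx = pixels.reduce((a, b) => a + b, 0) / n;
  const my = wavelengths.reduce((a, b) => a + b, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (pixels[i] - mx) * (wavelengths[i] - my);
    den += (pixels[i] - mx) ** 2;
  }
  const a = num / den;
  const b = my - a * mx;
  return px => a * px + b; // pixel index -> nanometres
}
```

With more than two matched lines the least-squares fit also gives a residual, which could flag bad peak matches automatically.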


Hi, Pascal - this seems like a great project. Please check out some of the thoughts on digitization of images from my comment on Sreyanth's proposal; whether sample row selection is achieved through image processing or by plotting a cross section line through the image, the method of getting from the raw image to the processed image should be stored and reproducible; we've been calling these "recipes".

Sreyanth is worried that there is too much overlap in proposals, and I'm going to encourage both of you to widen your scope to make your proposals unique but helpful counterparts.

There's plenty of work to be done in SWB! My initial thought is that your interest is in exposure, clipping, and sample row selection, whereas Sreyanth is interested in calibration. Perhaps if each of you affords the other space and coordinates where your code "touches", this can work... if you each take on another sub-project as well, we can get an awful lot done this summer!

Pascal, These are some really good ideas you are proposing. I am very interested in multiple, bracketed exposures to get more dynamic range. This would lend itself to my workflow which involves taking still images with a Canon Powershot and uploading these later to SWB. It would be easy to instruct CHDK to take 3 or 5 bracketed photos of each spectrum. If SWB allowed dragging in 5 bracketed photos of one spectrum and did the analysis to make one spectral graph, it would add lots of power to my workflow, and produce much higher quality spectra.

I don't know how this could be applied to a more typical SWB user sending a video feed to SWB for capture. Unless the browser can control the web cam, the user would have to vary the exposure manually and the browser would have to capture frames of varying brightness. Can the user's computer adjust the exposure of the webcam? Can this be done remotely through the browser? I don't know enough about the system to understand how this would work. If it can be automated it could result in a fantastic improvement of spectral quality for most contributions to SWB.


Agreed, there is more than enough work here for two highly motivated contributors.

On your HDR example above, two observations: I think there is some conceptual error behind the plots. Notice the relative magnitude between the peaks in the top plot. Then notice that this ratio has actually decreased in the 'HDR' plot, and that the magnitude of those same peaks relative to the (now amplified) noise is similar. HDR should not produce a lower SNR. If the 'grass' (noise) comes up, then, relatively, so should the magnitude of the higher-SNR peaks.

You referenced setting "known exposure" levels though I'd not been aware the PLab webcam provided an interface to set the exposure. That would be helpful if it could, especially with good resolution. Presently, it is my understanding that the webcam has automatic exposure but that exposure is always operating at max sensitivity because the total light from just the spectral band in an otherwise black background is insufficient to cause a gain change. This is also why the background noise is high. With a lot of these PLab webcam spectrometer kits out there, there is a need to handle the images they produce. (See for an example.)

So, yes, I'd agree that several improvements are needed: 1) compensate for curved spectral bands, then, 2) HDR from full-image data to recover a higher SNR signal from clipped data and, then, 3) auto-cal from a CFL capture.

Cheers, Dave


I was pretty busy the last few days and have to admit I couldn't follow the discussion on Sreyanth's proposal too closely. But I have a long train ride planned for Sunday and will get to it.

I totally agree with you; it doesn't make much sense to have considerable overlap between the projects, and it's certainly not my intention to take anything away from anyone. I can't really be concrete here before I've read through the discussion on Sreyanth's proposal, but I'll try to update my proposal and make it more compatible with the work Sreyanth wants to do.


“If SWB allowed dragging in 5 bracketed photos of one spectrum and did the analysis to make one spectral graph, it would add lots of power to my workflow, and produce much higher quality spectra.” - Chris

I couldn't have said it better, my friend. That's why I would like to work on this project!

Some webcams allow exposure adjustments from the computer. I can't speak for the webcam used in the Public Lab spectrometers since I don't own one, but it doesn't really matter anyway. There is a much more promising approach: taking multiple exposures with the use of a "stepped slit".


You are a good observer, Dave. The reason the noise in the processed spectrum seems amplified is that it's not actually the HDR spectrum but a tone-mapped version. The intensity scale should be logarithmic! I've been sloppy here, sorry. I will improve this section when I'm back home on Monday.

We only need to know the relative exposure of all the images we want to merge into an HDR radiance map. With an implementation based on a "stepped slit", those relative exposures are known. Thanks for pointing me to the research note; another thing for my train ride home ;)

Thanks again for all the feedback,


Pascal, Thanks. Hope my posted note on HDR2 is readable.

Just a thought on the 'stepped slit'. I constructed a number of them with the idea of forming recognizable 'separate' spectral bands which could have "known" relative intensity differences -- I used 3dB and 6dB. While I did find some gradation in intensity, it was by no means definable steps. The results I've seen posted so far suggest this is not an easy thing to construct. However, those experiments are what led to just using the inherent intensity variation across the imaged band and measuring that curve, since there's nothing magical about steps other than convenience. I'm now more convinced that more accurate response curves are possible w/o modifying the hardware. I mentioned to Jeff that if he'd select a (full-image) spectrum with less than optimal, curved bands, I'd tweak my algorithm and post the results. I think such an algorithm is within reach and could be made rugged enough for general automated use.

Cheers, Dave

Would it work to have a constant width slit with an integrated neutral density gradient from 100% transmission at one end to something less than that at the other end? Varying slit width is the wrong way to vary intensity because it changes resolution. If the slits printed on clear film work, it should be possible to print a transmission gradient along the slit. The system then has to be set up so that the camera captures an image of the entire length of the slit. Optically it might work better to have a separate transmission gradient filter near the grating, but I don't know which position would work better (e.g., which side of the grating). It is not going to help resolution to have the light passing through all those extra films, but it might work. It seems wrong to me to make the entrance slit out of anything but two pieces of metal with nothing but air between them, but I guess I am just old fashioned.

With the desktop spectrometer, I think the user presses a key or clicks to capture a spectrum from the video stream. If the webcam can be controlled from the PC, the user could then press a key to change exposure and capture another image, then repeat for a third bracketed image. So it would not be fully automated, but still easy for the user.

Yes, resolution is affected by slit width; though much less than the intensity change which is related to the square area. I'd had some success in printing slit-image films and found small stepped-slits easier than graded-intensity slits so I opted for allowing a small effect on resolution. The slits were narrow - 8 to 20 mils, 3dB steps and I was including the gradient within the same film. (Yes, I prefer a couple of razor blades too...) Unfortunately, the camera images are much less well defined so I could not obtain a clear advantage. Then, when I measured the edge attenuation of just a regular slit and found a reasonably smooth curve, the option to use the existing hardware appeared more practical. There's now a lot of that hardware out there.....

Controlling the exposure is a difficulty -- I seem to recall it's all internal to the camera and not accessible. I was not able to find a way to do it with the little app that came with the camera. The camera has an active AGC which runs at full gain because most of the field is dark. So, high noise and sensitivity, but at least the gain is constant.

There's been some chatter that cameras are non-linear, but it's the post-processing algorithms that are that way. The sensors are linear, and simple tests I've done suggest that the webcam, in its present default mode, is relatively linear within the intensity range it is being used in. My point is that we could go a long way in improving the basic capture process by: 1) grab the full image, 2) auto-detect the full spectral band, 3) auto-correct for band curvature, 4) run HDR on the band (where some clipping is good) and, lastly, 5) auto-cal CFL spectra.


Couple other notes - curvature correction would be great and pretty consistent for still captures, but would it scale to live-capture... and would you do it in JavaScript, so our offline users could make use of it? Just curious.

Dave - linearity explored a bit:

Personally I feel the ND-filtered variable slit is a strong solution (perhaps with a diffuser in front of the spec to even out all the light?) and I worry that auto-exposure compensation will make any attempt to do separate captures hard to control. But I could be totally wrong; it's just a gut feeling.

Jeff, re: "...scale to live capture...". I'll have to guess that you mean observing spectral responses over time with the idea of generating a 'waterfall' plot. Assuming you need to do enough analysis on a spectral band to get useful data (as opposed to just looking at a single line of pixels and hoping 1) it isn't saturated and 2) there are no other anomalies), then yes -- though there would likely be a tradeoff: slower but more accurate data vs. faster but questionable data. Bad data is, well, bad data, so let the data points come at a lower rate if necessary.

In simple devices (i.e. PLab webcam designs) accurate exposure control is questionable at this point -- and the original webcam just runs 'open' at max gain so at least it's noisy but 'stable'.

Yes, I think we should, at least for now, just assume the image data is linear with intensity.

I found it difficult to obtain enough image clarity of stepped or multiple slits within the confines of the optics-less PLab system. That's when I discovered that just having an attenuation range over the spectral band makes it easier to extract data than attempting to create artificial steps of "known" intensity accurate enough to rely on the designed attenuation values. In fact, I failed to find a way to create such clearly defined steps in a spectral image (so as to be recognizable within the image) and was left with only creating a 'range' of intensity values. I do not presently see how to create accurate 'steps', but it is much easier to just measure what you get and let the software calculate the result from that -- i.e., as with HDR2.

As for curvature in the spectral lines: I found it difficult to generate such artifacts, though I've seen some posted. I used some posted images to generate artificial images to analyze and found I could program detection and correction by using well-defined peaks to "follow" within the full spectral band in the image. I'd suggest that this could be a "selectable feature" that might only be necessary under "poor conditions". Well-built devices shouldn't have curved spectral bands, I'd think.

Cheers, Dave

scale to live capture

I meant, computationally... could you do it in real-time, even on a mobile device?

For curvature, I think this happens quite often with a wide slit and a large diffuse light source, on smartphone spectrometers. An example might be

Depends on what you want to monitor. If the whole spectrum is dynamically changing, then maybe not, because the relationship between clipped peaks, linear peaks, and background does not necessarily remain a set of constant ratios (as it does with a fixed spectral band that is just optically attenuated, variably across the band, in some static way). If you wanted only to track a specific portion (like a peak of interest), then maybe the algorithm could be simplified (after the initial calculations) and need only work on 1/10th of the data. That said, if fixed-gradient attenuation becomes manageable, then the amount of data to process could drop significantly, which would improve processing bandwidth.

I'm tinkering again with slit attenuators for the smartphone spectrometer now that I have one. The smartphone camera (w/o SWB) appears able, at times, to actually focus on the slit -- which is what it should do. This is an important feature for the proposed GSoC work -- and it's especially important that the focus, whatever the setting, not change.

BTW: SWB does not work with an S4 active android because it defaults to the front camera instead of the back and there does not appear to be a switch in the app to switch cameras.

Also important: What does the current SWB-mobile do in terms of setting exposure? This could be a very big deal in terms of getting usable data from the image. Preferably the exposure would be automatically adjusted by the app, but triggered (like a reset) by the user once the setup is stable.

Yes, that's true about the curves with the smart-phone version and large-area light source. Actually, I think this is where a modification to the slit (shorter and variable-attenuation) might be very helpful. Curvature can be corrected, but collecting more of the same data just to fix the curve is not useful. Accepting curved lines while gathering attenuation data is useful.

Cheers, Dave

BTW: SWB does not work with an S4 active android because it defaults to the front camera instead of the back and there does not appear to be a switch in the app to switch cameras.

This is a Chrome problem; they've been dragging their feet on camera selection for many months. Firefox and Opera have camera selection interfaces though. We should add a message about it, but in the meantime if you could +1 this issue so it actually gets fixed in Chrome, that'd be great:

What does the current SWB-mobile do in terms of setting exposure?

There is no javascript interface for setting exposure; we work on the assumption that if there's relatively little light, it'll be at the max. If you think it'd help, we could have a blinky icon that encourages the user to hold it steady for a second before capturing?

Ok, I guess having the exposure default to max isn't too bad, but actually setting the exposure to max is better. I've seen evidence that the standard camera app adjusts the exposure when the spectrum intensity gets brighter. It's hard to see or know for sure from just looking, but for a spectrum app this would be bad.

Worse though is the 'auto focus'. I've tried a few camera apps and searched, but no app allows manual focus. I see glimpses of the camera focusing on the slit, but it doesn't remain focused near there. (This is a common complaint about all android camera apps). If the focus cannot be controlled, then this is a major issue because the change in focus will cause drastic changes in the spectrum curve shape.

It is absolutely necessary to have at least a fixed exposure and a fixed focus setting for the spectrum app; otherwise the results will vary widely and the data will be near useless.

Hmmm... could we put some kind of other fixed shape or something in the field of view that the focus would lock to, and that we could meter exposure compensation with?

(original comment seems gone...) ...I commented that I did try a pattern but that the auto-focus for the default camera appeared to focus momentarily and then drift off. I also commented that Google Chrome Beta is available and seems to run but that the camera selection for S4 android is still broken.

Edit: Yes, 'broken' only means that the code difference between Chrome and SWB should remain on the list and that using Chrome Beta did not, by itself, fix the issue. I'd thought it possible that the selection feature was there in the app and it was just that Chrome was ignoring the request.

I think auto-focus varies from phone to phone... I haven't had an issue myself with that but it'd be good to know which phones suffer from rapid auto-focus problems.

Re: Chrome Beta, come now, Dave - cut me some slack! They took almost a year to actually add the feature, it was only confirmed 6 weeks ago on an obscure issue tracker -- and they did it differently from all the other browsers. I've created an issue and we should be able to address this now, but "broken" is a little harsh, don't you think? We're doing our best! :-)

Yes, true. No slight intended. I edited my comment to note that 'broken' means only that the disconnect between the app and Chrome Beta is still there, so it should remain on the list. I'd wrongly guessed that Chrome was just ignoring the camera selection in the app and hoped that I could now use the mobile version. Hope the sharp GSoC guys will get a stand-alone app running.

Huh, very strange that the comment is gone! No worries though... trying to make more time to code in the next week or so too. GSoC is energizing and exciting!

I have camera detection running in a test version of the site... haven't been able to get the camera to switch over, though :-/

Jeff, I constructed a few more slotted filters which just bend a bit and fit in front of the smartphone slit. The camera focus difficulty limits the size of the slots because of blur. This then directly relates to the area of the light source -- which can illuminate multiple slots (i.e. of potentially varying attenuation). Since "wide" light sources are easier, the best option would be to have the camera pre-set to the slit's distance (or as close as possible) and leave it fixed -- since there would be no reason to want blur. Then, the slot dimensions could be further reduced, which allows smaller-area light sources to illuminate multiple slots. I've still not found a regular camera app that provides manual control over focus to allow the experiment. I'm wondering if there is a technical reason why manual focus is not available in camera apps -- perhaps because of mechanical constraints; but I'm just guessing as I don't know how they accomplish focus. Seems odd though. -Dave

Chrome camera switching partially working; need to debug on Chrome for Android though. Can you try it out?

Jeff, it seems to me that real-time curvature correction shouldn't be so hard. Estimation of the observed curvature only needs to happen periodically at most (perhaps even once would be sufficient) and should be a pretty simple thing to fit. After that it's just an interpolation problem. You could even leverage WebGL to offload this to the GPU as GPUs are very good at this sort of thing. Really though, I suspect doing it on the CPU would be just fine (although it might chew batteries pretty badly).
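A minimal sketch of the straightening step, assuming the per-column vertical offsets have already been estimated (for example by tracking one bright spectral line once, as suggested above); the names and the integer-offset approach are illustrative only:

```javascript
// Straighten a curved spectral band by shifting each column vertically.
// image: array of rows of pixel values; offsets[c]: integer shift telling
// how far column c's band sits below the reference row.
function straighten(image, offsets) {
  const h = image.length, w = image[0].length;
  const out = Array.from({ length: h }, () => new Array(w).fill(0));
  for (let c = 0; c < w; c++) {
    for (let r = 0; r < h; r++) {
      const src = r + offsets[c]; // pull the pixel from its curved position
      out[r][c] = (src >= 0 && src < h) ? image[src][c] : 0;
    }
  }
  return out;
}
```

Because the offsets are estimated once and then reused per frame, the per-frame cost is a single pass over the pixels, which is why real-time correction seems plausible even on the CPU.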

Hi, Pascal - I wanted to ask if it's OK for me to use your lead image as the SpectralWB Twitter feed profile background? If you have a version with attribution in it, or a preferred attribution, I'm happy to use that too.
