Question: How can a spectrometer's wavelength resolution be measured?

by warren | August 22, 2016 21:11 | #13385


What I want to do or know

We're looking to post specs (har har) for the Desktop Spectrometry Starter Kit, and I want to post that (to my best estimate), if well constructed, the DSSK alone can get approximately 2 nanometer resolution -- meaning that you can distinguish two peaks which are 2 nanometers apart.

See https://publiclab.org/notes/cfastie/10-06-2015/twin-peaks-tb-or-hg for some discussion of these two peaks!

How might we ask a bunch of people with properly constructed spectrometers to report their resolution in a standardized way?



1 Comment

Jeff, those are really two different questions.

1) Measuring instrument resolution:

A) Use a single, narrow-BW laser (if that BW is known to be significantly better than the resolution of the spectrometer) and calculate the FWHM of the laser peak as detected by the spectrometer.

B) Use two (2) narrow-BW laser sources of known wavelength, a few nm apart, and look at the ability to resolve the two peaks. B would be much harder to do.

Caveat: Even with A, it is necessary to have some clear source of information about the actual BW of the signal. While lasers are generally very narrow band, some may be noisy, and the time integration of the spectrometer could 'blur' the signal -- an issue if the signal BW were close to the spectrometer BW. If the laser's BW were an order of magnitude better than the spectrometer's, then the laser's exact spec is less important -- you just need to know something significant about its BW.
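As a rough illustration of approach A, here is a minimal Python/NumPy sketch of an FWHM estimate from a calibrated spectrum (wavelength and intensity arrays). The function name fwhm_nm and the assumption of a dark-subtracted, near-zero baseline are mine, not part of any PLab tool.

```python
import numpy as np

def fwhm_nm(wavelength_nm, intensity):
    """Estimate the FWHM (in nm) of the dominant peak by linear interpolation
    at half maximum. Assumes numpy arrays and a baseline near zero
    (i.e. a dark/background spectrum has already been subtracted)."""
    i_peak = int(np.argmax(intensity))
    half = intensity[i_peak] / 2.0

    # Walk outward from the peak to the first samples at or below half maximum.
    left = i_peak
    while left > 0 and intensity[left] > half:
        left -= 1
    right = i_peak
    while right < len(intensity) - 1 and intensity[right] > half:
        right += 1

    # Linearly interpolate the half-maximum crossing between two samples.
    def crossing(i_lo, i_hi):
        x0, x1 = wavelength_nm[i_lo], wavelength_nm[i_hi]
        y0, y1 = intensity[i_lo], intensity[i_hi]
        return x0 + (half - y0) * (x1 - x0) / (y1 - y0)

    return crossing(right - 1, right) - crossing(left, left + 1)
```

With a laser pointer as the source, the width returned is essentially the instrument's own resolution, per the caveat above.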

[ Specs of laser pointer BW are very hard to find, but one reference claims the typical laser pointer at 630nm (red) has a spectral linewidth of 100MHz (i.e. the 'noise bandwidth'), which converts to ~0.0001nm. While there is likely to be significant variation between laser devices, relative to a PLab spectrometer a laser pointer would appear to be a near-ideal source. LEDs, of course, will not work because their BWs can be 10-20nm. ]
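For what it's worth, that conversion follows from delta-lambda = lambda^2 * delta-nu / c; a quick check with the quoted numbers:

```python
c = 2.998e8           # speed of light, m/s
lam = 630e-9          # red laser pointer wavelength, m
delta_nu = 100e6      # quoted spectral linewidth, Hz

delta_lam_nm = lam**2 * delta_nu / c * 1e9
print(delta_lam_nm)   # ~0.00013 nm -- roughly the 0.0001 nm figure above
```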

[ Note: When the FWHM bandwidth value is close to the raw data resolution (e.g. the laser spike is 2-3 "pixels" wide), determining the BW is not as simple as just clicking an "FWHM button" to get a number, because the calculated resolution in such a case will have a large uncertainty. Simple averaging or smoothing of a spectral plot will also be inaccurate. The best method is filtering a time-series of repeated measurements to get a probability curve, which then gives a nominal value +/- uncertainty. ]
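One simple stand-in for that time-series idea (a sketch only: mean and standard deviation of repeated FWHM estimates, using the hypothetical fwhm_nm() helper above; "frames" would be successive captures of the same laser line):

```python
import numpy as np

def fwhm_with_uncertainty(frames):
    """frames: list of (wavelength_nm, intensity) array pairs of the same source."""
    estimates = np.array([fwhm_nm(w, i) for w, i in frames])
    return estimates.mean(), estimates.std(ddof=1)  # nominal value, 1-sigma spread
```

A result like 1.3 +/- 0.5 nm would then be quoted conservatively (e.g. "2 nm"), as discussed in the next note.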

[ Note: It has also been suggested that a 3-point peak (meaning a peak response defined by 3 data points which "look" somewhat symmetrical) is equivalent to a Gaussian response curve defined by a much higher data resolution (e.g. 10 or 20 data points). This is simply NOT true. A set of just 3 points defining a peak is simply a "lucky" capture of 3 symmetrically-placed data points which contain significant uncertainty. To then extend that thinking and suggest that the FWHM can be extrapolated from two straight lines connecting the "left" and "right" points to the "mid-point" is also FALSE. Finally, suggesting that a calculated FWHM resolution of e.g. 1.264578 nm expresses the ACTUAL resolution is also blatantly FALSE, because reporting the value that way suggests the additional digits have meaning -- they do NOT, again because of the uncertainty in the 3-data-point example. The correct value for resolution must be calculated from multiple measurements, using techniques which also extract a value for the uncertainty. If the uncertainty were, say, 0.5nm, then it would not be possible to guarantee even (1.2 + 0.5 = 1.7nm) resolution, so a more conservative value must be assigned. Since the uncertainty in the example is 0.5nm, the resolution might be assigned a value of 2nm; note that this is just '2' and not '2.0', because the '.0' implies an accuracy of +/- 0.1 which, again, would simply be false. Clearly, playing "loose with the numbers" is a quick path to destroying a measurement's credibility; it is simply wrong. ]
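To make the 3-point argument concrete, here is a small Monte Carlo sketch (illustrative numbers only: a Gaussian line of 2 nm true FWHM, sampled at roughly 1 nm per pixel with a little noise) showing how a "3-pixel-wide" FWHM estimate behaves:

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.arange(520.0, 560.0, 1.0)             # ~1 nm per pixel (assumed)
true_fwhm = 2.0
sigma = true_fwhm / 2.3548                    # FWHM = 2*sqrt(2*ln 2)*sigma

estimates = []
for _ in range(1000):
    center = 540.0 + rng.uniform(-0.5, 0.5)   # peak lands randomly within a pixel
    signal = np.exp(-0.5 * ((wl - center) / sigma) ** 2)
    noisy = signal + rng.normal(0.0, 0.03, wl.size)
    estimates.append(fwhm_nm(wl, noisy))      # fwhm_nm() from the sketch above

estimates = np.array(estimates)
print(estimates.mean(), estimates.std())      # note the scatter and bias relative to the true 2 nm
```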

[ Note: The simple formula for diffraction-limited resolution (the monochromator bandwidth limitation of a diffraction grating), delta-lambda = lambda / (m x N), where N is the number of grooves illuminated (groove density x illumination width at the grating) and m is the diffraction order, is best thought of as a lower limit because it does not take into consideration any of the other optical characteristics (limitations) of the system -- such as 1) slit distortion, 2) slit illumination, 3) the detector (camera lens) optics, or 4) the digital sensor's pixel resolution and image noise. So, if the grating resolution calculation suggests 2nm, the system (like the PLab spectrometer) will likely be worse. ]
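For reference, a back-of-the-envelope version of that lower-limit calculation (the ~1350 lines/mm figure for a DVD-R grating and the 0.5 mm illuminated width are assumptions for illustration, not measured PLab values):

```python
lam_nm = 550.0            # wavelength of interest, nm
order = 1                 # diffraction order m
grooves_per_mm = 1350.0   # approximate groove density of a DVD-R grating (assumed)
illum_width_mm = 0.5      # assumed illuminated width on the grating

N = grooves_per_mm * illum_width_mm     # number of grooves illuminated
delta_lam_nm = lam_nm / (order * N)
print(delta_lam_nm)       # ~0.8 nm -- a lower bound; the full system will be worse
```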

2) Crowd measurement of resolution: This is, by definition, much less accurate and much less reliable -- and cannot be used to set a device spec, period. However, if the purpose were exclusively to have a simple means of checking whether the device appeared to be "working properly", that would be different. In that case, just use a CFL and look for the green double peak. Then provide 2 measurement limits: a) the FWHM of the entire double peak (as if it were one peak) and b) a minimum spec on the "depth" of the "notch" between the two peaks, as a ratio of either the lower of the two peaks or the average of the two peaks. Caveat: Since CFLs can differ and their double green peaks differ, such "test limits" (NOT SPECS) would have to be set loose enough to account for CFL variation but tight enough to assure the user the device was built properly -- i.e. based on a pile of experimental data (e.g. 10 users build 10 devices and then you measure them all).
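If this crowd check were scripted, the two limits might be computed along these lines (a sketch only, reusing the hypothetical fwhm_nm() helper above; the ~535-555 nm window around the green doublet and any pass/fail numbers would have to come from that pile of experimental data):

```python
import numpy as np

def green_doublet_checks(wavelength_nm, intensity, lo=535.0, hi=555.0):
    """Return (a) the FWHM of the CFL green double peak treated as one peak and
    (b) the depth of the notch between the two peaks as a ratio to the lower peak."""
    sel = (wavelength_nm >= lo) & (wavelength_nm <= hi)
    wl, inten = wavelength_nm[sel], intensity[sel]

    combined_fwhm = fwhm_nm(wl, inten)                 # limit (a)

    # Find the two tallest local maxima, then the minimum between them.
    # (Assumes the doublet is at least partially resolved, i.e. two maxima exist.)
    maxima = [i for i in range(1, len(inten) - 1)
              if inten[i] > inten[i - 1] and inten[i] >= inten[i + 1]]
    p1, p2 = sorted(sorted(maxima, key=lambda i: inten[i])[-2:])
    notch = inten[p1:p2 + 1].min()
    notch_ratio = notch / min(inten[p1], inten[p2])    # limit (b): smaller = deeper notch

    return combined_fwhm, notch_ratio
```

A build would then "pass" if combined_fwhm and notch_ratio both fall inside limits set from the measured pile of devices -- test limits, not specs, as stressed above.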
