What does the vertical scale actually represent? It seems to be based on the sum of the RGB values. This means that the only way the plot exceeds ~33% is when two adjacent channels have values of 255, which means the source data is too bright.
Why not make the 100% level correspond to the maximum level at that color? Then a 650 nm laser line with an RGB value of (254, 0, 0) could be represented at 100% instead of around 30%.
Perhaps there is a need for scaling options for the y axis. It might also be useful to be able to turn off one or two color channels and scale the result appropriately. (I am not asking for this feature, only suggesting that it might be useful.)
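For illustration, here is a hypothetical sketch (not Spectral Workbench code) contrasting the sum-based scale described above with one possible reading of the per-color rescaling suggested here:

```js
// Hypothetical illustration only -- not code from Spectral Workbench.

// Sum-based scale: average the three channels, so a pure 650 nm line
// at (254, 0, 0) plots near 33%.
function sumBasedPercent(r, g, b) {
  return ((r + g + b) / (3 * 255)) * 100;
}

// Suggested alternative: let 100% correspond to the strongest channel
// at that wavelength, so the same line plots near 100%.
function maxChannelPercent(r, g, b) {
  return (Math.max(r, g, b) / 255) * 100;
}

console.log(sumBasedPercent(254, 0, 0).toFixed(1));  // "33.2"
console.log(maxChannelPercent(254, 0, 0).toFixed(1)); // "99.6"
```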
The vertical scale is purely relative, representing the combined RGB response; 100% represents the clipping level for any of the RGB channels. The camera's RGB data comes from R, G, and B filters, each with its own spectral response. Combined, they mimic the response of the eye and provide a 'broadband' spectral response (though uncalibrated for intensity). Generally, a broadband response is what is wanted.

The red response alone could be used, but the red (or green, or blue) sensitivity curve changes much more steeply with wavelength than the combined RGB value, and so would need to be calibrated unless only single-wavelength relative measurements were wanted. The combined RGB curve is an easy first-order approximation of broadband sensitivity. Intensity calibration can improve the flatness, though SWB does not yet do that.
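One way to read that (a sketch under my own assumptions, not the actual SWB implementation): each channel is expressed as a percentage of its clipping level, and the combined curve averages the three:

```js
// Sketch only, assuming 8-bit camera channels; not actual SWB code.
// On the per-channel scale, 100% is the clipping level (a raw value of 255).
const toPercent = (raw) => (raw / 255) * 100;

// The combined 'broadband' curve averages the three channel percentages.
function combinedPercent(r, g, b) {
  return (toPercent(r) + toPercent(g) + toPercent(b)) / 3;
}
```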
So, the values in the graph are computed as follows?
The values from the camera for each channel (R, G, B) range from 0 (no light) to 255 (maximum light). For graphing, these are scaled to between 0 and 100 (e.g., by dividing by 2.55), and a graph of each channel can be seen under "More Tools/Toggle RGB." To combine these three channels into a single value for each wavelength (or wavelength bin), the three percentages (the R, G, and B for each wavelength) are added and divided by 3 (or, equivalently, the raw camera values are added and divided by 7.65).
So values in the combined graph can exceed 33 without any of the camera values being close to 255. For example, camera values of 160, 140, and 20 will be graphed at 41.8 (320/7.65).
If a laser source with a value of (254, 0, 0) is graphed at 100, then an equally bright line at 590 nm (where the red and green channels overlap, and the values could be 180, 180, 0) might have to be graphed at a value greater than 100.
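Checking those numbers with a quick sketch (a hypothetical helper, assuming the averaging described above):

```js
// Quick check of the arithmetic above (illustration only).
const combined = (r, g, b) => (r + g + b) / 7.65; // same as averaging the percentages

console.log(combined(160, 140, 20).toFixed(1)); // "41.8" -- above 33 with no channel near 255

// If (254, 0, 0) were rescaled to read 100, the same factor applied to an
// equally bright overlap line at (180, 180, 0) would push it well past 100:
const rescale = 100 / combined(254, 0, 0); // ~3.01
console.log((combined(180, 180, 0) * rescale).toFixed(0)); // "142"
```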
For the specifics of what's done in the code, see these lines:
Here, the image data is encoded (in 0-255 format) as JSON for storage in the database:
https://github.com/publiclab/spectral-workbench.js/blob/master/src/SpectralWorkbench.Spectrum.js#L557-L563
Then here, the stored data is assembled into graph-ready data objects:
https://github.com/publiclab/spectral-workbench.js/blob/7b52a773650951ab3a2b54c6ad60f8f117a6d72d/src/SpectralWorkbench.Spectrum.js#L77-L80
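As a rough sketch of the shape of that second step (the field names here are my assumptions; see the linked lines for the actual implementation), the stored 0-255 values become x/y pairs for the grapher:

```js
// Rough sketch of the stored-JSON -> graph-ready step. Field names are
// assumed for illustration; see the linked Spectrum.js lines for the
// real code.
const storedLines = [
  { wavelength: 650.0, average: 85, r: 254, g: 0, b: 0 } // 0-255 values
];

const graphData = storedLines.map(function (line) {
  return {
    x: line.wavelength,    // nm, from the spectrum's calibration
    y: line.average / 2.55 // 0-255 raw value -> 0-100 percent for the plot
  };
});
```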