Question: What mechanical specs can and should PLab spectrometers meet?


by stoft | September 15, 2016 19:58 | #13451


Mechanical design directly affects stability and, in the case of a spectrometer, stability is mandatory because the fundamental measurements are based on optics and the dimensions related to the wavelength of light.

Designing the materials and details of the platform and housing without first establishing the stress requirements is approaching the problem in reverse. Eg: If the product needs to withstand 25 lbs, then cardboard is not a consideration unless it has many layers. Defining the stresses means 1) static weight and the direction(s) of force, 2) flexure, 3) vibration (and its coupling from outside to the inside optics), 4) non-orthogonal force (eg. corner to corner), 5) assembly force (screw-tightening; acrylic likes to fracture, polycarbonate less so), etc. Deciding these limits means defining the expected (and product-specified) environmental conditions.

None of this necessarily has to be elaborate; but you do need to think about and make decisions based on specs you need or want to achieve. Eg: 1) can't easily over-tighten screws and crack the housing, 2) can drop it on the floor from 3 feet and it will survive, still work and not be out of alignment, 3) can put 3 lbs of books in a backpack on top of it, 4) can pull on the unit until the USB cable comes unplugged and the cable, camera and alignment will not suffer, etc. There is also room for multiple product "levels" with increasing stability, as there is an obvious cost difference between choosing wood vs granite as an optical platform.

Device Configuration Levels

So, consider just three (3) levels (mechanical stability only, not spectrometer design) as a start to the challenge:

[Note: The following detailed materials and specs are not a final definition but simply show a means for differentiating between levels. Attempting to include all material options here would be difficult.]

Level 0: (unacceptable) - Cost: Primarily the webcam - Materials: Paper, tape and velcro - Stability: Generally unstable, signals not repeatable - Environmental: Easily twisted, bent, misaligned, crushed, wet

Level 1: (minimal specs for education only) - Cost: Similar to Level 0 - Materials: Wood, glue, tape, paper, (no velcro) - Stability: Minimal, short-term if left untouched - Environmental: Resists twisting and misalignment, still crushable, wet

Level 2: (minimal everyday use) - Cost: Added cost would be custom cut plastics / wood / etc - Materials: Wood, glue, plastic, hardware, paper(light shield only) - Stability: Moderate, multi-setup repeatability, vibration damped - Environmental: Resists handling stress, easily mountable, not easily crushable, USB cable restraint, backpack durable, splash resistant, droppable from 1 foot, crushproof to 3 lbs

Level 3: (field usable) - Cost: Potentially higher material and custom-cut costs - Materials: Plastics, glues, hardware, some wood/paper - Stability: Good long-term measurement repeatability, relatively immune to vibration and stress, no permanent alignment shift from induced stress - Environmental: Rugged enough for field use w/o damage, water resistant, droppable from 3 feet, crush-proof to 10 lbs

These are only examples; specific numeric values for any of the mechanical stress factor limits should be set and tied to tests which can be easily performed, again, and again, and again.

A Note on Specs

Also, it is important to remember that with ANY spec, the limit point of failure MUST EXCEED (be better than) the published spec. [Normally, there are actually several spec levels per specification: 1) component spec (the tightest), 2) assembly build test spec (a little looser), 3) final test limit (looser still, but better than published) and 4) published spec for the user (guaranteed that ALL units, in the hands of ANY user, will perform to the published spec). Eg: Component supports 10 lbs at failure, assembly supports 9 lbs at failure, final device supports 7 lbs at failure, published spec is set at 5 lbs.]
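
The nested margins in that example can be expressed as a simple check. Below is a minimal sketch in Python; the level names and pound values are the illustrative figures from the paragraph above, not actual PLab specs:

```python
# Illustrative spec guard-banding: each tighter spec level must strictly
# exceed (be better than) the next, looser level. Values are the example
# figures from the text above, in lbs; not real PLab specs.
spec_levels = [
    ("component failure", 10.0),
    ("assembly failure", 9.0),
    ("final test limit", 7.0),
    ("published spec", 5.0),
]

def margins_ok(levels):
    """True if every level strictly exceeds the next, looser one."""
    values = [v for _, v in levels]
    return all(a > b for a, b in zip(values, values[1:]))

print(margins_ok(spec_levels))  # prints True for the 10/9/7/5 example
```

A published spec chosen this way is guaranteed by construction, since every unit already survived the tighter upstream limits.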

[DRAFT] Level-2 Tests

Because of the expanding interest in improving stability, @warren suggested I propose some methods to observe spectrometer stability which might demonstrate the difference between prototype designs. This is a good idea because, so far, my wood-based prototype provides only one data point, and science works best with more data.

1) The simplest reference point is, again, achieved using a CFL as a wavelength-calibrated reference. Assuming two designs are being compared (the original and an upgrade) the reference spectra should be taken using the same criteria.

[Note, there will thus be one specific wavelength-calibrated CFL reference spectra per device which will be used for calibrating the test spectra and for comparison of all tests performed on that device.]

[Note: It is better to control the signal level in the camera by adjusting the distance to the source, rather than attempting to use an attenuator.]

2) All other measurements would also be performed with the same CFL, placed at the same distance to the device to avoid most variations of the source. It might be easiest to have all devices perform each of the same tests at the same time to keep the configurations consistent.

3) The data for each subsequent test is thus a CFL spectra, taken under the same uniform conditions for each device as is reasonably possible. The spectra for each test condition will have the same Reference CFL wavelength calibration (for that device) as described in #1 above.

4) The spectra of interest, for extracting measurement values, is only the "combined" RGB spectra. However, another set of values could also be of interest; measuring the Blue trace for the blue peak, the Green trace for the green peak and the Red trace for the red peak. Having both the raw and calibrated spectra for each test, the data is all there for later analysis.

5) Each acquired spectra (raw and calibrated) should be kept for additional observations as other tests are devised. However, the data of interest, to be extracted from each spectra, might initially be just a few values -- perhaps just extract measurements on 3 peaks: a) peak nm, b) peak intensity (not normalized) and c) FWHM in nm (width of a peak in nm at 50% of peak maximum) and log this data to a spreadsheet as the tests are performed.

6) Mechanical changes can affect several spectral characteristics, and separating specific cause and effect is not likely to be obvious or easy. However, some correlations seem likely: a) Twist can cause misalignment, which affects both intensity and peak nm; b) Dropping could cause a peak nm offset by displacing the camera relative to the slit; c) Cable tug could shift the camera position, affecting the spectral band position in the image (so intensity) as well as peak nm; etc.

7) Some form of measurement repeatability could be performed as well. Even just performing the same, simple, "ideal" measurement setup several times might show sensitivity to a repeat configuration. Perhaps the same configuration and measurement performed by different users would be useful.

8) A bit more subjective is observing the ability to resolve the double-green peak. Clearly some device builds have produced spectra where that peak is singular and rather "wide", while at the other end of the scale is a build that resolves the details within the green peak. Some quantitative measures could be calculated at a later time; for now, just a simple visual comparison might prove interesting.
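
As one way to make step 5 above concrete, here is a minimal sketch in Python of extracting the three values (peak nm, peak intensity, FWHM) from a single peak; the function name and data layout (parallel wavelength/intensity lists) are assumptions for illustration, not an existing PLab tool:

```python
# Extract (peak nm, peak intensity, FWHM in nm) from one spectral peak.
# Assumes a calibrated spectrum as parallel lists; hypothetical helper,
# not part of any existing PLab software.

def peak_stats(nm, intensity):
    """Return (peak_nm, peak_intensity, fwhm_nm) for the largest peak."""
    i_max = max(range(len(intensity)), key=intensity.__getitem__)
    peak = intensity[i_max]
    half = peak / 2.0

    def crossing(i, j):
        # Linearly interpolate the wavelength where the trace crosses
        # half-max between sample i (above half) and sample j (at/below half).
        frac = (half - intensity[i]) / (intensity[j] - intensity[i])
        return nm[i] + frac * (nm[j] - nm[i])

    # Walk outward from the peak until the trace drops to half-max or below.
    left = i_max
    while left > 0 and intensity[left] > half:
        left -= 1
    right = i_max
    while right < len(intensity) - 1 and intensity[right] > half:
        right += 1

    return nm[i_max], peak, crossing(right - 1, right) - crossing(left + 1, left)
```

For a clean triangular test peak, e.g. intensities [0, 1, 2, 4, 2, 1, 0] at 2nm spacing, this returns a 4.0nm FWHM. Logging the returned triple per test, per device, gives the spreadsheet rows described above.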

[DRAFT] Some Possible Level-2 Mechanical Tests

0) Reference: Device on bench, weight with small book, adjust only the CFL (position and distance) for best spectra, capture and calibrate wavelength. Use this Ref Cal for all subsequent tests.

1) Hand-Position: Fiddle with the device then set on bench (no weight), align toward CFL, let go and take spectra.

2) Hand-Held: Hold in the hand and orient toward CFL, take spectra. Might try this several times and with several users w/o specific instruction so as to observe user variations.

3) Twist: Place on bench, align to CFL, hold camera end, 'twist' slit end of housing, take spectra. Repeat for opposite twist.

4) Drop: Drop the device onto the bench, from N-feet, several times and then position on bench, align to CFL, add book weight and take spectra. [The height of N-feet is also a survivability test. One device of each type, which is eligible for destruction, could be dropped (and then measured) from progressively greater heights until it fails.] [This is usually one of the more exciting tests ;-)]

5) Cable: Position on table, align, hold camera end, take spectra (as reference for this specific test), pull on cable, re-take spectra as comparison.

6) Vibration: Position on table, align to CFL, take spectra as reference for this test, place cell phone (on vibrate) on top of device, get ready to take spectra, call the phone and take the spectra when the phone rings. (Could also just use a mechanical alarm clock or some small mechanical toy....)

7) Weight: First test the device's approximate load-bearing capacity using a stack of similar-weight books, only up to the number which the device can still easily hold. Then take 3 spectra; eg. with 1 book, 5 books, 10 books.


Hi, Dave - I like this -- maybe "Materials" could become "Suggested materials" since people may get creative there -- from, say, cast resins, to who knows, cast concrete?

@abdul - I like this sort of "levels" approach -- perhaps the current DSSK is aimed at Dave's level 2, and clear documentation could swap glue for velcro (as in your build) for most people? I think if the slit is separated from the exterior body, the paper is then only providing light blockage, which is nice. And upgrades like something based on the rigid housing prototype could be aiming for level 3, perhaps.

If you included recommendations for increasing rigidity in the wood-less design, it could be shipped aimed at level 1, but easily upgraded to level 2, maybe.


I added a comment; I think that at this abstract level, the assumption of conceptual mode is fairly implicit. The entire post is focused on proposing a way of approaching the issue; not the specific results for the product.

Yes, that is one of the values of a modular design. If the insides (the actual spectrometer itself: slit, DVD, camera) are robust and well designed, the "housings" can be swapped easily.


Having built the DSSK 3 and experienced utter frustration with it, the minimum for me is the wood platform as per the upgrades, with additional modifications. I am now on build 3, awaiting arrival of a different camera with known bandwidth. I have discarded the origami box, added an additional piece at the rear to form a box shape 1.5 x 2, used a piece of heavy (100 lb) paper as the light-tight enclosure with black electrical tape, and extended the front of the base by about 2 in to serve as a platform for a cuvette holder. SW should be capable of better use of the current camera's resolution, ie 1024 samples vs 640, or better. Don't have pix of the device.



@stef, Thanks for your observations; glad to see the creative spirit alive and well. Here's a link to a note (which includes two other links) about prototyping which might have something of interest to help your efforts:


I'm personally reticent to begin prototyping hardware -- or even sketching it -- before defining the goals and test environment. For me, the existence of a clear hardware prototype too early in a process can obscure my focus. I commend @stoft for taking a step back to seek that focus, even this late in the project. For me, I like to see five steps happen before the brainstorming and prototyping stages:

  • Understand existing approaches to a task
  • Observe how people use existing equipment to accomplish the task
  • Identify the Point of View of different people who desire different approaches to a task
  • Prioritize the Point of View of the people whose needs I identify with or whom I'm being asked to identify with
  • Establish task goals and test requirements for equipment to accomplish those tasks
  • Brainstorm about meeting requirements
  • Prototype to meet requirements
  • Test
  • Repeat

These get us to the test requirements that create the kind of bounded space that gives brainstorming some focus, and for me, it's those boundaries that make it fun. Most of the steps in the design process take me up to the first question below, and I follow along trying to identify specs and tests from there:

  • What action should the device be capable of performing? (target data outcome)
  • What are the minimum features required to complete the action?
  • What are the desired features to complete the action smoothly?

The question of minimum specification can be approached theoretically or normatively. From a design standpoint, a normative and historical approach is how I've worked, having never been on a theory-driven team. My normative approach is:

  • What hardware is normally used? (standard specification)
  • What hardware has historically been used? (the first hardware can be a proxy for the minimum spec)
  • What comparisons to existing hardware can be done to verify prototyped ideas?
  • What correspondence is desired?


@mathew, good thoughts. I'm technical, but from experience I see the need to include the market, user, next-bench, test, manufacturability, environmental, etc. factors in addition to the technical specifications, measurement capability and their relationship to what may already exist. My term is "thought experiments". It is often much more effective and efficient to work on hypotheticals, with mostly questions, to quickly weed out the irrelevant and totally impractical, and it has the advantage of including all those areas you listed simultaneously. You're right, none of these 'components' exists (or should exist) in isolation, but that also means they can, and should, frequently be viewed in parallel. The "trick" is to balance 1) not setting elements in concrete too early, with 2) making good choices soon enough to prevent analysis paralysis.


I think that the foldable smartphone spectrometer's huge sales provides evidence that there is robust demand for level 0 spectrometers and that they can and do serve educational purposes.

I think what we need to start coming to terms with is that different spectrometers will target different demographics.

We can continue to sell the DSK as a standard neutral option since people already like it as it already stands.

We can offer a slightly cheaper, slightly less reliable (or equally reliable -- unknown until testing) one without wood that saves people huge amounts of money on customs and shipping right away. By huge amounts of money I mean between 100% and 200% of what they pay for the spectrometer.

We can offer a third option that endeavors to achieve reproducible results adequate for scientific usages.

What I want everyone to remember during these discussions is that we already have products in the store and they are already extremely, extremely popular. We sold over 50 desktop spectrometers and around 125 foldable spectrometers last week alone.

The third option does not need to have retail cost as such a major consideration, but on the others retail cost is the paramount consideration, since they do not purport (as of right now) to produce any scientifically reproducible results.

For the third option, being defined as a spectrometer that can produce reproducible results, we can focus on issues like rigidity, light sources, and can decide between camera models. But the other options already exist and they're already in high demand.


I also wanted to link back to this idea of monitoring the green peaks over a period of time:

As to how to measure this, I'd suggest a real-time display of say a green peak from a stable lamp where the display is a live 3-5 point running average of that peak's value, plotted not to show the absolute value, but the difference from the 'norm' so as to observe drift. The analogy is an oscilloscope. Then, handle the device and watch for correlations between the mechanical and the signal change -- which should be rock solid relative to the mechanical mounting of the optical components and the USB cable.
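
The core logic of such a drift display could be sketched as follows; this is a minimal illustration in Python (the class name and default window size are assumptions, not existing PLab code):

```python
from collections import deque

# Running-average drift monitor: feed it one peak reading per frame and it
# returns the deviation of the short-window mean from the initial 'norm',
# like a scope trace centered on zero. Illustrative sketch only.

class DriftMonitor:
    def __init__(self, window=5):
        self.samples = deque(maxlen=window)
        self.norm = None  # defined by the first full window of readings

    def update(self, peak_value):
        """Add one reading; return (running mean - norm), or None while warming up."""
        self.samples.append(peak_value)
        if len(self.samples) < self.samples.maxlen:
            return None
        mean = sum(self.samples) / len(self.samples)
        if self.norm is None:
            self.norm = mean  # lock the norm on the first full window
        return mean - self.norm
```

Plotting the returned value live while handling the device would show mechanical/signal correlations as departures from zero.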


@abdul, interesting observations .... so here's a starter list. [0] Folding Vers (Cute, zero-cost, spectrum concept), [1] Paper Vers (Functional, low-cost, spectral data but unstable), [2] Wood Vers (Functional, stable, middle-school science data, more repeatable). Then, [3] Optics-stable Vers (Better stability, first device with some specs), [4] Tools Vers (Stable, collectable data, integral cuvette, at least one environmental app), [5] Measurement Vers (First real spectrometer with specs and applications), [6] 'High-End' Vers (Monochrome sensor, specs, apps, data analysis).

It is interesting that sales numbers are linked to customer perception. Do you have significant actual customer feedback statistics, or is it just high sales equals customer is pleased? It is just very easy to assume cause-effect associations which "seem right" but may not have hard data to back them up. ( See, science seems to sneak in just about everywhere ...;-) )

Are you suggesting that adding a bit of wood and removing some paper and velcro is going to make the customs or shipping costs prohibitive for overseas sales?

If a design using any wood becomes cost-prohibitive for a market, then that suggests a clear segmentation of that market. If so, then improvement ideas for that lower-cost market must have paper and glue as a limit -- a separate challenge not yet tackled. It might be possible to exchange higher labor 'cost' at the buyer's end for lower material / shipping / customs cost.

Does a product marketing plan for a suite of PLab spectrometer products exist? Maybe in draft form? With defined market segments, application, performance, materials, specs, costs, etc.....?


Interesting thought. I'll suggest using blue and red LEDs, which then allows monitoring apparent wavelength shifts as well (when the slit and camera, or the grating and camera, shift position relative to each other). There can be 'intensity' shifts (at a pixel position) when the entire device moves relative to the light source, so, while testing, the device / source position cannot change if you want to measure the device's weakness.


Excellent stuff there @stoft, you're definitely picking up what I'm throwing down.

So, regarding customer feedback: unfortunately we've been prohibited from getting that information by an evaluation, something that, as you can imagine, causes a loooot of widespread issues when making informed decisions.

But! Beyond that, I'll explain the customs thing.

Items shipped in envelopes as "documents" do not get charged customs fees anywhere on Earth. Your first thought is probably that a spectrometer is not a document, but postal services use a definition of document based only on rigidity. So, removing the wood from the current DSK leaves us with nothing large enough and rigid enough to prevent the item being shipped as a document, thus reducing shipping fees to letter rates (1/10th of parcel rates) and exempting them from all customs charges.

Now, keep in mind that many people, many many people, currently pay the 22.50 for international shipping plus their local customs fees (up to 69.50 additional, depending on their country).

This is why I am recommending segmentation just like you've described. A rigid spectrometer, whether it uses wood or metal or whatever else, necessitates those higher fees. A non-rigid one, while not really accurate, does not.

I would absolutely love to have better customer data beyond simply the once-a-week complaint about customs fees I get, and hope to continue lobbying for permission to get that. But in the meantime I think segmenting the products is an intuitive way to go.


@abdul, ok so we've just differentiated the [1] and [2] kits; the option [1] should ship thin, paper only with no 'solid' materials other than the 'thin' camera. How about glue? Maybe a thin, heat-sealed plastic packet? I'm thinking that there are structural ways to increase rigidity and stability (better than velcro but not as good as wood) if construction paper board can be glued by the buyer. As I wrote before, add more stability as a feature by exchanging build time for avoiding velcro and wood.

The idea here is that 1) you don't have any real feedback on how buyers react to the kits, 2) I did not hear that there were large numbers of repeat buyers, so I'll assume most are first-time buyers and maybe mostly just the curious. Since the potential market is huge (world-wide) you could keep selling kits, which buyers totally dislike, and never know -- because kits still sell. So, the only other piece you have is the knowledge that the kits do not perform as well as what an inexperienced user likely expects. Yes, this too is an assumption, but it is a much 'safer' assumption in terms of 'product quality' than assuming buyers are thrilled to bits. In addition, if you assume that 50% of buyers are happy and send links to their friends, it is also still best to show the product is improving -- than to leave them guessing if the design is 'getting old'.

So, I'm just suggesting to use the lesson learned about stability (and the wood proto) to evolve the Option [1] as well as producing a new Option [2]. [ Aside: I'm also suggesting that things like velcro are just inherently not a good idea for an optical device, and anyone who buys the [1] kit, and knows something about spectrometers, will definitely spot this design flaw. So, I'm suggesting you 'ditch' the velcro asap, in favor of alternate paper-based improvements. ]


Hi Dave - I wanted to try out a set of tests when "reproducing and testing" the stability upgrade at the #documentation-days events on the 12th and 19th; would you be able to put up even a preliminary draft of a set of "level 2" tests we could try before and after upgrading a spectrometer, perhaps to the end of your stability upgrade post?


@warren, I didn't see your comment about a draft of test types for replication experiments, so I just posted them at the end of this published note on mechanical stability. Maybe not the ideal place, but applicable and easily viewable. Hope it's a useful start.


OK -- a few thoughts -- I'm taking multiple spectra at each step, and tagging all with "stability-upgrade":

I'm taking two pre-smush with a stability-upgraded spectrometer without moving it in the "test stand" and two after removing it and replacing it in the test stand. We'll do the same with an un-reinforced spectrometer.

Where and how should I post the results of these tests? I guess here... or on the stability upgrade post itself?


I'd guess that it's just the collection and comparison of key measurements extracted from the spectra which may result in 'sensitivity' estimates. So, adding a summary of observations to the end of this post seems appropriate. Data showing estimates of peak wavelength stability after cal would make for good comparison; eg: Post-Cal wavelength repeatability: 546+/-2nm (normal handling), 546+/-5nm (1 meter drop stress test).
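
That center +/- spread summary could be computed per test condition with something as small as the following Python sketch (the function name and sample readings are illustrative):

```python
# Summarize post-cal wavelength repeatability as (mean nm, max deviation nm),
# suitable for reporting in the "546 +/- 2nm" style. Illustrative sketch only.

def repeatability(readings_nm):
    """Return (mean_nm, spread_nm) where spread is the max |deviation|."""
    mean = sum(readings_nm) / len(readings_nm)
    spread = max(abs(r - mean) for r in readings_nm)
    return mean, spread

print(repeatability([546.0, 547.0, 545.0, 546.0]))  # (546.0, 1.0)
```

Running this once per test condition (normal handling, drop stress, etc.) yields directly comparable rows for the summary table.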


Just wanted to ping in here: we did some stress tests last week based on these proposed tests, and they're quite dramatic -- a 4-5nm shift (these were pretty aggressive stress tests -- twisting, dropping from 3 feet) -- but take a look:



