Intro to Activity Categories
In an effort to make our collective work more organized, inviting, and replicable, we're encouraging people to categorize their Public Lab research activities to help flesh out these activity grids. Here we've listed seven different kinds of activities and included a description of what sort of work comprises a given category, and what sort of documentation should be included. Feel free to edit this page to expand and improve upon these category descriptions!
The categories include:
Build
Verify
Observe
Test tool limits
Field test equipment
Experiment
Monitor your environment
“Build” activities involve constructing tools, and the documentation involved in a research note about a build activity should include a detailed set of instructions, and visual support to the extent possible. Build activities are often on-boarding activities, inviting newcomers into the community or inviting people into a new project area. Thus, it is particularly important in build activity documentation to explain steps in plain language. It can be useful to introduce terms pertinent to the project area, but be certain to do so with plenty of jargon-free explanation to teach these terms to newcomers.
“Verify” activities involve testing the tool to ensure that construction was successful. These activities are an essential component of the build process, and should be performed prior to using the device for other forms of testing. Verify activities should be fairly simple, take a short amount of time, and test whether or not the tool can perform a core function. Since these are the first tests following construction, newcomers will be performing verify activities, so please be sure to explain steps clearly and in plain language. Provide metrics for successful verification too, and if there are common mistakes, provide steps the builder can follow to check for them.
“Observations” are activities where you use the tool and notice something of interest. Observation activities should describe the setup and environmental conditions, and give a detailed description (preferably step-by-step) of the process that led to a given observation. Observations don’t necessarily require an experimental design, and are often motivated by exploration. These activities can lead to true experiments if enough observations are made under different conditions or with different processes. Observation activities are stepping stones toward further research and a great way to connect with others in the community.
“Test tool function” activities are tests designed to discover the capabilities and limitations of the tool. These are usually performed under “ideal” conditions and include an experimental design sufficient to let you deduce the operational or data quality limits of the tool or technique. Testing tool function activities should isolate one core function and optimize technique or operational conditions. Examples of this category of activity include finding the detection limit for a given analyte, or determining a technique’s best-case precision.
“Field test equipment” activities involve testing how real-world conditions impact tool performance. Field tests are usually conducted after tool functionality in ideal conditions has been assessed (i.e. after “test tool function” activities), and can range from simple observations to field studies with experimental designs sufficient to elucidate how different environmental parameters affect tool function. Field conditions should be thoroughly described, and if possible, field tests should be performed under a variety of conditions, including the most difficult field conditions likely to be relevant. Field test activities should include functionality tests to be performed before, after, and perhaps during field deployment (depending on the tool). If possible, field tests should include basic best practices for technique in the field, and they should also evaluate how easy the tool or technique is to install, use, retrieve data from, etc.
“Experiment” activities include explicit experimental designs and are typically structured to examine one variable or relationship at a time. Experiments include hypotheses, controls, and variables, and are the basis for empirical science. In addition to including an experimental design structure, all operating and environmental conditions should be thoroughly described. Any necessary materials should be listed, and methods detailed in a step-by-step fashion so that another person could replicate the experiment seamlessly. Results and discussion of the experiment are key components for building a shared understanding of significance or implications, and are important for engaging others in the research. As with other activities, methods should be written in plain language to encourage others to attempt replication. Technical language may be important, particularly when exploring nuance in an experiment’s discussion, but explaining technical terms in plain language before using them in the discussion section is important in welcoming others into your research.
“Monitor your environment” activities involve getting outside and conducting an environmental assessment. These activities use techniques with known capabilities or “specs” (e.g. “test tool function” activities have been conducted) and include explicit study designs. Monitoring studies should consider a variety of relevant environmental parameters (e.g. wind speed and direction, watershed boundaries, etc.). Monitoring activities usually investigate the nature of occurrence and the spatial or temporal variation of a condition or analyte; some are specific to one location, while others can be applied more generally. All materials and methods used should be listed and explained in plain language, and design elements such as data quality indices (if relevant) should be described. Visual aids such as maps can be particularly useful in monitoring activity design and documentation. For completed monitoring activities, all environmental conditions should be described, and data quality indices should be reported along with any monitoring results. Some notes on designing an environmental monitoring study can be found here.