Activity Categories

by gretchengehrke with abdul , liz , warren | 02 Sep 22:41

A bunch of people here in our community are interested in making our collective work more organized, more inviting (with explicit activities for people to engage in), and more replicable, so we can harness our community's size and diversity to produce solid work. @liz introduced the idea of activity grids, and here I want to introduce descriptions of those different categories of activities. You can tag your work with the appropriate category using the tags listed under each description. I've made a wiki page with longer descriptions for each category -- please feel free to edit and enhance it!

Here is a run-down of the categories:

• Build

• Verify

• Observations

• Test tool limits

• Field test equipment

• Experiment

• Monitor your environment

1. Build

“Build” activities are essentially detailed instructions to help you construct a tool and join the project!

Tag: category:build

2. Verify

“Verify” activities involve testing the tool you have built to ensure you built it correctly. These activities include basic function tests to verify that the construction was successful, so in future work you can start using your tool for more exploration and experimentation.

Tag: category:verify

3. Observations

“Observations” are activities where you use the tool and notice something of interest. The observation activities included here describe the conditions and process the activity's author used, so that you can reproduce them, but they do not include an explicit experimental design to test a given variable. These activities are meant to help ignite your own exploration and wonder!

Tag: category:observe

4. Test tool limits

“Test tool limits” activities are tests designed to discover the capabilities and limitations of the tool. These are usually performed under “ideal” conditions and include an experimental design sufficient to let you deduce the operational or data-quality limits of a tool or technique. Please join us in discerning the limits of functionality -- this is an essential step in developing a tool, and these activities need a lot of replication!

Tag: category:test-limits

5. Field test equipment

“Field test equipment” activities involve testing how real-world conditions impact tool performance, and usually involve getting outside and seeing how well a tool fares out in the environment. Field tests are usually conducted after tool functionality in ideal conditions has been assessed (i.e. after “test tool function” activities), and can range from simple observations to structured field studies. Try getting your hands dirty with these activities!

Tag: category:field-test

6. Experiment

“Experiment” activities include explicit experimental designs and are typically structured to examine one variable or relationship at a time. Experiments include hypotheses, controlled aspects, and variables, and are the basis for empirical science. Information is learned and confirmed by replication, so please try these activities!

Tag: category:experiment

7. Monitor your environment

“Monitor your environment” activities involve getting outside and conducting an environmental assessment. These activities use techniques with known capabilities (“specs”) and include explicit study designs. Sometimes they are specific to one location and sometimes they can be applied more generally; they usually investigate the occurrence of something and its spatial or temporal variation. Explore your area with these activities!

Tag: category:monitor
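Since every category maps to a machine-readable tag of the form category:&lt;name&gt;, a note's categories can be pulled out of its tag list with a simple prefix check. Here is a minimal illustrative sketch -- the note data and helper function are hypothetical, not part of any Public Lab API:

```python
# Hypothetical example: extracting activity categories from a note's tags,
# following the "category:<name>" convention described above.

VALID_CATEGORIES = {
    "build", "verify", "observe", "test-limits",
    "field-test", "experiment", "monitor",
}

def categories_from_tags(tags):
    """Return the set of activity categories found in a list of tag strings."""
    found = set()
    for tag in tags:
        if tag.startswith("category:"):
            name = tag.split(":", 1)[1]
            if name in VALID_CATEGORIES:
                found.add(name)
    return found

# A note can carry more than one category tag:
note_tags = ["spectrometer", "category:verify", "category:test-limits"]
print(sorted(categories_from_tags(note_tags)))  # ['test-limits', 'verify']
```

Restricting matches to a known set of category names means a stray tag like category:misc won't show up as a category.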

Hi Gretchen,
These are excellent categories. This is really well thought out. I tried it out (so maybe category:verify?) on some of my notes:

Applying the categories to actual research notes takes some careful study. Deciding between Verify, Observe, and Test-tool-limits can be tricky. The other categories are easier. Is it valid to use more than one category for a single research note? Many of my notes could use two or three.

The Observe category seems easily confused with the Field-test category. I'm not exactly sure what the intent of the Observe category is. Maybe the Verify and Test-tool-limits categories should be combined because many people will probably confuse them. Maybe Observe should go in with those two?

Another question is whether these categories will be used only on notes about official Public Lab tools, or could also be used on any related tool that somebody reports on. It might get confusing if people used these categories to tag random things they were working on.

This is cool,
Chris


Re: "any related tool" --

On staff we've started talking more about "methods" rather than "tools" to acknowledge more clearly that there's a big difference between a hardware build and a more complete methodology you do with it (and a specific aim and use case). This is part of the general shift we're hoping to encourage to better-documented activities.

That said, the categories are intended not only to apply to existing posts, but to inspire and shape the re-formatting of existing posts into clearer, easier-to-follow activities, rather than passive accounts of simply "what I did." In many cases this might simply mean posting a follow-up that rehashes the steps originally presented, but in the format of a guide.

But for new content, we're hoping for more "Here's how to do this thing I did, try it and tell me if you got the same result, and if I can improve this guide" as opposed to "check out this random thing I'm working on" -- though the latter is still of course welcome insofar as it can be presented more in the format of the former.

As to the observe/field-test categories -- I think this is true, and agree that there are other categories that could be confused or potentially merged. One reason we put out all these together was to see what people think, and what sticks, basically. In general I hope people clearly state the goal of their activity -- and we will probably need to help/remind people to distinguish between verifying that something "works as expected" and testing the limits, like "how high a resolution can I get".

Hi @cfastie,

Great comments. For verify vs. test-limits, I can definitely see how the distinction can be unclear -- maybe you can help me articulate it here. I think it's important to distinguish between "this is something we know this tool can do if it is built properly" (tag: verify) versus "let's try to find out just how well this tool can do X" (tag: test-limits). In the former, we're asserting that we already know something, and to ensure you built the tool and/or followed the method properly, you should be able to do this too. In the latter, we don't know yet, so let's try something and see how well we can do, and whether or not others can replicate it. Does that distinction make sense? Do you think we can articulate it more clearly in the wiki about activity categories?

As for "observe," that is mostly a category for research notes that don't really qualify as experiments, but where the person did something and wants to know whether other people who do the same thing observe it too. Field tests can range from this sort of observation (e.g. "hey, I left this thing out in the creek for two weeks, and I saw a ton of algae grow on it") through experiments (e.g. "I deployed this thing for two weeks at depths of x, y, and z in a stagnant lake and in a flowing river, and I saw this gradation of algal growth, etc."). So, I think that field tests especially can and should be double-tagged.
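If notes can carry several category tags, an activity-grid view just needs to index each note under every category it carries. A quick sketch of that idea -- the note titles and data below are invented for illustration, not real Public Lab content:

```python
from collections import defaultdict

# Hypothetical research notes; a note may carry several categories,
# like the double-tagged field tests described above.
notes = [
    {"title": "Algae on a sensor left in the creek", "categories": ["field-test", "observe"]},
    {"title": "Algal growth vs. depth in lake and river", "categories": ["field-test", "experiment"]},
    {"title": "Assembling the spectrometer", "categories": ["build"]},
]

# Index each note under every category it carries.
grid = defaultdict(list)
for note in notes:
    for category in note["categories"]:
        grid[category].append(note["title"])

for category in sorted(grid):
    print(category, "->", ", ".join(grid[category]))
```

Double-tagged notes simply appear in more than one row of the grid, so nothing is lost by allowing multiple categories per note.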

In general, do you think more or fewer categories will be more educational and/or thought-provoking for people, especially those new to research?

Thanks! G


Thanks, that's a good distinction between verify and test-limits.

I think part of my confusion about Observe was due to the order you used to present the categories. From your new description, I would expect Observe to be listed right before Experiment.

The primary obstacles to implementing these categories are:

1. Many research notes touch on more than one category, and some fall into four or five categories.
2. It takes some effort to assimilate these categories and then additional effort to figure out which ones apply to a particular research note.
3. I think a very large proportion of note posters will get it wrong.

There are two approaches to dealing with these obstacles:

1. Reduce the number of categories.
2. Have a curation committee assign the categories to notes.

Accomplishing the goal of making the quality information in research notes more accessible to the public is probably going to require curation. To start with, somebody has to decide whether a note should be included in the "helpful research" category. The note poster should not have to shoulder that responsibility.

Chris

I see @cfastie's point about expecting Observe to come a little later in this list. Could we change the order we list it in?

Also @cfastie, I agree it's a lot to ask people to apply these tags to their own notes. There's a "tagging support" interface being worked up to help people understand the categories and see their research in a larger context. I'm guessing we'll probably be working up some orientation materials as well, in the posting form and in the signup confirmation.

In terms of most notes fitting multiple categories, maybe now and moving forward, people can be more modular about how they post, so that it's clearer to others what can be built upon, and how.

What do people think about that?


Another idea I had: rather than thinking about which category an existing (and possibly longer) post fits into, and how much work it'd be to revise it into a fully-fledged stepwise activity, we could add follow-up "responses" which distill out a portion of the original post as an activity. The smaller chunks in each activity might each reflect an individual category, whereas the big over-arching narrative post would have spanned several. Then we could say "see these activities to try replicating some of the steps in the post above."

This seems more feasible than reworking large portions of historical PL posts to fit the new activities system anyway! And we could provide a prompt on some posts, like "post a follow-up to break out specific activities from this post." It won't always be the original author who creates the activities -- someone else might try to distill out steps from the original work.

Also, as we're thinking of how to help people organize and present their work more clearly, I was interested in the way nature.com shows sections of a paper in collapsed view, especially nice and browsable on a smartphone. See this example page: http://www.nature.com/articles/srep24873