Public Lab Research note


Comment piece in Nature about peer review

by liz, warren | April 21, 2016 16:01 | #13010

liz was awarded the Basic Barnstar by warren for their work in this research note.


Jean-Baptiste Colbert Presenting the Members of the Royal Academy of Science to Louis XIV (oil on canvas), Henri Testelin (1616–95)/Bridgeman Images

Posting a link here to a Comment piece in Nature from earlier this week: http://www.nature.com/news/peer-review-troubled-from-the-start-1.19763

Here's the quote that prompted me to post this: "one thing stands out: pivotal moments in the history of peer review have occurred when the public status of science was being renegotiated."

The timeline illustration in the article:

[Image: 2016-04-19-Nature-HistoryPeerReview-01.png]


43 Comments

I went through this for an autonomous navigation paper that I was trying to get published.

Because the topic (navigation for robots in general) fell in between several distinct categories (auto industry, where self-driving cars are the rage; electrical engineering, where the nuts-and-bolts of any system lives; and computer science, which is where the theoretical basis lies), it was up in the air which field would be the best place to put the idea forward.

I had enlisted a mentor (one of my graduate school professors) to help out in getting the paper into shape (he needed to bolster his publication creds for his own career, so it was a win-win). Being from the auto industry, he recommended we narrow the paper's focus and try automotive engineering publications.

Unfortunately, what followed was months and months of chasing submission deadlines, rewriting to catch up with the topics-of-the-month for a particular conference, and no publication. I started digging into the peer review process to figure out what was going on, and articles like this one began to pop up on my radar.

I finally decided that the peer-review route wasn't for me. I didn't need the paper for my own career, and my former professor had several other irons in the publication fires that were going forward. So I decided to go rogue and just push the paper up myself to researchgate.net.

That was one of the reasons I decided to get involved in PublicLab--I'm not "against" peer-reviewed publications, but I believe that a lot of good work can be done by citizen-scientists. Someone who has an interest in basic sensor engineering, another person who loves frogs, and a third person with an interest in chemistry can all put together a data set that might tell the story of how pollution levels in a large watershed are affecting the local biosphere--all without any of the parties entering into a formal partnership.

(This is skipping over the work that others can contribute, too: Data/statistical analysis, collation of the results to tell a story, etc. There are lots of ways that people with their own interest can contribute to the whole.)

A formal study would require recognition that such a thing needs to be studied; then you would have to formalize the study methods and find people willing to tie their careers (via study results) to the work and the publication. Such a thing is possible, but much less likely.

It's my own view that PublicLab can fill these gaps in science, not necessarily by replacing the peer-review system, but by augmenting it. Articles like this, I believe, identify some of the frustrations with the formal system that would be very daunting to a citizen-scientist.



Wow, "citizen scientist"! I have never thought about it that way. But you are right. I dived into PLab spectrometry by chance and found that it really sparks my interest in basic physics. How does a diffraction grating work (ask Mr. Feynman!), how does a webcam work, how can we make cheap spectrometry more stable? These are only some of the questions I asked myself during this ONE month of being here, of being part of the PLab community. And I am well aware that other guys here have other interests like hunting down oil polluters or doing bee science. I don't know any member here from the oil testing front personally but I do hope that some day someone here may find a solution to HIS research problems within one of MY thoughts about my research problems. We don't have to know each other, we don't have to work on the same problem, we just have to share what we find out. And thats so great about this community.



"Citizen scientist" is a phrase I picked up along the way (I think here, but I've also seen it around the web). I think it's a good description of the approach by PublicLab's members, and it seems to be a good formal phrasing of the same concept that is behind a lot of "informal" work done by regular joes and janes who have a keen (and possibly inexplicable) interest in a particular technical area. The Maker movement, countless Youtube channels, and the whole underground garage gen-mod and nuclear research groups are just a few examples.

These kinds of activities have always been around (read about the origin of the Stirling engine sometime), but with the brave new networked world that we live in, it is entirely possible to help turn them into something new--or, at least, give them a boost.



Well, I'll chip in an alternative $0.02. I view formal / official peer review, the peer review concept as science, and engineering as related but distinct. Formal peer review ('sign off' by authorities) provides an authoritative reference point, the peer review concept as science provides a process to filter out bad science, and engineering provides a practical framework to 'peer review' by way of success and failure to achieve a result which is grounded in the rules of good science. All of the above is, and must be, well grounded in the scientific process and principles. As yet, I've not been able to grasp a definition of 'citizen science' as it cannot simply be a 'democratization' of 'science' as those two terms are at odds with each other.



Great topics in here! I wanted to invite you all to the (not yet publicly announced) upcoming May 2, 8pm, OpenHour "hangout" on "Public Lab's research culture". It will include the themes/points that people have been discussing on the listserves and in the wiki on research documentation. Still wrangling the topic, but there seem to be three intertwined strands: refereeing information, documentation standards, collaborative process. How does this sound?



@stoft,

I agree with all of your points regarding peer review (by whatever phrasing). You're also correct about the term "citizen scientist", in that the term itself is prone to wildly different connotations: My meaning might be more along the lines of "people with expertise or interest in a subject for which they are not formally recognized", while someone else might take the phrase as implying "revolt against corporate-, academic-, or government-centric control of research efforts".

Same phrase, completely different interpretations. I'm all for a better term, especially one that leaves the emotional baggage behind. Sadly, I haven't found one yet.

I tried to navigate that gap a little bit by mentioning "augmenting" peer review processes, as opposed to replacing them. I think that the work here is important in a bigger sense; having a culture war can flame out a community faster than anything.

@liz,

I think that is a great idea. Those points, especially, get to the heart of the discussion, in my humble opinion.



I found this section really interesting and reflective of some of the balance we're seeking here too:

http://www.nature.com/news/peer-review-troubled-from-the-start-1.19763

Origins of peer review in 1800s: "Most fundamentally, they argued about what a reader's report ought to be. Whewell wanted to spread word of the discovery and to place it in the bigger picture. “I do not think the office of reporters ought to be to criticize particular passages of a paper but to shew its place,” he told Lubbock. If they picked out flaws, he warned, authors would be put off. Lubbock had other priorities: “I do not see how we can pass over grievous errors,” he wrote."

For my part, I don't think we need to debate whether to do peer review at all, rather how we adapt ideas from journal-based peer review into a more open source and empirically-based environment which could depend less on credentialed forms of authority for rigor and credibility.

A tough problem!



@david-days, right. So, I have a theory ... right place for theories, right? ... that having a diverse source of contributors means an equally diverse "level" (broad definition) of "material". Therefore, it's like painting a tree where the challenge is how to filter that material into the appropriate branches, twigs, leaves, etc. of the tree. All can be useful (even those that form examples of wrong turns) but only if placed in the correct location. Right now, it's a bit too similar to herding cats.



@Warren, I don't see that anyone is suggesting credential-based peer review for PLab but IMHO, that does not exclude filtering based on good science. Of two papers on the same topic, the one showing clear data, good methods, and understandable, supported conclusions needs to be 'positioned' in some way as separate from the other, which has clearly flawed data, lacks explanations, and draws conclusions that are not justified by the data or methods. An "outside" reader should have the clear impression that the repository and the work it contains, as a whole, is clearly organized and identified on the basis of the scientific process. This goal does not diminish asking questions, contributing observations (even if flawed), suggesting new ideas, etc. -- it just has the responsibility and mechanisms to put that material in an appropriate context of good science.



ah @stoft, am I hearing correctly that this is also about navigation through "the tree" in your metaphor? @Gretchengehrke has been very articulate about the need to organize content. I will add this as a fourth strand to the upcoming OpenHour.



@liz, right. A simplistic example is the 'tree' of the web site with one section being 'research'. It's fine to see 'new posts' listed (just labeled as such in bold) but then subsections of research topics with material that has been 'filtered' so the reader, on going there, doesn't just find the same herd of cats that they find on the intro page -- though there could be an 'ancillary' subcategory of as-yet un-filtered material ... some of which may periodically be moved to the 'filtered' spot. If I'm searching for how to map my neighborhood park, or accurately take a water sample, etc., I shouldn't have to wander through piles of unfiltered material and, especially, not know if what I'm reading has been "reviewed" in some form so I won't be randomly led down some 'blind alley' ...... and, from an outside perspective, doing otherwise (or nothing) appears very unprofessional.



@stoft - ah ok, good, then we're on the same page there. I guess the questions then are:

  • who can make such assessments
  • how do we trust the assessments they make
  • how does one express disagreement with an assessment
  • what do the assessments look like and what are our guidelines for making them (we've made some progress on this in other threads too)



@stoft, I agree with what you are saying, and have a personal interest in helping to make that happen. Without a means of narrowing down PLab searches in a way that highlights useful/relevant/valuable material, it's no better than hit-and-miss on a Google/Bing search (not knocking them--but they are built for the general web, not to support PublicLab's goals).

OTOH, how you do it (and what effect it has on the community as a whole) is what @warren is talking about.

I think that everyone would agree that this is fundamentally important, from whichever part of the rigor spectrum you approach this. I'm looking forward to the OpenHour discussion.



I do agree that "someone" should try to organize content into categories, but I thought that's what the tags were for?

As for the filtering of "good" and "bad" science I do have some reservations. Perhaps you can have a basic research area here where everybody takes part and then "promote" the more sophisticated papers to be reviewed by a kind of "science board". (Perhaps you can get retired professors for this?) But if you filter things right away, it would:

a) scare away newbies like me who don't know much about "the real scientific practice" but might have an idea or perspective or two to add to certain questions here in the forum

b) keep the experienced guys away from the inexperienced, as the experienced guys will stay and discuss things among themselves more. The day I started here in the forum I got in contact with experienced users who helped me with my first steps and I felt welcome. I browse through papers that interest me and when I don't understand something or think I might have an idea I just post a comment. Perhaps many things I say are naive. I could not go into any institute and ask a professor about webcam noise. I would not even make it past his secretary. But here I can ask my questions and so far I have gotten really good answers. I fear if you filter too much you might build up walls.

You know, there might have been a little boy asking the great Newton: "Why does an apple fall down but the moon doesn't?" And perhaps that little boy's question made the big guy think...



@Warren, good and glad to see this list.

1 - Foremost, there should be a set of categories (similar to the brief list in the 'blank' note for suggested things to write) which help assure good documentation, e.g., I try to use some consistently: abstract, disclaimer, observations, data, analysis, hypotheses, and conclusions if any.

2 - Any submitted material has to make "common sense". I know this is broad, but, for example, if a doc doesn't make sense to several 'reviewers', then it can't be 'accepted' unless that confusion can be eliminated.

3 - All material must follow a logical explanatory process. It can't start at 'A', then digress through 'B' and 'C' and conclude with 'D'. That is simply contrary to the scientific process.

4 - The document must not contain conjectures, guesses or speculation -- unless they are identified as such and have some added value to the document.

5 - The material should be relevant to the topics within PLab's scope. This doesn't exclude new topics and new ideas, it just means material shouldn't be obviously distracting or totally inappropriate to PLab.

6 - To avoid adding overhead, I'd suggest having a 'submission bucket' where existing and new docs can be identified as potential candidates.

7 - Since this is a PLab repository, I'd suggest there is therefore some upper abstract level of responsibility of 'PLab' to solicit specific help for reviewing what's in the bucket(s). It doesn't seem practical to ask or expect the herd of cats to organize themselves.

8 - I'd suggest that such selected reviewers then contribute, as best they are able (time, money, coffee, etc) to add/checkbox/notes to a common 'whiteboard' form (not public to the general herd to keep the noise level down) to seek consensus.

9 - The result could be acceptance; acceptance with suggestions to the author; feedback to the author with acceptance pending correction; or removal from the 'submit bucket' (not removal from PLab notes in general).

10 - Conflicts. Yup, they will exist, but with the scientific process as a guide for all, they should, for the most part, be logically resolved.

11 - Even if this process doesn't work 100%, that's ok as 75% or whatever is likely to be a big improvement.

Once the above filter has passed a document, it can then be bumped up for requesting review.



Hi everyone! I wrote up a description of the upcoming OpenHour and put links to this note as well as to the mailing list discussions, wiki pages, and other notes where this discussion has been happening. Please check it out! I tried to note which ideas came from which people; please edit if needed: https://publiclab.org/openhour



@stoft - great start. I guess implicit is that anyone could go down a checklist like this as part of reviewing. Obviously lots to discuss, and may want to hold off going into too much detail until the OpenHour (so there can be broader discussion of these ideas by the community), but just to think ahead, how might we prompt people to take on this kind of "checklist"?

And what are some things we could do to address some of the risks @viechdokter highlights, which I take very seriously -- that there'd be further stratification of "insiders" and "outsiders" -- potentially in language, communication channels, implicit/assumed authority (liz probably has more to add here). I think we could do something like this, but want to think through how it could play out and plan ahead for such dynamics -- and consider all alternatives.



@viechdokter, my approach would be to apply the "filtering" on the search end of the system (but I am a software developer, so my approaches tend to be IT-driven). That way, there would not be any discouragement in posting questions, research, data, notes, etc for those who would be intimidated by having a gatekeeper up front.

To support the "ranking" system, however, there needs to be more than tags. Tags are a good start, but all they really do is simplify keyword searches. If I create/add a keyword ("best-research-ever"), it doesn't really mean anything unless someone both knows to search for that tag and agrees with the intent of the tag.

Other community-based organizations use rankings, "likes", etc. to lend credentials to a particular publication--this can be used in the results, but there will need to be a means of making those notations. (And yes, there are plenty of examples of abuse of those systems, also.)

Conversely, @stoft is suggesting a more formal gatekeeper up front; that is, submissions must be properly vetted before public availability. This has the effect of having strong scientific rigor (as strong as the reviewers can manage), but also has the potential effect of intimidating submitters who have no formal training, or can't/won't take the time to formalize the results.

Then there's the gray area: If I take the time to collect spectrometry readings from the local watershed (or any equivalent) and post the data, how does it fare under either system? It's just data and/or an example of usage for a PLab project. It would be difficult to find in a completely open system, but it's not going to meet any substantial forms of peer-review rigor.

I would suggest that this discussion matters in a lot of ways beyond the purely philosophical.



@Warren, yes, I do acknowledge your points and @viechdokter's. I would differentiate by noting that I'm not suggesting to disable or remove the existing 'new research notes' posting area. What I am promoting are separate, but directly accessible, pages which contain material that is more focused, more scientific (or engineering, or construction, ...) and has been "accepted" as meeting some basic criteria.

I agree that there is the concern about wishing to be welcoming to anyone expressing an interest and about not being 'too exclusive'. However, science (ideally) is not about feelings; it is about curiosity, learning, observation, data, and verifiable fact, which should not be diluted by illusion. Science is also all about learning -- which is at a unique stage for everyone. However, that is orthogonal to this topic for PLab. PLab's outreach is already organized to be welcoming -- now it's just time to organize what arrived as a result of that effort, and that organization should be on the basis of science. Make sense?



@viechdokter suggests that I'm proposing a 'gatekeeper' but that's too simplistic and I'm not. Sorry for any confusion from what I wrote. Sure, keep the existing open submit mode -- it is a great opportunity to welcome new ideas.

I'm suggesting that there needs to be a separate, yet directly visible, page(s) for 'reviewed' notes that originally were in the general 'open' category. A new visitor would find recent notes but also a list of sub-topics where 'filtered' notes are placed. This allows the visitor to visually browse on-topic notes with sufficient documentation to be most helpful. In this case, search can only provide what has been pre-filtered by humans.



@stoft, my apologies for misinterpreting your comment. From a software perspective, having an additional/parallel process to review (and "mark", accordingly) would fit right into what I was thinking.



@viechdokter, no problem; it's a community of friends. I've written enough code to understand the value of diverse data; algorithms can only be as good as the data that supports them. Tags, for example, suggest potential relationships, but say nothing about the degree of correlation. Cheers.



Here is a proposal which might reconcile a few of the ideas presented here including:

  1. @stoft’s great idea of using a checklist to determine if a research note meets some minimum standard,
  2. @david-days' reference to community rankings, and
  3. @Liz and @Warren's morbid fear of an accreditation body.

Proposal for peer review of research notes

  1. Establish a checklist of qualities desirable in a “research grade” research note. For example, rank these from 1 to 5:

    a. Relevance -- to Public Lab’s mission, tools, hopes, and dreams
    b. Rigor – How convincing are the conclusions? (Sample size, good technique, logical arguments, etc.)
    c. Reproducibility – How well are the methods documented?
    d. Intent – Why was this project done? For fun? For self-promotion? To make the world a better place? To contribute to the Public Lab community?
    e. Legibility – Is the information accessible to the Public Lab community? Are the relevance and conclusions clearly explained?
  2. Anyone can participate in the ranking of any research note. A publiclab.org account is needed, and the user name will be attached to every ranking. Rankings are displayed with each research note. Every ranking must include all items on the checklist.
  3. Each research note will be evaluated with a simple rule (e.g., Research Grade requires no rank lower than 3 and an average total score of at least 19 -- see the sketch after this list).
  4. The ranking period is finite. After maybe two to four weeks, the status of the research note is locked.
  5. Research notes receiving no rankings are just like most research notes today, soon pushed down the page and forgotten.
  6. Appeals can be made in the comments of the research note. Duke it out with the community. If you don’t like the result, post another research note.
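
For concreteness, here is a minimal sketch (in Python) of how the rule in item 3 could be evaluated. The criteria names and thresholds follow the proposal above, but the data layout and function are illustrative assumptions, not an existing publiclab.org feature.

```python
# Minimal sketch of the proposed "research grade" rule (item 3 above).
# Criteria names and thresholds follow the proposal; the data layout is
# an illustrative assumption, not an implemented publiclab.org feature.

CRITERIA = ["relevance", "rigor", "reproducibility", "intent", "legibility"]

def is_research_grade(rankings, min_rank=3, min_avg_total=19):
    """rankings: one dict per reviewer, mapping each criterion to a 1-5 score."""
    if not rankings:
        return False  # unranked notes remain ordinary research notes
    for r in rankings:
        if any(r[c] < min_rank for c in CRITERIA):
            return False  # a single low score sinks the note (as originally proposed)
    totals = [sum(r[c] for c in CRITERIA) for r in rankings]
    return sum(totals) / len(totals) >= min_avg_total

# Example: two reviewers, each scoring all five criteria out of 5.
rankings = [
    {"relevance": 4, "rigor": 3, "reproducibility": 5, "intent": 4, "legibility": 4},
    {"relevance": 5, "rigor": 4, "reproducibility": 4, "intent": 3, "legibility": 4},
]
print(is_research_grade(rankings))  # True: no score below 3, average total 20.0
```

(Per the amendment later in this thread, the "no rank below 3" poison pill could simply be dropped by removing that early return.)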

This type of process can be tampered with, e.g., by getting your friends to rank your trashy note. So the ultimate authority must rest with the Public Lab staff to intervene if shenanigans are suspected.

One weakness of this system is that the author can edit the research note during the ranking period, and some rankers might then want to change their rankings. So ranking a note is very easy and quick (several checkboxes) but it carries some responsibility to monitor the note during the ranking period for improvements or unacceptable changes.

Another weakness is that an author can edit a “research grade” note after the ranking period and revert it to a state deemed unacceptable by some rankers. But who would do that?

Chris



I think that @cfastie has a good point...or set of points.

One slight modification would be to put weights on the users and rankings -- a combination of the user's own reputation and the scoring system. That way a "friends mark you up" scenario has a much smaller effect than a recognized expert going one way or another.

Alternatively a truly useful piece of work can rise up by popular acclaim.

Even better, the work load is more distributed... Public Lab employees don't have to grind through their review queue every day, but they could adjust a reviewer's weight to moderate the system.

From a search result and sorting perspective this is just an additional attribute to be used to rank results.

This could work well with requests for review and verification or replication. Additional weights that modify the relevance of research and results.



That's very clever that users with good reputations can counteract unscrupulous newbies gaming the system for their buds. It would be cool if users who publish lots of "research grade" notes have more power. But that produces an insider class which can throw its weight around. Even if the insiders have earned their powers (DC and Marvel need not apply), they still become a special class.

One amendment to my rules: I proposed that any ranking below 3 (out of 5) would be a poison pill. That makes it too easy for any ranker to sink a note. It also makes it more likely that a sunk note can be blamed on one person, and it is very easy to retaliate by returning the favor. So even notes with some poor rankings should stay in the running.



@cfastie does have some good points; clear rank topics, a finite review period, and minimum rank numbers in some form. I'd also agree that pitfalls are difficult to avoid with any system. However, I think the large number of existing notes also needs to be considered. If the 'members at large' are expected to somehow 'promote' candidates, it is unlikely that a random process will comb the entire database of material. That is, in part, why I suggested creating a 'selection bucket' or 'pool' of possible notes to consider. Searching for potential candidates is probably a more efficient process than doing reviews. Perhaps anyone could select any note for inclusion in the pool (with the promoter's name attached). This way, there would likely be a more reasonable subset to review and there would be a specific place to browse for candidates to read, review and rank.

One other thought triggered by these comments: if a reviewer gave a rank of 3+, a brief comment (as to why the note should pass review) must be included; otherwise it is too easy to just check boxes without real justification.



Well, if you look at the math that would support it, it actually becomes a bit more nuanced. (We can always adjust).

However, let's suppose that we want a fairly open-ended ranking system--we don't want to put an automatic "god-level" that has too much power, at least until we get some experience under our belt.

So an individual review would be ReviewWeight = UserRep * ReviewPoints;

A total review weight would be averaged over the reviewer's reputation points: TotalReview = Sum(ReviewWeight) / Sum(UserReps)

If a user with a reputation value of 100 gives a worse review than a reviewer with a reputation of 1000 (or vice versa), the higher-rep reviewer has more influence.

The UserRep value could be a sum of their own TotalReview results, Participation points (either automatically summed or "awarded" by PublicLab representatives), and even an adjustment value (either additive or multiplicative) to ameliorate or boost a particular user.

(The last case could be used for someone trying to game the system with research "spam", while also allowing additional weight/credence for a user who comes in with, or develops, street cred.)
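
To make the arithmetic concrete, here is a small sketch of that reputation-weighted average. The names mirror the pseudocode above (UserRep, ReviewPoints, TotalReview); the reputation values and scores are made up.

```python
# Sketch of the reputation-weighted review average described above.
# Variable names mirror the comment's pseudocode; the numbers are made up.

def total_review(reviews):
    """reviews: list of (user_rep, review_points) pairs for one research note."""
    weighted_sum = sum(user_rep * points for user_rep, points in reviews)
    total_rep = sum(user_rep for user_rep, _ in reviews)
    return weighted_sum / total_rep if total_rep else 0.0

# A 1000-rep reviewer scoring 4 outweighs a 100-rep reviewer scoring 1:
print(total_review([(1000, 4), (100, 1)]))  # ~3.7, pulled toward the high-rep review
```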



Hmmm, my guess is that a realistic system for scientific material from such diverse sources cannot simply rely on a set of automated rules and search algorithms. At a minimum, such a system is inherently unstable under conditions which cannot be known a priori. Even just to prevent / clean up the odd cases will require human oversight by 'PLab'. Even with game theory, it's extremely difficult to design a system which satisfies both technical standards and assures that everyone is motivated to 'play fair' WRT the same goal. It's not the creation of an 'elitist' class, it's to place common-sense boundaries on a partially automated, partially intractable problem.



@stoft--I think that is a very good summary of the articles that @liz and @warren posted as references. It is also a good reason to at least try to create a system that supports PLab's mission.

Some of the "give" to create a balanced system will have to come from both sides of the spectrum, I suspect. To keep expanding on the "citizen scientist" (sorry--I'm still looking for a better term) participation, it should be pretty open for people to use and post results.

At the same time, however, good science (both as a process and as a goal) would seem to require a way to boost well-done research and at least flag or notify something that is questionable.

I'm kind of warming up to the weighted review idea--it also works for replication and verifications (just two other types of reviews). There are a lot of fiddly details that would need to be worked out, but the final arbiters would be the PublicLab folks.



I had not considered legacy research notes, but I have a different view of them. Implementing some form of review has two main consequences: it separates the wheat from the chaff, and it encourages people to start making the wheat better. So dangling the prospect of research grade notes will raise the bar for some note posters. If the eventual minimum criteria for research grade notes are anything like the ones being discussed here, research note quality could blossom. With those new criteria, there may not be many old notes that qualify.

So in a sense we are making a fresh start. However, there are certainly a handful of old notes that should be highlighted. With 20 minutes of time from 10 particular people, we can probably come up with a list of most of the old notes anybody should care about. These can just be given ad hoc approval and maybe a special label (Fine Vintage Research) and be listed and searchable with the new crop of research grade notes. Whenever someone finds another worthy note from before the ranking system, those 10 people can vote to slap the special label on it. This is the dreaded accreditation body, but it’s okay, it’s just a bunch of old research notes.



@stoft’s concerns about the effectiveness of a crowd-based and rule-based ranking system are probably justified because the active publiclab community is too small. Most research notes get no comments, no likes, and no barnstars. If an open, crowd-based ranking system was implemented tomorrow, I could probably name the two dozen or so people who would do most of the ranking. With numbers that low, artifacts will be common. The two dozen people on my list will have to be vigilant and rank any note that is trending inappropriately. My list then becomes an accreditation body, and all is for naught.

If I am wrong and many people join in the ranking process, there is a good chance that many of them will not share the same understanding of the importance of rigor or reproducibility, or even intent or legibility.

But it might work well enough to give it a try.



I agree with @cfastie that such a process should encourage contributors to improve upon documenting their work and I think "PLab" can take a leading position in that trend. However, I also think the concept of a subset of all PLab 'members' as a review committee is both acceptable and necessary -- largely because it's the most efficient means of achieving critical thinking on the filtering process. Many contributors are, in addition to their interest and enthusiasm, actually quite interested in feedback as well. I suspect a loosely-knit PLab version of an "accreditation body" is what is required to efficiently accomplish the organizing task, and that such a group would neither be 'chartered' nor seek the same kind of role that exists for science journal publishing. This is PLab, not Nature or JAMA, we're talking about. The objective is not exclusivity or elitism but to raise the bar of aspiration toward better science and train those who aspire but do not yet understand. When there are anomalies, then deal with them openly; otherwise, get the job done and spend most of the effort to support the enthusiastic contributors to learn and contribute more science.



I still wonder... if the little boy coincidentally meets Sir Isaac Newton in the garden and asks about the apple and the moon he might just get an answer (or make the genius think about it). But I don't see Sir Isaac reading a little boy's school essays on a regular basis to rank them...

@cfastie: Your ranking proposal gives me a bit of tummy ache:

"a. Relevance -- to Public Lab's mission, tools, hopes, and dreams": If anyone had ever ranked the first scientific papers on radioactivity, semiconducting materials, or even fire for relevance at that time, we wouldn't have PET scanners, computers, or well-done steaks. I believe that science comes from curiosity and my curiosity can probably be of no relevance for others. But I might stumble across something that might become relevant later on. Newton, Einstein, Dirac did not "make everything up", they just put things together very cleverly -- things that were not sooooo relevant until then.

"b. Rigor -- How convincing are the conclusions? (Sample size, good technique, logical arguments, etc.)": I just read a note here on SWB about ethanol/NaOH dilution of fluorescein, I think. I read about conjugations, about protons and so on, which was way above me. (My last school chemistry lessons were 30 years ago.) How could any community member without such a chemistry background ever rank sample sizes and so on? My guess is, the scientifically better the papers become, the fewer people read and understand them, so ranking is not really a public thing.

"c. Reproducibility -- How well are the methods documented?": That is a very good point. For instance, I started to put my data sheets into the research notes, so that others can re-check the data if they want to. I would love to see more photos or sketches with annotations of the setup of an experiment so I would be able to do the same experiment to see if I get the same data. That would help reproducibility a lot.

"d. Intent -- Why was this project done? For fun? For self-promotion? To make the world a better place? To contribute to the Public Lab community?": Intent? Does that play any role here? Where would you rank the theory of special relativity then? Or what was Darwin's intent? Annoying the church? I admit that I would like to know what the intent of a researcher was when he comes up with something new, but it should not affect the ranking at all. Most science seems to be curiosity driven (on the professor level at least). Would that rank below "making the world a better place"?

"e. Legibility -- Is the information accessible to the Public Lab community? Are the relevance and conclusions clearly explained?": Good point.

"4. The ranking period is finite. After maybe two to four weeks, the status of the research note is locked": In the 3rd century before Christ the Indian mathematician Pingala invented the first dual number system. It was not relevant at that time, it was not intended to please anyone or make the world a better place, and with a ranking period of four weeks it would never have made it anywhere... ;-) I know why you want to lock things at a certain point to be able to do rankings, but I fear that the "technical issue" here stops the "flow".

"Appeals can be made...": Sounds like EU bureaucracy to me. You would need an appeals court or board then. BTW, how often can one user appeal against a ranking of one note?



@stoft: I support the idea of a review committee. Not for ranking but for advice. Makes much more sense in promoting the quality stuff and in advising the others how to improve their work. A gentle push here and there and an open talk about ideas and concepts lets things flow better than any ranking could do. For me the human factor is much more important than mathematical expressions of the quality or weighting of a paper. I remember the movie "Dead Poets Society" where a teacher wanted to rank poetry for "importance" and "language quality" and put poems into a diagram. That's what I fear might happen here, too.



@viechdokter, perhaps even here, terminology can be problematic. Example: 'ranking'. There really are many interpretations and implications of the word. So, I'm going to suggest that the process behind the term define the term. In this case, I'd suggest the process is primarily to "sort" documents which meet PLab requirements such as on-topic, clear, supportable, complete, factual, etc. Yes, some notes may be much more professional than other notes, but that range of presentation can all still be "accepted" under the umbrella of a thoughtful scientific process. In parallel, when a note has value but is missing a description (for example), then the feedback to the author should be suggestions for what to 'fill in' to make the work more usable, of greater value, and in line with PLab's goals. As I see it, PLab is dedicated to openness and availability of science, but that does not mean they cannot or should not maintain standards. I agree, "ranking" of art is a silly concept -- but here we're discussing quality of science.

One possible result of this process is the creation of a "charter" of how this 'filter' is to function and what the goals are. If that is open and transparent, and most agree it fits PLab's own charter, then the process may provide real value.



@viechdokter,

I would implement @stoft's and @cfastie's description as an additional process, which makes it immediately backwards compatible.

By "additional process", I mean that, wiki and research entries go in as usual. Inside the database is a set of tables that hold the reviews for each entry. An entry may have zero or more reviews. If an entry/research note/data set has no entries, it is presented the same as everything is now on a search.

However, if an entry has one or more reviews, then a search that is sorted by "review rank" will place it appropriately, based on everything that is discussed.

Users who wish to focus on the peer-reviewed and -backed work will tend to look at the "review rank" sorting, while those who wish to see a wider view will have everything sorted differently.

That, at least, is my suggestion. Most places make it one or the other, so you have Youtube (everybody puts up everything) vs. the online JAMA (where it takes a long time to get through the gatekeepers... but it is certainly rigorous).
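
As a rough illustration of the "additional process" idea (reviews stored alongside, never blocking publication, and only affecting an opt-in "review rank" sort), here is a small sketch. The field names and scores are hypothetical, not publiclab.org's actual schema.

```python
# Sketch of the "additional process": reviews live in their own table/list and
# only matter when the reader sorts by review rank. Field names and scores are
# hypothetical, not the actual publiclab.org schema.

from statistics import mean

notes = [
    {"id": 1, "title": "Spectrometer noise tests", "reviews": [4.2, 3.8]},
    {"id": 2, "title": "First balloon mapping attempt", "reviews": []},  # no reviews yet
    {"id": 3, "title": "Watershed conductivity data", "reviews": [4.9]},
]

def review_rank(note):
    # Unreviewed notes get a neutral rank and sort after reviewed ones,
    # but they are never hidden from the results.
    return mean(note["reviews"]) if note["reviews"] else 0.0

default_order = notes                                          # everything, as it works now
by_review_rank = sorted(notes, key=review_rank, reverse=True)  # opt-in "review rank" view
print([n["id"] for n in by_review_rank])  # [3, 1, 2]
```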



"However, science (ideally) is not about feelings"

I just wanted to clarify that the goal of encouraging new contributors is not in order to preserve their feelings, but -- by being welcoming and accessible -- to broaden participation and further our research. Thanks for your thoughts though, and I'm 100% in agreement that "now it's just time to organize what arrived as a result of that effort"!

"@Liz and @Warren's morbid fear of an accreditation body."

I know you're half joking here, Chris, but it's not an irrational fear -- I think we're just pointing out an assumption some are making that there is a clear shared metric for "recognized experts" -- which I don't think is well defined in our community. The concern is not that there might be an accreditation body, but rather to make sure that whatever review process we do develop is fair, empirical, transparent, and... perhaps not unduly favoring of a specific cultural or professional background?

"rests with the Public Lab staff"

Before giving the staff more privileges, we might look to the new/emerging moderators group for this sort of thing :-) -- https://publiclab.org/wiki/moderation

"can edit the research note during the ranking period"

I'd like it if folks tagged/marked their work as "draft", shared early, and specifically invited comment, further opening the process, before final "publication". That would change the schedule and proposal here a bit.

"an insider class which can throw its weight around"

I agree with Chris on the need to watch out for this. The potential negative is if that insider group is hard to get into, or un-receptive to change/criticism, or begins to rely on its insider status for legitimacy rather than empiricism/transparency/rigor. This is very related to the liz/jeff "fear" chris described :-)

One way to try to counterbalance this is to require that reviewers explain their rankings publicly, and be held to a rigorous standard to support their criticisms -- i.e., it should not be enough to say "that's not how it's done" or something.

I have an idea, building on some of @viechdokter's good points, that touches on a lot of this. What if we considered the various goals of a review process, and emphasized one in particular that's been mentioned -- that a review body would actively encourage and support good research writing and documentation. Instead of thinking of it as (and structuring it as) a group solely for judging the merits of completed work, we could frame this as a group which uses structured feedback (perhaps along the lines of Chris's proposal) to provide constructive feedback for people to refine their "drafts" into more fully fledged notes. It could be called a "publishing assistance group" or something better-sounding but along those lines. Review assessments (and a listing of great notes) would result, but they would just be one output; a major one would be the support of new contributors in writing great, rigorous, replicable research.



@Warren's thought about 'publishing assistance' is a good one, and one which inherently contains a filter mechanism. Orthogonal to a list of topic-based categories (spectrometer, mapping, water, etc.) is the 'type' (yes, a rather overloaded term) of work -- in this instance I'm referring to data analysis vs. engineering a design vs. construction technique vs. measurement accuracy, etc. Each will have a different degree of technical challenge, but all have the need for solid documentation in common. So, in terms of "acceptance" criteria, all are subject to the requirements of including the necessary document components, all must show a logical path through what is presented, and all must show support for any conclusions drawn from what is presented -- and all of this is rooted in the principles of good science, good engineering and good communication. In addition, this general goal encourages feedback to the author.



Hi folks, I'm late to this comment thread, but am really excited about it. I generally want to support and echo many of @stoft's comments, particularly "The objective is not exclusivity or elitism but to raise the bar of aspiration toward better science and train those who aspire but do not yet understand. When there are anomalies, then deal with them openly; otherwise, get the job done and spend most of the effort to support the enthusiastic contributors to learn and contribute more science", which is also in line with some of @cfastie's and @warren's visions for peer review in Public Lab. I wholeheartedly agree with this and think we should frame peer review in a capacity-building sense, making "teachable moments" a core piece of the review process. What would you all think about a process like this:

  • People self-identify their note type / intent (because, @viechdokter, I do think intent matters and helps people know what kind of feedback is useful -- is this a fun idea where you don't necessarily want feedback on rigor? are you trying to identify chemical interferences and you want nitty-gritty detail questions? etc)

  • People self-identify if they want peer-review, and if so their note is marked "research-grade draft"

  • The peer-review process is transparent and includes constructive criticism, and numerical ranking for a rubric of factors

  • If necessary, the author implements the changes they deem necessary and sends back to review for an updated numerical rubric

  • The note is then published as reviewed research-grade, with the post-review rubric at the bottom of it, and a link to the draft and first-draft rubric. That way, the final version is most visible, but the draft and review can be used as learning opportunities for anyone.

I'd like to see perhaps two bins on the dashboard: reviewed research notes, and general research notes. I don't think you should have to specifically search for reviewed notes (as I think I interpreted from one of @david-days' comments), because that likely wouldn't be apparent to newcomers. I'd like a place for all research notes, but a visible distinction for those that have been reviewed.
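
One way to picture this opt-in workflow is as a small set of note states and transitions. Here is a rough sketch, with state names taken from the comment above; the exact transitions are only an assumption, not an implemented site feature.

```python
# Tiny state-machine sketch of the opt-in review workflow described above.
# State names follow the comment; the transitions are an assumption.

TRANSITIONS = {
    "draft": {
        "publish_as_is": "general research note",     # author skips review
        "request_review": "research-grade draft",     # author opts in to review
    },
    "research-grade draft": {
        "revise": "research-grade draft",              # author addresses reviewer feedback
        "review_complete": "reviewed research-grade",  # rubric attached, note published
    },
}

def next_state(state, event):
    try:
        return TRANSITIONS[state][event]
    except KeyError:
        raise ValueError(f"event '{event}' is not valid from state '{state}'")

print(next_state("draft", "request_review"))                  # research-grade draft
print(next_state("research-grade draft", "review_complete"))  # reviewed research-grade
```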



I like your approach and like keeping the top-level choice up front. I'm also thinking that perhaps there should be a pre-defined list of categories, as not all submitted notes are research; they range from just questions, to single observations, to construction sequences, to piles of data without proper annotation. This also suggests having an 'Other' or 'Misc' category for entries which do not readily fit the pre-defined list. However, those could all be tagged either as 'request for review', or not, by the author.



Reposting resources that OpenAIRE sent out last week:

For this last day of Peer Review week, OpenAIRE is releasing a new report detailing three experiments in Open Peer Review (https://doi.org/10.5281/zenodo.154647). All readers are invited to give their open feedback here: https://blogs.openaire.eu/?p=1269

More details are below.

Best, Tony Ross-Hellauer on the open-science mailing list

From the OpenAIRE blog: https://blogs.openaire.eu/?p=1269

As part of its mission to further Open Science and investigate how openness and transparency can improve scientific processes, OpenAIRE has been conducting a range of activities investigating the new models of peer review to literature and beyond that fall under the term “Open Peer Review” (OPR). Amongst other activities like our OPR workshop, stakeholder survey and (ongoing) attempt to formalise the definition of OPR, in late 2015/early 2016, OpenAIRE played host to three innovative experiments that aimed at promoting experimentation in OPR, studying its effects in the context of digital infrastructures for open scholarship, and investigating ways in which OPR technologies might integrate with OpenAIRE’s infrastructure. The three experiments were diverse in their aims and methods:

  • Open Scholar and their consortium aimed to turn repositories into functional evaluation platforms by building an open peer review module for integration with Open Access repositories and then implementing this module in two high-profile institutional repositories.
  • The Winnower sought to integrate the Winnower platform with repositories like Zenodo via DOIs and APIs to facilitate open peer review of repository objects, as well as offering financial incentives to encourage open participation in the sharing of 'journal club' reviews and documenting user experiences via survey.
  • OpenEdition aimed to use open services such as the annotation software hypotheses.org and OpenEdition's platform for academic blogs to model a workflow (selection, review and revision) that would develop blog articles into peer reviewed publications in the Humanities and Social Sciences.

The full report of the outcomes of these experiments is now available via Zenodo: https://doi.org/10.5281/zenodo.154647

We would very much welcome any feedback on this report from any and all interested parties. This could take the form of a formal review of the report as a publication, a comment on your judgement of the value of the work contained therein, or just a quick note to advise of any formatting/language issues that should be addressed in any future version.

Please make use of the commentary function of this blogpost to leave your feedback on the report. All comments will be gratefully received: https://blogs.openaire.eu/?p=1269

NOTE: OpenAIRE would also like to know what you think about open peer review! Take part in our survey here until 7th October!



Here's a link to a journal article from @philipsilva:

  "1. Introduction: This paper will provide an historical description of the development of the scholarly journal article which, in many disciplines such as science, technology and medicine, is the primary accepted means of formal communication - comparable to the automobile and the bicycle as a means for private transport. Will this change and how quickly?"

http://docs.lib.purdue.edu/cgi/viewcontent.cgi?article=1389&context=iatul




