I spent today at the CivilServant summit, where Nathan Matias and others convened a day of presentations on moderation and collaboration -- CivilServant.io is a platform for comparing and testing moderation systems -- along with discussions of tools for facilitating and moderating online discourse. The event was funded by the Knight Foundation, the MacArthur Foundation, and the Tow Center for Digital Journalism.
In opening remarks, Nathan mentioned a paper which I found pretty fascinating!
The Rise and Decline of an Open Collaboration System: How Wikipedia's Reaction to Popularity Is Causing Its Decline
by Aaron Halfaker, R. Stuart Geiger, Jonathan T. Morgan, John Riedl
I was super interested to see this graph, where the authors argue that Wikipedia's shift from growth to slow decline in active editors coincided with the adoption of automated algorithmic editorial tools that rejected contributions more aggressively:
Open collaboration systems like Wikipedia need to maintain a pool of volunteer contributors in order to remain relevant. Wikipedia was created through a tremendous number of contributions by millions of contributors. However, recent research has shown that the number of active contributors in Wikipedia has been declining steadily for years, and suggests that a sharp decline in the retention of newcomers is the cause. This paper presents data that show that several changes the Wikipedia community made to manage quality and consistency in the face of a massive growth in participation have ironically crippled the very growth they were designed to manage. Specifically, the restrictiveness of the encyclopedia's primary quality control mechanism and the algorithmic tools used to reject contributions are implicated as key causes of decreased newcomer retention. Further, the community's formal mechanisms for norm articulation are shown to have calcified against changes -- especially changes proposed by newer editors.
I also heard from Karrie Karahalios about her work on racial and other bias in algorithmic systems, including a racially biased robotic beauty contest, racist and discriminatory automated "recidivism scoring," and other deeply disturbing stories of algorithmic misuse. She particularly pointed to the Obama-era report Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights:
I was also interested in the tool Gobo, which lets you filter your own social media feed by different metrics -- instead of letting Twitter do it for you, for example. It gets at the non-neutrality of algorithmic filtering and the lack of agency social media users have in filtering by metrics that might be more socially or culturally relevant to them -- although it's not immediately clear how these metrics are evaluated: "politics", "seriousness", "rudeness", "gender", "brands", and "virality".
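To make the idea concrete, here's a minimal sketch of what user-controlled, metric-based feed filtering could look like. The metric names and scoring are assumptions for illustration -- this is not Gobo's actual API, just the general pattern of letting the reader, rather than the platform, set the thresholds:

```python
# Hypothetical sketch of metric-based feed filtering in the spirit of Gobo.
# Each post is assumed to carry precomputed scores in [0, 1]; the user
# chooses the thresholds instead of the platform choosing for them.

def filter_feed(posts, max_rudeness=0.5, min_seriousness=0.0):
    """Keep posts whose metric scores pass the user-chosen thresholds."""
    return [
        p for p in posts
        if p["rudeness"] <= max_rudeness
        and p["seriousness"] >= min_seriousness
    ]

feed = [
    {"text": "Thoughtful policy thread", "rudeness": 0.1, "seriousness": 0.9},
    {"text": "Inflammatory hot take",    "rudeness": 0.9, "seriousness": 0.2},
]

# With the defaults, only the first post survives the filter.
for post in filter_feed(feed):
    print(post["text"])
```

The interesting design question Gobo raises isn't the filtering itself (which is trivial, as above) but who computes the scores and who sets the thresholds.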
But watch out -- since it's part of an academic experiment, it does require IRB-approved "consent to be studied" before you can use it:
Not to bury the lede, but I was also struck by this study by CivilServant and the /r/science subreddit, which showed that:
Sticking a rule comment to the top of discussion threads increased a newcomer's probability of posting a first comment within the rules, from a fitted chance of 75.2% to a fitted chance of 82.4% on average within r/science
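A "fitted chance" here means a probability estimated from a statistical model rather than a raw count. As a simplified illustration with made-up numbers (NOT the study's data, which used a fitted model over many threads), the unadjusted version of such a comparison is just a difference in proportions:

```python
# Illustrative only: hypothetical counts showing how a difference in
# newcomer rule compliance might be summarized. The actual r/science
# study reports model-fitted probabilities; these are raw proportions.

def proportion(compliant, total):
    """Share of newcomer first comments that followed the rules."""
    return compliant / total

# Made-up counts: threads without vs. with a stickied rules comment,
# chosen to echo the 75.2% -> 82.4% fitted chances from the study.
p_control = proportion(752, 1000)   # no rules comment stickied
p_treated = proportion(824, 1000)   # rules comment stickied on top

effect = p_treated - p_control
print(f"control: {p_control:.1%}, treated: {p_treated:.1%}, "
      f"difference: {effect:+.1%}")
```

The study's actual strength is that threads were randomly assigned to receive the sticky comment or not, so a difference like this can be read causally rather than as a mere correlation.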
What does "rule posting" actually look like?
Here's an example from the /r/politics subreddit, and an example of it failing: