**Welcome! This is the home for all things related to evaluation at Public Lab.** Many different feedback efforts are ongoing in different sectors, and we coordinate our efforts to minimize survey fatigue and redundancy. @liz leads the evaluation team! See recent work related to evaluation [here](/tag/evaluation), and ask questions below to find out more.

## What are we measuring towards?

[![LogicModel_headers.jpg](/i/24885)](/i/24885)

**All evaluation is tracked against our [Logic Model](https://docs.google.com/document/d/1N2GnPoe2gaqb5eMWnJOdcdheSmL6KzbkIHUCeTMup4U/edit), and the terms in the Logic Model are defined in our [Community Glossary](/glossary).**

The creation of this Logic Model, and the Snapshot Evaluation and Evaluation Framework based on it, was generously supported by the Rita Allen Foundation (May 2015-May 2018), with additional support from the Listen For Good Project.

## Why we evaluate

The Public Lab community intentionally works together to create a place where collaboration thrives. We collaborate on collaboration. We seek to collectively and publicly understand how we ourselves work together, and the systems, conventions, and structures which shape that cooperative practice. To do this better, we need feedback loops that add to our self-awareness.

The feedback we wish we could see includes additional statistics about our community's activity, especially where there are gaps: for instance, community questions languishing unanswered, which can be heartbreaking when the topic is environmental health. We would also like to identify emerging topics in real time in order to better tune outreach; this helps us ensure that diversity stays high even as early adopters rush in.

As Chris Kelty famously wrote of his concept "recursive publics," "[we] are the builders and imaginers of this space." This theme stretches across the FLOSS community, and increasing our self-awareness will help us eliminate our collective blind spots.
As FLOSS publics strive to broaden in diversity and inclusivity, careful monitoring of where onboarding processes fail is critical. By watching channels and identifying people who connect with the community in one or more ways, we hope to become aware of the ways that people first connect with Public Lab, and what their second, third, and subsequent steps may be. If there are no subsequent steps, what stopped people who had started to engage from participating further?

****

## How are we measuring?

### Community Surveys

Formerly, a one-size-fits-all [Annual Community Survey](/notes/liz/06-13-2017/your-input-kindly-requested) was delivered over email lists and posted on the website: 2017_Public_Lab_Community_Survey_.pdf. We have now replaced that low-response format with multiple surveys that reach specific segments of our community who are having shared experiences.

* **People attending live events, in person or remotely**: we use a modified Listen4Good template with Net Promoter Score questions as well as 6 customized questions about our mission and respondent demographics. Delivered via SurveyMonkey.
* **Barnraisers**, examples: 2016_Annual_Barnraising_Feedback_Survey_.pdf and 2017_Regional_Barnraising_Survey.pdf
* **Software Contributors**, example: [2017 survey](https://docs.google.com/forms/d/e/1FAIpQLSeMFVQ4NNcNRzIAwsWY1bZrrQDIeVh3s399h8dkPzVJ2I-pHA/viewform), delivered via GitHub
* **[Organizers](/organizers)**, example: [2017 Survey as GoogleForm](https://docs.google.com/forms/d/1jrBEmxB6oAnoEixJysqrJaY-xjMLr4vGIwcdMvR3Jnk/edit) and 2017_Organizer_Survey.pdf, delivered via email and personal direct outreach
* **Partnering Organizations** update their activity every year on [publiclab.org/partners](/partners)

****

### Stakeholder interviewing

**A series of stakeholder interviews was done in 2017!
You can read them here:** [notes:series:community-interviews]

****

### Online analytics

**Statistics on community activity are publicly displayed at http://publiclab.org/stats.** Experiment with customizing your own queries of publiclab.org activity by adjusting the DD-MM-YYYY dates in the URL, for example: https://publiclab.org/stats/range/31-01-2016/31-12-2016/

**Research into pathways through Public Lab's ecosystem is located at https://publiclab.org/first-contact.**

**The ever-growing [Data Dictionary](https://docs.google.com/document/d/167Y-oW7oA4i9zwyr00uygQx7PaIU_uUJ2kVQU3lU6aE/edit#) describes the datasets that are available for analysis. Created by @bsugar, maintained by @bsugar and @liz.**

**Topics include:**

* **Conversational dynamics on mailing lists:** [![2017-07-12_mailing_list_activity.png](/i/24878)](/i/24878)
* **Rhythms of community activity on publiclab.org:** [![Screen_Shot_2018-05-10_at_2.59.16_PM.png](/i/24886)](/i/24886)

### User interface design

**See the [User Interface](/ui) page for more on design work towards user interface and user interaction improvements. This is an area where many people are offering feedback!**

### Other interesting views of the Public Lab community over time

* https://publiclab.org/community-development
* https://publiclab.org/stories

****

## Questions

[questions:evaluation]

****

## Related work

[notes:evaluation]

****

### Older page content

From 2014 via @liz: brainstorming [possible community metrics](https://docs.google.com/document/d/1ZnmTco7zaizEelP1awDuWSTZ_tNg9NSpqPKM9XbdO-c/edit)

From 2011 via @warren, interesting! Read on:

On this page we are in the process of summarizing and formulating our approach towards self-evaluation; as a community with strong principles, where we engage in open participation and advocacy in our partner communities, this process is not that of a typical researcher/participant nature.
Rather, we seek to formulate an evaluative approach that takes into account:

* multiple audiences - feedback for local communities, for ourselves, for institutions looking to adopt our data, for funding agencies, etc.
* reflexivity - we may work with local partners to formulate an evaluative strategy, and this may often include questionnaires, surveys, and interviews in which we take part both as subjects and as investigators
* outreach - by publishing evaluations in a variety of formats, we may employ diverse tactics to better understand and refine our work; publication in diverse venues (journals, newspapers, white papers, video, public presentations, etc.) offers us an opportunity to reach out to various fields (ecology, law, social science, technology, aid)
* location - our evaluations should be situated in geographic communities, examining the effects of our tools and data production in collaboration with a specific group of residents

## Goals

Good evaluative approaches could enable us to:

* quantify our data and present it to scientific and government agencies for use in research, legal, and other contexts
* provide rich feedback for field mappers (in the case of [balloon mapping](/tool/balloon-mapping)) and other public scientists to improve their techniques
* assess the effects of our work on local communities and situations of environmental (and other types of) conflict
* involve local partners in the quantification and interpretation of our joint work
* ...

## Approaches

We're going to use a few different approaches in performing (self-)evaluation -- each has pros and cons, but we will attempt to meet the above goals in structuring them.

## Approach A: Logbook questionnaire

The logbook is an idea for a Lulu.com printed book to bring on field mapping missions for [balloon mapping](/tool/balloon-mapping). Although this strategy can be reductive compared to interviews, videos, etc., its standard approach yields data which we can graph, analyze, and publish for public use.
The results will be published here periodically. Any member of our community may use them for fundraising, outreach, or, for example, to print & carry to the beach to improve mapping technique. Read more at the [Logbook](/wiki/logbook) page.

A mini version of this questionnaire was used by Jen Hudon as part of her [Grassroots Newark](/wiki/grassroots-newark) project and can be found here:

* [Draft questionnaire PDF](/sites/default/files/grassroots-mapping-questionnaire-draft-1.pdf)

## Approach B: Community Blog

The community blog represents a way for members of our community to ... critical as well as positive... To contribute to the community blog, visit the [Community Blog page](/wiki/community-blog).

## Approach C: Interviews

We're beginning a series of journalistic/narrative interviews with residents of the communities we work with. Read more at the [interviews page](/wiki/interviews)....
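The per-range stats endpoint mentioned in the Online analytics section follows a `/stats/range/DD-MM-YYYY/DD-MM-YYYY/` URL pattern, so custom queries can be scripted rather than edited by hand. A minimal sketch (the `stats_range_url` helper is hypothetical; only the URL format comes from this page):

```python
from datetime import date

BASE = "https://publiclab.org/stats/range/"

def stats_range_url(start: date, end: date) -> str:
    """Build a publiclab.org stats URL; the endpoint expects DD-MM-YYYY dates."""
    fmt = "%d-%m-%Y"
    return f"{BASE}{start.strftime(fmt)}/{end.strftime(fmt)}/"

# Reproduces the example range from this page (31 Jan 2016 through 31 Dec 2016):
print(stats_range_url(date(2016, 1, 31), date(2016, 12, 31)))
# → https://publiclab.org/stats/range/31-01-2016/31-12-2016/
```

The resulting URL can then be opened in a browser or fetched with any HTTP client.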
