stories from the Public Lab community
This is Part 1 of an ongoing series of case studies by Gretchen Gehrke, Public Lab's Data Advocate, highlighting different stories of environmental data's use in the Public Lab community. You can find the entire series here.
Above, a false-color composite image incorporating near-infrared and visible-light photographs of a section of the Gowanus Canal, by the Gowanus Low-Altitude Mapping group.
Gowanus Canal in Brooklyn, New York, has been an industrial hub since the nineteenth century. Historically, canal frontage has been used for coal yards, shipyards, dry docks, manufactured gas plants, and a variety of other industrial activities. Due to historic and ongoing industrial contamination and sewage overflows, the U.S. Environmental Protection Agency declared the Gowanus Canal a Superfund site in March 2010. In addition to direct industrial discharges and raw sewage, stormwater runoff constitutes a significant ongoing source of contamination to the Gowanus Canal, transporting metals, industrial chemicals, oils, and sediments from industrial surface yards into the Gowanus.
Riverkeeper is a member-supported watchdog organization dedicated to protecting the Hudson River, and is one of New York’s foremost clean water advocates. Riverkeeper’s boat captain, John Lipscomb, and his assistant, Neale Gulley, routinely patrol the Gowanus Canal to document environmental conditions and spot polluters, and Riverkeeper also maintains a hotline and website for the public to report pollution. When Riverkeeper identifies pollution problems in New York City waters, they often work with Super Law Group, LLC, an environmental law firm with deep expertise in the Clean Water Act that specializes in representing individual citizens and non-profit groups.
In 2012, following boat patrols and public complaints about stormwater pollution in the Gowanus Canal, a team of researchers and lawyers from Riverkeeper and Super Law Group launched a deeper investigation of local sources of stormwater pollution. The team used a combination of Google Maps and the New York City Digital Tax Map to obtain basic information about industrial companies along the canal, but they needed higher resolution images to provide information about actual operations.
A DIY near-infrared map of a section of the Gowanus Canal from July 31, 2011. View on MapKnitter
During an internet search, one member of the team, Edan Rotenberg of Super Law Group, was intrigued to discover infrared images of the Gowanus Canal, captured using a balloon-rigged infrared camera, posted on the Public Lab website. Edan contacted Public Lab’s Director of Community Development, Liz Barry, who connected Edan with Eymund Diegel. Eymund is a lead community researcher on the Gowanus Canal, working with Proteus Gowanus and the Gowanus Canal Conservancy, and serves on the Board of Directors of Public Lab. The Gowanus Canal Conservancy and Public Lab collaborate on a project called Gowanus Low Altitude Mapping (GLAM). Through GLAM, Eymund and other community members have taken hundreds of high-resolution aerial photographs using balloon and kite cameras. Eymund shared these photographs with Edan.
Riverkeeper and Super Law Group sorted the GLAM aerial photographs, looking for clear photographs of industrial plots on the shoreline. In the low-altitude shoreline photographs, the team looked for evidence of faulty equipment or practices that broke permit regulations, such as broken fences that allowed debris to enter the canal, unauthorized open pits, and direct runoff from impermeable paved surfaces into the canal. In a few instances, Riverkeeper and Super Law Group approached companies that were implicated in the GLAM photographs, and the companies responded by voluntarily cleaning up their operations without a lawsuit being filed. In another instance, Edan showed a company the images captured by GLAM, which pressured the company into compliance. In other cases, Riverkeeper and Super Law Group filed suit against the polluters, using the community-collected aerial images as clear proof of pollution; this helped Riverkeeper reach settlements that ended the lawsuits quickly and brought those companies into prompt compliance with pollution laws.
While the Public Lab images were useful to this enforcement effort, there are additional features and collection strategies that would improve the utility of Public Lab aerial images for people interested in initiating similar efforts. A key feature to develop or include would be the automatic logging of date, time, location, and photographer in order to create an automatic start to the chain of custody for the image. The auto-logged location also would enable a person to utilize Public Lab images in other geospatial platforms, such as three-dimensional Geographic Information System (GIS) programs. Possibly the most noteworthy way to improve the utility of Public Lab images, based on Super Law Group’s experience, is to take more frequent images, creating a time series over the course of days, weeks, and months. Time series images are useful for legal proceedings to demonstrate consistent or repeated behaviors, or to demonstrate the progression of a problem. Thermal imagery is also extremely useful for detecting errant water inflows into a larger waterbody, including both stormwater runoff and groundwater discharges from seeps or pipes. These inflows can be difficult to discern with standard photography but often are a different temperature from the receiving water and therefore are distinguishable in thermal images. Near-infrared images can also be useful in identifying different source waters by imaging different algal communities. It was the GLAM infrared imagery that first attracted Edan to Public Lab resources, and he had hoped there would be a more extensive repository of infrared (and thermal) images. Edan postulates that environmental advocates and researchers nationwide would benefit tremendously from easier access to time-series visual and thermal aerial imagery.
Aerial photographs provide stakeholders and legislative decision-makers with compelling visual evidence. Aerial images can demonstrate the wide range of potential contamination pathways into a waterbody, and also remind people of the connectivity of the watershed. Thus, low-altitude high-resolution aerial images may be useful in promoting better environmental regulations and outcomes.
Follow related tags:
new-york-city gowanus-canal brooklyn blog
Check out this column by Chris Berdik in The Hechinger Report
Travis Haas, a New Orleans high school science teacher, says that ever since Hurricane Katrina, his students have endured so many lectures and lessons on the importance and vulnerability of Louisiana’s wetlands that many develop “wetlands fatigue” — rolling their eyes and tuning out faster than you can say “bayou.” Luckily, wetlands fatigue is nothing a little DIY civic science can’t cure.

This past academic year, Haas and his students at New Orleans Center for Creative Arts, an arts-focused high school, worked with the nonprofit Public Lab to monitor the health of urban wetlands. The students also piloted a new hands-on coastal wetlands curriculum, which Public Lab created alongside its core mission — developing low-cost environmental sensors, aerial photography rigs and free, open-source mapping software to give everyone the scientific tools to be environmental stewards.

“These activities got the kids talking about wetlands again, and talking about them in a deeper way than they ever had before,” said Haas, who also heard from parents that kids had come home eager to discuss Louisiana’s rapid wetland loss — about 75 square kilometers a year, according to the U.S. Geological Survey.

Last fall, NOCCA students tromped out to polluted wetlands in New Orleans’ parks and along Lake Pontchartrain. In low-lying, easily flooded New Orleans, the loss of wetlands from development and pollution means more energy and money spent pumping away the storm water that healthy wetlands would have absorbed.

The kids were led by Public Lab’s outreach manager Stevie Lewis, who regularly works with students to map these wetlands from the air, a project backed by the Environmental Protection Agency. The students’ equipment consisted of a point-and-shoot camera fixed inside a clear plastic soda bottle and rigged up for continuous shooting while hoisted high over the park by a giant, red helium balloon.
While NOCCA’s artistic students enjoyed lofting the giant red balloon, they really loved using MapKnitter, Public Lab’s open-source software, to stitch together all those images into maps that could be compared to other maps made over time. “They nailed it,” Haas said. “They are fascinated by film and images. They think in that way.”

The mapping excursions aren’t just field trips. Public Lab’s small staff has a lot of ground to cover. So they count on volunteers, both adults and students, to be true collaborators who make accurate maps that track the wetlands’ health. Two weeks ago, Lewis led another group of high-school students from the city’s lower ninth ward — one of the areas hardest hit by the Katrina flooding — who are fulfilling their schools’ public-service requirements with an environmental nonprofit called Groundwork New Orleans.

“If those students hadn’t been there to help me, I would have had a hard time,” said Lewis. “Public Lab is about collaborative knowledge-building. So students learn, and we learn from working with them. Maybe they figure out a better way to set up the camera or put together the soda-bottle rig.”

“We want to get people thinking openly and creatively about how to solve problems,” she added. “And that’s what students do best.”

In the spring of 2010, as British Petroleum’s disabled Deepwater Horizon drilling platform gushed oil into the Gulf of Mexico, the company convinced federal authorities to make the spill a no-fly zone, which meant no one could see the extent of the slick. In response, a group called Grassroots Mapping teamed up with environmental activists to fly camera-toting balloons and kites over the spill and make maps of the devastation, which were picked up widely by mainstream media. Later that year, the activists formed Public Lab to spread the gospel of DIY environmental science, via online illustrated tutorials, how-to videos and blog reports from citizen scientists around the world.
There are now fifteen Public Lab chapters and a global online community. “We didn’t start off focused on formal education,” said Shannon Dosemagen, Public Lab’s executive director. “But informal education, through peer-to-peer learning, training and field work, is very much embedded in the work we do.”

Indeed, plenty of educators have joined Public Lab’s online community, especially since 2011, when Public Lab started selling inexpensive kits and spare parts for balloon and kite mapping, infrared imaging equipment to monitor plant health and basic spectrometers to test water for pollutants. Teachers and staff at after-school programs and student “maker spaces” buy Public Lab kits in bulk, either for science labs or student participation in environmental restoration efforts, including the Los Angeles River and the Passaic River in New Jersey.

“Putting together a tool helps you understand how it works, how data is collected and analyzed, and how you might be able to adapt it for your particular needs,” said Matthew Lippincott, Public Lab’s production director, based in Portland, Oregon. “It demystifies the scientific process so people can start taking control of environmental problems.”

In fact, Lippincott’s tinkering indirectly led to the curriculum Public Lab piloted with Haas’s students this past spring. The short version of the story is that Lippincott wanted to modify a small camera made for a bare-bones computer called a Raspberry Pi so it could take infrared photos. Thriving plants with healthy amounts of chlorophyll reflect back more near infrared light, which digital cameras can detect but the human eye can’t. Most cameras have a built-in filter that blocks infrared, and that’s the gizmo Lippincott had to remove. “It was a pain in the butt,” he said.
“I posted a YouTube tutorial where I took the camera apart under a microscope with a scalpel.” That tutorial was so popular that the Raspberry Pi’s makers decided to change their camera so it could be more easily modified for infrared. As a thank-you to Lippincott, they asked what Public Lab initiative they could support through their foundation.

The answer was easy. For years, educators who used Public Lab’s tools and tutorials had clamored for lesson plans to go with them. So, backed by the Raspberry Pi Foundation, Lippincott and Amanda Fisher, a curriculum developer for the Oregon Museum of Science and Industry, created four hands-on lesson plans about coastal wetlands, which they posted online and are now tweaking to meet Louisiana’s education standards. Students make clay models of wetlands and observe the impacts from simulated canal cutting, dredging, and subsidence from oil drilling operations. They also grow bean sprouts, coat some of them with oil, then modify a digital camera so they can take infrared images to compare stressed and healthy plants.

Haas says asking students to remove the infrared filter from a camera is just as important as teaching them about wetlands or plant health. “A real strength of the curriculum is the DIY part,” he said. “Instead of seeing science as something that’s canned and error-free, it gives students a task that requires real strategy and problem-solving skills. Creativity is a part of science that’s not often seen in secondary classrooms, but it’s a very important part of the process.”
This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education. Read more about blended learning.
Image above: DustDuino can help individuals with limited resources monitor PM10 and PM2.5 concentrations, indoors or outdoors.
Check out this article below by Willie Shubert (@willie) published in Scientific American! Willie and some of the people working on projects mentioned in this article were recently on OpenHour. Check it out if you missed it. It's the July 6th one on "Open Air Projects."
Dust in the Wind: How Data Visualization Can Help the Environment
By Willie Shubert | July 15, 2015
The lidar instrument aboard the CALIPSO satellite sends out pulses of light that bounce off particles in the atmosphere and back to the satellite. It distinguishes dust from other particles based on optical properties.
Credits: NASA Goddard's Scientific Visualization Studio

A recent study using NASA’s CALIPSO satellite described how wind and weather carry millions of tons of dust from the Sahara desert to the Amazon basin each year – bringing much-needed fertilizers like phosphorus to the Amazon’s depleted soils.
To bring this story to life, NASA Goddard’s Scientific Visualization team produced a video showing the path of the Saharan dust, which has been viewed half a million times. This story is notable because it relies on satellite technology and data to show how one ecosystem’s health is deeply interconnected with another ecosystem on the other side of the world.
Stunning data visualization like this can go a long way toward communicating scientific wonders to the wider world. But even more important than the technology driving the collection and analysis of this data is how the team presented its findings to the public – as a story. NASA’s CALIPSO data offers a model of how scientists, technologists and journalists can come together and make use of data to help us respond to slow-motion crises like air pollution.
Being able to see the dust blowing in the wind has broad implications. Today, one in eight people in the world dies from exposure to air pollution, which includes dust. This stunning fact, issued by the World Health Organization last March, adds up to 7 million premature deaths per year. Air pollution is now the single largest environmental risk in the world, and it occurs both indoors and outdoors.
The WHO report, which more than doubles previous estimates, is based on improved exposure measurements including data collected from satellites, sensors and weather and air flow information. The information has been cross-tabulated with demographic information to reveal, for example, that if you are a low- to middle-income person living in China, your chances of dying an air pollution-related death skyrocket.
These shocking statistics are hardly news for people living in highly polluted areas, though in many of the most severely affected regions, governments are not eager to confirm the obvious. The availability of global scale particulate matter (dust) monitoring could change this dynamic in a way that we all can see.
In addition to the volume of satellite data generated by NASA, the sensor technology behind personal pollution monitors is increasingly affordable and accessible. Projects like the Air Quality Egg, Speck and the DustDuino (with which I collaborate) are working to put ground-level data collection tools in as many hands as possible. These low-cost devices are creating opportunities for citizen science to fill coverage gaps, and testing that potential is a key part of our upcoming installation of DustDuino units in São Paulo, Brazil, later this summer. Satellite data tend to paint in broad global strokes, but it’s often local details that inform and motivate decisions.
Satellites give us a global perspective. The official monitoring infrastructure, overseen by large institutions and governments, can measure ambient air at very high resolution and model exposure over a large area. But it doesn’t see everything. The nascent field of sensor journalism helps citizen scientists and journalists fill in the gaps in monitoring networks, identifying human exposures and hot spots that are invisible to official infrastructure.
A DustDuino sensor installed in São Paulo, Brazil (Photo courtesy of Willie Shubert)

As program officer of the Earth Journalism Network, I help give training and support to teams of data scientists, developers and environmental journalists around the world to incorporate this flood of new information and boost local environmental coverage. We have taken this approach because communicating about slow-motion crises like air pollution and climate change requires a combination of experts who can make sense of data and journalists who can prioritize and contextualize it for their readers.
Leveraging technologies, skills and expertise from satellites, sensors and communities alike, journalists, scientists and technologists need to work together to translate data into the knowledge needed to address environmental crises.
Willie Shubert is the Senior Project Coordinator for Internews' Earth Journalism Network. As a coordinator of a global network of environmental journalists, Willie helps make tools that enable people to connect with each other, find material support, and amplify their local stories to global audiences. In his previous position at National Geographic Magazine, he coordinated translation for the magazine's 32 local language partners. He holds a degree in Geography from Humboldt State University with concentrations in cartography, environmental economics, and Chinese Studies. Outside of work, he devotes his time to the development of a free school dedicated to community building through education and to collaborative mapping and audio projects. Follow on Twitter @WillieShubert
Follow related tags:
air-quality blog dustduino dust
More and more we're able to capture the data around us and send it to the Internet. But do we have control of our data? What if a corporation hasn't made a widget for what we want to capture? The goal of the Open Pipe Kit project is to democratize the Internet of Things.
The current Open Pipe Kit approach optimizes for the following:
There are three major parts of the Open Pipe Kit project:
1) The Open Pipe Kit Bakery is a form-based user interface application that generates a pipe running on a USB thumb drive. This is our proof-of-concept attempt at lowering the barrier to entry for non-programmers. The web app knows what sensor and database drivers exist by pulling data from the Open Pipe Kit packages site. Submit your own OPK package there and you'll see it included in the OPK Bakery. Dooooo it. It's ok if things break.
2) The Open Pipe Kit Developer Standards describe how command line interfaces can be used in a modular way to pull data from sensors and push that data to databases. I've created an "experimental" category there which is starting to look kind of safe, but there is still more discussion to be had around how it plays with things like the Bakery, which is just a prototype after all.
3) The Pirateship disk image for Raspberry Pi, which, when running, looks for a file on a USB thumb drive named autorun.sh and launches it. This simplifies the Raspberry Pi experience into something a bit more like the Arduino experience, but with all the power of Linux. The Pirateship disk image also includes the pirateship command line interface, which has awesome little gems for connecting to WiFi networks. Add the command pirateship adapter <wifi network name> WPA <wifi password> to a file named autorunonce.sh on your USB drive and your Raspberry Pi will connect to your WiFi network.
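As a rough sketch of the modular CLI pattern the Developer Standards describe, a pipe is simply a sensor CLI whose standard output feeds a database CLI's standard input. The two commands below are hypothetical stand-ins, not real OPK packages:

```shell
#!/bin/sh
# Hypothetical sketch of an OPK-style pipe. Neither function below is a
# real Open Pipe Kit package; they stand in for a sensor driver CLI and
# a database driver CLI respectively.

read_sensor() {
  # A real sensor CLI would query hardware; this stand-in prints one reading.
  echo "temperature,21.7"
}

push_to_db() {
  # A real database CLI might POST readings to a remote store; this
  # stand-in appends each line of stdin to a local log file.
  cat >> readings.log
}

# The "pipe": pull a reading and push it downstream.
read_sensor | push_to_db
```

Because each side is an ordinary command line program, either half can be swapped out without touching the other, which is the modularity the standards aim for.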
There are two questions; if the answer to both is yes, then we're onto something.
1) The first question is "Do the OPK Standards make developers happy?" A group of about a dozen of us have talked about this a lot at the OPK Hangout calls on Thursday nights for the past six months, and we seem pretty happy with this approach. We could use more voices, but most importantly we could use you trying to make command line interfaces for sensors and sharing them back, along with your experiences.
2) The second question is "Does the Open Pipe Kit bakery lower the barrier to collecting data with sensors?" The Bakery is the fourth iteration of our experiments with lowering the barrier to entry. While we have validated some aspects of the OPK Bakery with past prototypes, this thing is far from well tested.
I want pulling data from sensors and pushing it somewhere to be a solved problem. I want it to be boring like hammers and nails. I want hardware for doing this on the shelves of hardware stores. I want us all to have the ability to collect and control data in the Internet of Things.
Why? Because I think we can make the world a better place when we have a better understanding of our environment. Particularly in our ability to affect the productive capacity of small scale agriculture. I believe an increase in that productive capacity increases the resilience of our communities and their ability to forge their own future that would allow us to reject endless war and an exploitive financial system.
-- R.J. Steinert
Follow related tags:
kit blog monitoring environmental
Girl Scout Troop 5399 is going on a journey to learn about air. All aspects of air. To learn more, we decided to deploy a dust particle sensor, the Shinyei PPD42NS, in downtown Brooklyn. All of the code, data, and the report can be found here on GitHub.
To measure air quality in Downtown Brooklyn, we decided to take readings throughout the MetroTech area using the Shinyei dust particle sensor. The scouts formed two groups, and each group chose four different locations. At each location, the scouts documented where they were and how long they stayed, as well as other miscellaneous information about the area (e.g., whether it was a park, whether there were cars or people smoking). They recorded data for five minutes at each location before moving on to the next one.
Here is a map of the eight locations visited between the two groups.
The Shinyei is a low-cost dust particle sensor that is being used in many air quality projects, such as DustDuino. For this experiment, we used an Arduino and the Adafruit Data Logging shield, which allows us to write to an SD card. The device was made mobile by a battery pack.
The code is the standard code, which can be found in many places. One issue we had was that the real-time clock did not properly write to the data file. I'm unsure what happened and didn't have time beforehand to debug.
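For reference, here is a minimal Python sketch of the conversion the widely circulated Shinyei example code performs: the sketch counts how long the sensor's output pin is pulled low over a sampling window (the "low pulse occupancy") and maps that ratio to a particle count with an empirical cubic fit. The constants come from the commonly shared Arduino example; treat them as approximate rather than calibrated values.

```python
def ppd42_concentration(low_pulse_us, sample_ms=30000):
    """Convert low-pulse occupancy (microseconds the pin was low during the
    sample window) to particles per 0.01 cubic foot, using the empirical
    cubic fit from the commonly circulated Shinyei PPD42 example code."""
    ratio = low_pulse_us / (sample_ms * 10.0)  # occupancy as a percentage
    return 1.1 * ratio**3 - 3.8 * ratio**2 + 520 * ratio + 0.62

# Perfectly clean air sits at the curve's intercept; dusty air reads higher.
clean = ppd42_concentration(0)
dusty = ppd42_concentration(50000)  # 50 ms of low time in a 30 s window
```

The Arduino side only has to accumulate the low-pulse time with pulseIn-style timing and log the raw number; the conversion can happen later on a computer, which keeps the logged data closer to what the sensor actually measured.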
The plots below were generated from the collected data using a Python script (found in the GitHub repo). In the first plot, generated from Group 1's data, there are peaks around 10:40am which correspond to collection point number 4. This site is a tunnel under Two MetroTech in which there were several trucks unloading material. The first 30-40 minute collection period covered collection points 1-3, where most of the environment was open and park-like.
The second plot, generated from Group 2's data, also shows a tremendous spike around 10:40am, when the group was taking measurements at location 4, the intersection of Flatbush Avenue and Tillary Street. The period from 10:25-10:35 also shows high particle concentrations; those readings were taken at location 3, adjacent to a construction site, while location 2 was inside the MetroTech commons.
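As a rough illustration of the kind of analysis behind those plots (the column names and values here are hypothetical; the actual log format is in the GitHub repo), picking out the peak reading from a logged CSV takes only a few lines:

```python
import csv
from io import StringIO

# Hypothetical log format: a timestamp and a particle concentration reading.
sample_log = """time,concentration
10:25,410.2
10:35,395.7
10:40,1220.9
10:45,380.1
"""

def peak_reading(csv_text):
    """Return the (time, concentration) pair with the highest reading."""
    rows = list(csv.DictReader(StringIO(csv_text)))
    best = max(rows, key=lambda r: float(r["concentration"]))
    return best["time"], float(best["concentration"])

print(peak_reading(sample_log))  # the 10:40 spike stands out
```

Cross-referencing peaks like this against the scouts' location notes is what ties a spike in the plot back to a physical place, such as the truck tunnel or the Flatbush/Tillary intersection.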
The group would like to explore other locations, such as rivers, other intersections, and subway stations, to compare results. The scouts also noted that they should have documented more of the environment while they were taking readings.
Follow related tags:
air-quality blog dust barnstar:basic
Since the very beginning of Public Lab, we've been aware that we're experimenting with new modes of production, and that the means of publication and communication we employ are keys to success. We started with a range of different "types" of content on the first Public Lab site, including:
Some of these have been merged -- Places and Tools are now just special wiki pages. Reports were merged into Research notes early on, as they were not well differentiated.
I've been thinking about, and discussing with other Public Labbers, a series of related challenges we face in the Research Note format, and wanted to talk through some of them here in my first post for the newly relaunched Public Lab Blog.
One metaphor introduced by Seymour Papert (author of the 1980 book Mindstorms) in the context of educational technologies is the phrase "low floor, high ceiling" -- where a medium with a "low floor" assures a low barrier to entry, but the "high ceiling" simultaneously does not restrict a creator's ability to create complex, powerful works. Mitchel Resnick of the Lifelong Kindergarten group at MIT also argues for "wide walls" in his 2005 paper with Brian Silverman, "Some reflections on designing construction kits for kids" -- which is to say, accommodating a diversity of types of work.
First, let's discuss some of the challenges we've faced with the Research Note format:
Though we've worked hard to make it easy to post a research note, and to lower the technological and cultural barriers to doing so, relatively few of our community of thousands post them. In fact, only around 500 have been posted over five years -- and the rate of publication shown on our stats page is actually down a bit since one year ago.
Even the name "Research notes" -- so carefully chosen both to denote informality and to encourage and recognize anyone's contributions as "research" -- has been cited as intimidating: "Is what I'm doing really research?"
Even our most active members can go weeks or months without posting. Why is this? Is it a problem? Some are busy, sure, but others are actively working on Public Lab projects and simply haven't found the time to publish. Have we built up the idea of research notes too much, such that people feel they must be long and carefully crafted? Some folks may "save up" work until they think it's "ready" -- saving time by not pausing their process too often, and waiting until they have something more substantial to share, or they're more assured of the outcome. See, for example, this sampling of four fairly active posters, over the last 52 weeks (the shortest bars indicate a single post):
The research note posting interface has been carefully crafted to emphasize simplicity and clarity. But it's definitely not designed for longer-form work. Some on the organizers list and on Github have argued persuasively for a richer, more powerful editor that is simultaneously easier to use without knowing Markdown, the simple formatting system we use.
Many of the above issues seem to push us in different directions. What I'd like to explore is the possibility of a shorter research note format, perhaps as an alternative in parallel with a longer version.
But first, how do we know what exactly is needed?
I just want to take a step back here. We've often discussed how to make experiences richer and deeper, because we see the amazing work of our most involved members -- long, articulate posts by @cfastie, @hagitkeysar, and so many others. This is of course a good thing.
But what about everyone who didn't post? I'd like to focus on that hard-to-measure group who we didn't manage to entice into posting something. Basically -- selection bias: we don't have feedback or input from those who aren't participating. If participation were a pyramid, we're only measuring the top, most involved, and I'd like to look at the base -- the lurkers, the observers, and those who we could do better at engaging.
I believe there's a great deal of untapped potential there! Even if we count only the 5,000-8,000 people subscribed to our various websites and lists, that's still over 10x the number of people who've ever posted a research note. And look at the numbers for how many people have posted at least twice, three times, and as many as eight times:
I'd also like to think more about how to better understand that group -- those who aren't posting. I want to think about how to structure a study or survey, for example, that can help us address selection bias and inform the design of our site in a more balanced way. Farm Hack, for example, ran an in-person user study with passers-by (non-members) at a national organic farming event.
Achieving longer form through serial posting of smaller pieces over time.
In thinking about how to reach people who are not yet posting, the idea of shorter, more regular posting is appealing. Rather than "all at once" posts, authors could share (as an example) just their question or background story in one post, their proposed experiment in another, their field test itself in another, and analysis and closing thoughts in a fourth post.
To be clear, what I'm proposing is not cutting down on content, just breaking it up into multiple pieces, and scaffolding that "shorter posts more often" pattern through our editor, which could remain simpler as a result. In fact, as each individual post would need less formatting, the basic posting form could just be plain, unformatted text, as a default. If people could spend more time inviting others into their work, and less time formatting their posts, that seems like a good thing to me.
One experiment we did which I think we may be drawing the wrong conclusions about is the Question and Answer function, which was a limited experiment to invite people to make short posts where they ask a question. Although the questions posted are a bit untidy and sometimes oddly formatted, if you look at the authorship of these posts, they're ALL first-time posters! As a format, we hadn't really thought of it as a success, but by this measure it certainly is.
In summary, although I definitely want to spend time "raising the ceiling," I think we should "lower the floor" as well, and set a goal for ourselves to increase the number of regular posters (i.e. at least one post per month) tenfold in the next year.
Follow related tags:
collaboration community website research