Public Lab Research note


Slowing Sensor Journalism

by michalnon | February 24, 2016 03:30 | #12729

The front pages of the world’s major publications are increasingly filled with data visualizations. Graphs, charts, and maps with eye-catching displays and pleasing colors are popping up beside in-depth reports on education, medicine, and natural resources. In fact, organizations like the New York Times and the Washington Post are even hiring their own teams of data specialists assigned solely to creating these additions. While some of these visualizations rely on data provided by an external source, more and more journalists are collecting the information themselves. This is the emerging field of sensor journalism. Defined by scholar Lily Bui as the practice of collecting data from sensors and then using that data to tell a story, the new trend puts the tools, not just the pen, in the hands of journalists.

In a presentation to a Data Visualization class, Bui provided several examples of successful feats in sensor journalism. The Associated Press’ report on air quality at the Beijing Olympics, for example, caught the attention of the world. Using their own sensors to measure particulate matter, the AP journalists produced a data visualization that told more than words ever could, countering the air quality estimates provided by the government, a move that would have been impossible without data of their own. Another group, at the smaller Sun Sentinel in Florida, came up with the clever scheme of collecting data from toll-booth sensors, proving that a high percentage of police officers were speeding in non-emergency situations.

Despite the successes of some sensor journalism efforts, several complications are stopping the practice from truly taking off. Sensors must be calibrated and authenticated for their data to be counted on, and that is a status that is difficult to achieve or to police. Often, deliberate ignorance or intentionally skewed data is not to blame; human error can occur even with the most advanced (and expensive) tools. Most newsrooms are reluctant to commit tens of thousands of dollars to a process that is new and unfamiliar, leaving many journalists untrained and stuck with less-than-advanced instruments.
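To make the calibration hurdle concrete, the sketch below shows roughly what the simplest possible fix looks like: a two-point calibration in Python, where readings taken in two reference solutions with known values anchor the line that turns raw sensor output into an estimated conductivity. The function, the reference values, and the raw readings here are hypothetical illustrations, not anything drawn from the examples above.

```python
# A toy two-point calibration (purely illustrative, not from the article):
# two reference solutions with known values anchor a straight line that
# converts a raw sensor reading into an estimated conductivity.
def calibrate(raw_low: float, true_low: float,
              raw_high: float, true_high: float):
    """Return a function mapping raw readings to calibrated values."""
    slope = (true_high - true_low) / (raw_high - raw_low)
    return lambda raw: true_low + slope * (raw - raw_low)

# Hypothetical numbers: raw readings of 120 and 980 taken in distilled
# water and in a 1413 microsiemens/cm standard solution.
to_us_per_cm = calibrate(raw_low=120, true_low=0.0,
                         raw_high=980, true_high=1413.0)
reading = to_us_per_cm(560)  # estimate for a hypothetical field sample
print(f"~{reading:.0f} microsiemens/cm")
```

Even this bare-bones scheme assumes stable reference solutions and a roughly linear sensor response, and both assumptions take standards, equipment, and training to verify, which is exactly what under-resourced newsrooms tend to lack.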

A simple in-class experiment detecting the presence of salt in water samples showed how many factors can derail the findings of even a basic test. Makeshift sensors produced a ringing sound when placed in cups of water, rising in pitch with conductivity. At first glance, the sensors provided exciting information. Dasani water, for example, produced a much lower pitch than a sample from the Charles River. The assumption was that the Charles River water made a higher sound because it was dirtier, the Dasani being more pure. It was soon discovered, though, that Charles River samples produced a nearly identical sound to Evian water, an expensive bottled brand rich in minerals. The earlier conclusion that contamination equated to conductivity seemed much less obvious with this added information. Adding to the confusion were variations among instruments and water samples that should, in theory, have produced the same results. Some tests suggested that the instrument itself influenced the conductivity reading as it measured. Other trials found that the pitch changed with how much water was in the sample cup, a factor that should have been irrelevant by design.

On a more basic level, it was difficult to produce a quantitative read-out of frequency going only by ear; where each sample fell on the “pitch scale” was left to individual interpretation, which is a dangerous thing to rely on. The next version of the tool will likely include a numerical readout, making the data collection far more valuable. The class caught some of the red flags in a simple experiment, but there were likely more mistakes and confounding factors that untrained minds did not notice or think to look for. These instances happened with cheap equipment and untrained testers, but even expensive sensors and trained individuals can make mistakes that invalidate a dataset’s conclusions. Human error is always present, and it may be even more rampant when tools are handed to people who do not know how to use them.
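As a rough sketch of what such a numerical readout might look like (assuming nothing about the actual class tool, and using a made-up test tone and sample rate), one could record the sensor’s tone and estimate its dominant frequency with a Fourier transform instead of judging the pitch by ear:

```python
# A minimal sketch of a numerical readout (not the class's actual tool):
# estimate the dominant tone frequency of the sensor's ringing sound with
# an FFT, instead of judging the pitch by ear.
import numpy as np

def dominant_frequency(samples: np.ndarray, sample_rate: int) -> float:
    """Return the strongest frequency (in Hz) present in a mono signal."""
    samples = samples - samples.mean()        # drop the DC offset
    spectrum = np.abs(np.fft.rfft(samples))   # magnitude spectrum
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return float(freqs[np.argmax(spectrum)])

# Stand-in for a real recording: one second of a synthetic 1.2 kHz tone
# plus noise, at an assumed 44.1 kHz sample rate.
sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
tone = np.sin(2 * np.pi * 1200 * t) + 0.1 * np.random.randn(t.size)
print(f"Estimated pitch: {dominant_frequency(tone, sample_rate):.0f} Hz")
```

A number like “1200 Hz” still needs a calibration before it says anything about conductivity, but it at least takes the listener’s interpretation out of the measurement.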

Patrick Herron, the Deputy Director of the Mystic River Watershed Association, presented shocking images of backed-up sewage drains, highly questionable cleanup methods, and families swimming in unsafe water to a class of journalism students. These were coupled with graphs illustrating extremely high levels of toxins detected in the water supply. The short series of images and text presented a story that was immediately compelling. To a room full of aspiring journalists, it seemed obvious that a large publication would have picked up on this reporting. But Herron explained that his organization did not have the resources to market its findings to papers like the Boston Globe, and that it felt underprepared when it came to media relations. In cases like these, a sensor journalist could take two courses of action. Acting as a “one-man band,” they might be trained to collect and interpret water samples themselves, backing up the narrative with statistics. A sensor journalist would know exactly how to approach civilians or experts and conduct an effective interview, and might also know how to make a graph like Herron’s that would strengthen the story many times over. A more complex outcome, though, could involve a better appreciation for the work that Herron and his team had already done. A trained sensor journalist might be more aware of normal water contamination levels and could compare published scientific studies to samples of their own. This, in turn, could motivate them to reach out to scientific groups. A collaborative relationship might quickly lead to more publicity, and more accuracy, in widely circulated explanations of very complex topics.

The challenge that aspiring sensor journalists present is parallel to that of the “citizen journalist.” While new minds and methods can bring valuable perspective and ideas, untrained actors may promote the spread of misinformation. "Journalist" is not a protected title and there is no reason to expect that it will become one anytime soon. Journalists could attend countless trainings to educate themselves about calibration methods or ethical models, but there is no system in place to reliably distinguish between these folks and someone with no experience or education. It is unlikely that a reader will delve into the professional background of a reporter, making it easy to shape sensor journalism data in a shocking or “clickable” manner. There are virtually no reliable checks in place to keep a story that seems compelling from being rapidly shared through social media. The cultural attitude surrounding science reporting among professional journalists must also be addressed – it is not just the collection of data that holds the possibility of inaccuracy.

Journalists are becoming more adept at analyzing the meaning and implications of a new study, but significant and detrimental errors still happen in the newsroom. Last May, science journalist John Bohannon proved just how easy it is to scam the system by presenting official-looking data suggesting that eating chocolate could help people lose weight. According to NPR, the study was in fact real, and Bohannon really does have a PhD. However, the experiment contained too few subjects, lacked peer review, and did not account for random factors that could have impacted the outcome. Perhaps most astonishing, few (if any) major publications asked an outside expert to review the study’s findings before hitting publish. Bohannon railed against journalists on the science beat, proclaiming in an NPR interview: "For far too long, the people who cover this beat have treated it like gossip, echoing whatever they find in press releases. Hopefully, our little experiment will make reporters and readers alike more skeptical." It is this tendency to overlook important details, or to fail to acknowledge factors that may have seriously impacted a study’s findings, that makes the rise of sensor journalism somewhat concerning. If reporters are unable to spot these problems in other people’s studies, it is foolish to suggest they would be any better at noticing flaws in their own self-designed experiments.

Ultimately, the excitement surrounding the emerging field of sensor journalism needs to be slowed. Success stories at the AP or the New York Times are exciting, but they are far too rare, and basic flaws in the system are too extreme to ignore. The cultural attitude toward science reporting still treats it as “gossip,” as Bohannon explained. A shortage of resources and an excess of skepticism in newsrooms lead to limited spending on supplies, meaning sensors are often cheap and unreliable. Even expensive instruments may go uncalibrated or be used incorrectly by untrained journalists, with the errors going undetected by the reader of the report. Sensor journalism is an intriguing and important concept, and changes in reporting philosophy, along with better investment in the products and the process, could bring the practice from immaturity to establishment. Sensor journalists should keep forging the way in the meantime, but readers and editors alike should take any piece that relies on self-collected data with a grain of salt.

