Aggregating public and scientific opinion

David Loughran

Many of us love a good political horse race. The Associated Press, though, recently issued new guidelines for reporting on pre-election polling that seek to deemphasize that type of election coverage. Among the AP's key recommendations is that a single poll should never be the sole subject of a news story. A poll provides a snapshot of opinion at a particular point in time and must be viewed within the broader context of an election. This recommendation comes in the wake of the 2016 election, in which many observers believe the news media reported Hillary Clinton to be a heavier favorite than a more systematic analysis of polling data and social and economic trends would have suggested.

The AP’s revised guidelines can be viewed as a cautionary tale about recency bias. Recency bias — the tendency to overweight recent evidence — affects all kinds of decision making, especially in environments like elections in which new evidence emerges daily.

At Praedicat we’re acutely aware of how recency bias can affect decision making in light of newly published scientific evidence (see, for example, my colleague Adam’s recent post on fish oil). While perhaps not quite as sensational as the latest polling results, a heavily reported scientific finding from a single article can deeply influence how individuals and organizations perceive risk. This is why Praedicat aggregates the findings of entire scientific literatures, past and present, to provide a robust measure of how scientific opinion is evolving.

A number of media outlets have done this same kind of aggregation with polling data during the last several election cycles. FiveThirtyEight (now part of ABC News) is perhaps best known for this type of polling aggregation, in which metadata such as poll outcome, polling method, sample size, and a polling organization’s historical performance relative to actual election outcomes (a measure of quality) are aggregated to produce an estimate of public opinion at a point in time and a probability forecast of the eventual election outcome. Though not itself an election forecast, here’s a link to FiveThirtyEight’s forecast of President Trump’s approval rating at the time of the upcoming midterm elections in November, based upon polling data collected since his inauguration.
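The core idea behind this kind of aggregation can be sketched in a few lines. The toy example below is an assumption-laden illustration, not FiveThirtyEight's actual method: the polls, the quality scores, and the square-root weighting scheme are all invented for the sake of the sketch.

```python
def aggregate_polls(polls):
    """Combine polls into a weighted average of candidate support.

    Each poll is (support_pct, sample_size, quality), where quality in (0, 1]
    is a hypothetical stand-in for a pollster's historical accuracy. Weight
    grows with sample size and with quality.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for support, n, quality in polls:
        # sqrt damps the influence of very large samples (diminishing returns)
        weight = quality * n ** 0.5
        weighted_sum += support * weight
        total_weight += weight
    return weighted_sum / total_weight

# Three invented polls: (support %, sample size, quality score)
polls = [(52.0, 800, 0.9), (48.0, 1200, 0.6), (50.0, 500, 0.8)]
estimate = aggregate_polls(polls)  # ~50.1 for these invented polls
```

Note how the large but lower-quality 48% poll pulls the estimate down less than its sample size alone would suggest — the quality weight offsets it.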

Praedicat’s aggregation of scientific opinion works in similar fashion. Each individual scientific article testing a hypothesis of injury attributable to some commercial activity is like an individual pre-election poll. Article metadata such as the study outcome, the study’s methods, and the journal that published the study (a measure of quality) are then aggregated to generate an estimate of scientific opinion at a point in time and a probabilistic forecast of how that opinion could evolve in the future. When scientific data are thin (as polling data often are for state and local elections), the forecast typically admits a greater range of possibilities than for hypotheses that have been studied extensively (as with national elections).
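One simple way to see why thin literatures yield wider forecasts is to treat each study as quality-weighted evidence in a Beta distribution. The sketch below is purely illustrative — the studies, quality weights, and model are invented and are not Praedicat's actual methodology — but it captures the qualitative point: the same mix of supporting and non-supporting studies produces a much wider uncertainty interval when there are only a few of them.

```python
from math import sqrt

def aggregate_studies(studies, prior=(1.0, 1.0)):
    """Aggregate binary study outcomes into (mean, 95% half-width).

    Each study is (supports: bool, quality: float in (0, 1]), where quality
    is a hypothetical score for journal standing or methodology. Each study
    adds fractional evidence to a Beta(alpha, beta) posterior over the share
    of scientific opinion supporting the hypothesis.
    """
    alpha, beta = prior
    for supports, quality in studies:
        if supports:
            alpha += quality
        else:
            beta += quality
    mean = alpha / (alpha + beta)
    var = alpha * beta / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, 1.96 * sqrt(var)  # mean and normal-approx 95% half-width

thin = [(True, 0.8), (False, 0.5)]                 # 2 invented studies
deep = [(True, 0.8)] * 12 + [(False, 0.5)] * 4     # 16 invented studies
m_thin, hw_thin = aggregate_studies(thin)
m_deep, hw_deep = aggregate_studies(deep)
# hw_thin is far larger than hw_deep: thin evidence, wide forecast.
```

As with state and local polls versus national ones, adding studies narrows the interval even when the underlying balance of evidence is similar.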

So the next time you find yourself caught up in the latest poll result, you might want to check in with a polling aggregator to understand how that poll result fits into the larger picture. And if a scientific article hits you hard one day, we recommend you check in with Praedicat’s aggregation of scientific opinion before you make any consequential decisions.

“A good pre-election poll can provide solid insight into what voters are thinking. In the heat of a campaign, that’s why they are so often intoxicating for journalists, for campaign staffers and, yes, for candidates, too,” said David Scott, AP’s deputy managing editor for operations. “But the 2016 election was a reminder that polls aren’t perfect. They’re unquestionably a piece of the story, but never the whole story.”