Pandemic politics highlight how predictions need to be transparent and humble to invite insight, not blame.
Andrea Saltelli, Gabriele Bammer, Isabelle Bruno, Erica Charters, Monica Di Fiore, Emmanuel Didier, Wendy Nelson Espeland, John Kay, Samuele Lo Piano, Deborah Mayo, Roger Pielke Jr, Tommaso Portaluri, Theodore M. Porter, Arnald Puy, Ismael Rafols, Jerome R. Ravetz, Erik Reinert, Daniel Sarewitz, Philip B. Stark, Andrew Stirling, Jeroen van der Sluijs & Paolo Vineis
The COVID-19 pandemic illustrates perfectly how the operation of science changes when questions of urgency, stakes, values and uncertainty collide — in the ‘post-normal’ regime.
Well before the coronavirus pandemic, statisticians were debating how to prevent malpractice such as p-hacking, particularly when it could influence policy1. Now, computer modelling is in the limelight, with politicians presenting their policies as dictated by ‘science’2. Yet there is no substantial aspect of this pandemic for which any researcher can currently provide precise, reliable numbers. Known unknowns include the prevalence of the virus in populations, and its fatality and reproduction rates. There are few estimates of the number of asymptomatic infections, and they are highly variable. We know even less about the seasonality of infections and how immunity works, not to mention the impact of social-distancing interventions in diverse, complex societies.
Mathematical models produce highly uncertain numbers that predict future infections, hospitalizations and deaths under various scenarios. Rather than using models to inform their understanding, political rivals often brandish them to support predetermined agendas. To make sure predictions do not become adjuncts to a political cause, modellers, decision makers and citizens need to establish new social norms. Modellers must not be permitted to project more certainty than their models deserve; and politicians must not be allowed to offload accountability to models of their choosing2,3.
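How uncertain inputs translate into wide-ranging predictions can be seen in even the simplest epidemic model. The sketch below is purely illustrative: a toy discrete-time SIR model with transmission and recovery rates drawn from assumed ranges (not estimates from any real study or from any group's actual model), showing how modest input uncertainty spreads the predicted epidemic peak across a wide interval.

```python
import random

def sir_peak_infected(beta, gamma, population=1_000_000, i0=10, days=365):
    """Toy discrete-time SIR model; returns the peak number infected."""
    s, i, r = population - i0, i0, 0
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / population  # new infections this day
        new_rec = gamma * i                  # new recoveries this day
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        peak = max(peak, i)
    return peak

random.seed(0)
# Illustrative uncertainty ranges (assumptions for this sketch only):
# transmission rate beta and recovery rate gamma drawn uniformly.
peaks = sorted(
    sir_peak_infected(beta=random.uniform(0.15, 0.35),
                      gamma=random.uniform(0.07, 0.14))
    for _ in range(1000)
)
print(f"5th-95th percentile of peak infections: "
      f"{peaks[49]:,.0f} to {peaks[949]:,.0f}")
```

Run honestly, such an exercise yields an interval, not a single headline number; presenting only one scenario from inside that interval is exactly the kind of false certainty the text warns against.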
This is important because, when used appropriately, models serve society extremely well: perhaps the best known are those used in weather forecasting. These models have been honed by testing millions of forecasts against reality. So, too, have ways to communicate results to diverse users, from the Digital Marine Weather Dissemination System for ocean-going vessels to the hourly forecasts accumulated by weather.com. Picnickers, airline executives and fishers alike understand both that the modelling outputs are fundamentally uncertain, and how to factor the predictions into decisions.
Here we present a manifesto for best practices for responsible mathematical modelling. Many groups before us have described the best ways to apply modelling insights to policies, including for diseases4 (see also Supplementary information). We distil five simple principles to help society demand the quality it needs from modelling.
Mind the assumptions
Assess uncertainty and sensitivity. Models are often imported from other applications, ignoring how assumptions that are reasonable in one situation can become nonsensical in another. Models that work for civil nuclear risk might not adequately assess seismic risk. Another lapse occurs when models require input values for which there is no reliable information. For example, a model used in the United Kingdom to guide transport policy depends on a guess at how many passengers each car will carry three decades from now5.
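A basic form of the sensitivity assessment this principle calls for is to vary each input across its plausible range and see how much the output moves. The sketch below uses a hypothetical toy appraisal model; its formula, inputs and ranges are invented for illustration and bear no relation to the actual UK transport model cited above.

```python
# One-at-a-time sensitivity sketch for a hypothetical appraisal model.
# Model, inputs and ranges are illustrative assumptions only.

def benefit(car_occupancy, demand_growth, value_of_time):
    """Toy appraisal: annual time-savings benefit of a road scheme."""
    trips = 10_000 * (1 + demand_growth) ** 30   # trips/day, 30 years out
    return trips * car_occupancy * value_of_time * 0.1  # hours saved/trip

nominal = {"car_occupancy": 1.6, "demand_growth": 0.01, "value_of_time": 15.0}
ranges = {
    "car_occupancy": (1.1, 2.0),   # the 'guess' three decades out
    "demand_growth": (0.0, 0.02),
    "value_of_time": (12.0, 18.0),
}

# Vary one input at a time, holding the others at nominal values,
# and record how far the output swings.
for name, (lo, hi) in ranges.items():
    span = abs(benefit(**{**nominal, name: hi})
               - benefit(**{**nominal, name: lo}))
    print(f"{name:15s} output span: {span:12,.0f}")
```

Even this crude one-at-a-time scan reveals which guessed inputs dominate the result; global methods that vary all inputs together do the job more rigorously, but the point stands either way: if one unknowable input drives the answer, the model's precision is illusory.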