New technology makes it possible to gather data about coronavirus in near real-time.
Rapid data generation raises risks around the accuracy and reliability of that data.
Decision-makers will maintain trust by adapting policies as new data emerges.
The world has not known, in living memory, a pandemic on the scale of what we are experiencing with COVID-19. Nor has the world ever had access to so much data and analysis, much of it generated rapidly and disseminated freely, on the pathogen itself, the SARS-CoV-2 virus. Navigating a path out of this crisis will require effective integration of this data into decision-making.
This is not an easy task at the best of times. It is even harder now because the virus causing the pandemic, SARS-CoV-2, is new to humans, having crossed the species barrier from bats. As little as four months ago, we could not answer even the most basic questions about the virus and the disease it causes – how transmissible it is, how virulent (damaging) the disease is to our bodies, and whether we can mount an effective immune response. We are learning as we go.
We’ve been here before, most recently with the coronaviruses that caused Middle East Respiratory Syndrome (MERS) in 2012 and, before that, SARS in 2002. When those diseases were first observed in humans, we knew just as little about them as we knew about COVID-19 at the outset. The difference between then and now is how fast we are learning the basic biology of the virus and the disease it causes, and how we are navigating the uncertainties along the way.
New technologies for rapid data generation and dissemination are making it possible to gather and analyze data about the virus in near real-time. Never before have we seen this much data generated and shared so quickly, sometimes at the cost of more uncertainty than we would like. But the speed and scale with which this virus spreads and evolves means that never before have so many needed this data so urgently.