Embracing uncertainty in science

Photo courtesy of Mark Menzies

By Heather Douglas, Waterloo Chair in Science and Society, University of Waterloo

Heather contributed this article as part of our call for blogposts on conference themes. Submit your blogpost: http://bit.ly/1yyo1P2

Here at the Science Advice to Governments Conference, the reality of uncertainty in science, and in science advising, is a recurring theme. That science does not provide certainty is readily acknowledged, but with some chagrin. The chagrin arises both from the desire to give pure science advice (just the settled facts) without political entanglements and from the desire of policy-makers and the public for permanently stable facts on which to build decisions.

But uncertainty in science is unavoidable. Rather than see this as an unfortunate state of affairs, just another complexity that is difficult to communicate or another reason one’s advice gets ignored or contested, we need to embrace the upside of uncertainty.

Most importantly, uncertainty in science is one of the reasons science is the robust and reliable source of knowledge that it is. It is because science is uncertain that any particular scientific claim or theory can be challenged by new evidence. And it is because new evidence can challenge any particular claim that a) science is an exciting thing to do and b) science is a reliable source of knowledge.

Science is reliable (generally) because scientists are out there looking for new evidence, testing theories in new ways, challenging each other to do better—and they are doing all this because science is not certain. So, we should embrace the uncertainty in science.  It is a source of science’s strength, not a weakness.

But the uncertainty in science also has implications that should be addressed. It means that science cannot and should not be value-free.

It is the problem of assessing evidential sufficiency that makes value-free science thoroughly elusive. In the scientific mode, we all want to have beliefs based on evidence. But how much evidence is enough? When is the evidence sufficient for us to accept a claim?

This is a core question for the practice of science advice. How to answer it? Try as I might, I have thus far found only one principled solution: weigh the risks of getting it wrong (accepting a false claim or rejecting a true one) and the amount of uncertainty you have, and make the best choice you can. But note that this is necessarily a value-laden choice. We have to decide that a particular false positive/false negative tradeoff is acceptable, and doing so depends on what we care about, what we value.
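To make the value-ladenness concrete, here is a minimal sketch in Python of the standard decision-theoretic way to cash out that tradeoff (the costs and scenarios below are hypothetical illustrations, not from the original argument): the probability a claim must reach before we accept it depends entirely on how badly we cost the two kinds of error.

```python
# A minimal sketch with hypothetical costs.
# Accepting a claim risks a false positive (cost c_fp if the claim is false);
# rejecting it risks a false negative (cost c_fn if the claim is true).
# With p = probability the claim is true given the evidence:
#   expected cost of accepting = (1 - p) * c_fp
#   expected cost of rejecting = p * c_fn
# Accepting minimizes expected cost when p > c_fp / (c_fp + c_fn).

def acceptance_threshold(cost_fp: float, cost_fn: float) -> float:
    """Probability a claim must reach before acceptance minimizes expected cost."""
    return cost_fp / (cost_fp + cost_fn)

# If wrongly accepting is ten times worse than wrongly rejecting
# (say, approving an unsafe product), we demand much stronger evidence:
print(acceptance_threshold(10.0, 1.0))  # ~0.91

# If wrongly rejecting is the costlier error (say, dismissing a real hazard),
# much weaker evidence suffices:
print(acceptance_threshold(1.0, 10.0))  # ~0.09
```

The evidence fixes the probability; the threshold it must clear comes from the costs we assign to the two errors, and those costs are where values enter.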

When giving advice, one can’t just state the facts. One has to decide which claims are sufficiently established to count as facts. One has to decide which pieces of evidence are sufficiently established to be considered reliable. One has to decide which aspects of science are sufficiently supported that they cannot be ignored and which aspects are not so important.  Lots of judgment is involved. But such judgments can be done with integrity (with proper respect for the evidence) and objectivity (which cultivates shared bases for trust).

So, the uncertainty in science opens the door to values in science. Value-laden science can still be done with integrity and objectivity. But with values in science, the need for democratic accountability in science arises.

What this means for the modalities of science advice is that there is a legitimate role for the public in science advice and there are legitimate concerns about democratic acceptability and accountability in science advice. Happily, there are lots of different ways to address these concerns. Here are some of the options:

  1. For informal advice, have a science advisor who shares the values of the politician s/he advises. The politician was elected in part (we hope) because of those values, so advice given with those values in mind will be democratically accountable through the elected politician. Sharing such values does not mean the values should replace the evidence, and politicians still should not ask for advice that merely supports a pre-determined outcome. But when assessing evidential sufficiency, the advisor and advisee can be assured that their judgments will be similar.
  2. When formal advice is given, have the science advisors be clear about the values that were key in a piece of science advice. What kinds of trade-offs were important to the advisor in deciding on evidential sufficiency? State them upfront, so that the politicians know what is going on, and can make democratically responsible decisions.
  3. When written advice is given, make the advice, and the values that shape the advice, public.
  4. Bring the public into the process, allowing input on the framing of the issues and the assessments of evidential sufficiency.

This last option can sound scary but, done well, it can generate great results: increased trust in the advice, increased legitimacy of the advice, increased public understanding of science, and increased social responsibility of science. Good research is being done on how to do public engagement well.

In short, the uncertainty in science can seem like a thorn in the side of science advising, but we should keep in mind its more positive aspects: uncertainty underpins the excitement of research and the robustness of its results, and it provides an opportunity to bring the public into more sustained engagement with science.