By Sir Peter Gluckman, New Zealand’s Chief Science Advisor
This is the first in a series of articles about science advice published in The Guardian.
No one doubts that the challenges citizens and their governments face require decisions to be informed by objective knowledge. Public and media attention tends to focus on “grand challenges”, such as climate change and food security, or on urgent issues such as pandemics or natural disasters. But in everyday policymaking, on matters ranging from transport to social welfare, governments face complex trade-offs and science has a role to play.
In recent years, the need for a stronger nexus between the worlds of science and policy has been identified, but there is still much to learn about how to translate effectively between these two worlds. The recent controversy over the role of the chief science adviser to the president of the European Commission is a case in point. How much of this debate is about science and how much about values-based advocacy and the realities of policymaking on complex or contentious issues?
These are the sorts of questions that we will be asking at this week’s conference on science advice to governments in Auckland. One important distinction is required from the outset: science advice for the purpose of enhancing public policy is not the same as advice on the management of the public science system itself. Yet it is inevitable that these get intertwined, particularly by those in the science community. So while advice to governments about how best to invest in R&D or structure a science system is critical, this is quite distinct from science advice for policy, in which there is often high public interest in the questions for which governments need evidence.
That public interest may present itself as advocacy for a particular course of action, but with little consideration of the complex context of decision-making. In such situations, science itself can often be coopted and indeed misused as a proxy for the real debate that should be taking place – one that hinges on social values, not science.
For instance, whether deliberately or not, we often find ourselves arguing about the science of climate change rather than discussing the issues of intergenerational economic equity and long-term sustainability that fuel the climate debate. Similarly, it seems easier (or strategic for some?) to continue to debate the safety of GM foods, rather than to consider the values that underlie the GM controversy – that is, what we consider “natural” and perceptions of access (or lack thereof) to the benefits of new knowledge.
Science inevitably gets drawn into such matters, not only because scientific processes are the only way we have to gain relatively reliable knowledge about the world around and within us, but also because increasingly refined scientific tools and methods are better able to address the complex questions that preoccupy citizens and their governments.
Yet science, by definition, is never complete. There are always uncertainties, and a gap between what is known and the need for policymakers to act decisively in the absence of complete knowledge. Some level of risk is involved in all decision-making, and without it, we are left with inertia. What is important is to be clear about where science-based discussions stop and values-based discussions start.
We need to identify the purpose and style of science advice. Policymaking needs to take account of considerations ranging from fiscal priorities to diplomatic objectives and, particularly, the electoral contract between governments and citizens. These domains are largely values-laden and values-based. The business of government is largely about managing the trade-offs between multiple options. The role of science advice is to provide interpretation and translation of what is known, and what is not known, and to communicate across the cultural divide between science and policy.
How should that brokering be done and what principles should underpin it? Increasingly governments are appointing science advisers and science advisory panels, or turning to national academies for inputs into the policy process. These different models are not mutually exclusive and the diversity of approaches reflects differing cultures of public reason across jurisdictions, and the different types of issues on which advice is sought.
In my view, a vital point has been lost in recent debates over the relative merits of individual science advisers, standing committees or academies: the very different nature of formal and informal advice.
For instance, where an issue is technically complex and protracted, governments have long turned to advisory panels or academies for structured advice. However, where an issue is urgent, such as in a crisis, academy and committee structures are often impractical, and it is sensible to turn to an adviser who can reach out to the science community and assemble the relevant experts in a less formalised way.
Informal methods also play a role in the everyday business of policymaking. For academic purposes, this is often presented as a formal cycle of problem identification, data collection, analysis, option development, political decision making and monitoring. In practice, however, it is much fuzzier than that.
In my role as the first chief science adviser to the prime minister of New Zealand, I find that much of my time is spent not in preparing formal reports (though there have been a good number of these, for which I have convened panels of experts). Rather, I contribute by informally offering advice at the earliest (and most opportune) stage, when officials or ministers are exploring whether to formally work up a policy, or in the multiple incremental decisions that are made daily. These actions do not generate headlines but are central to the operation of government.
It is often through rapid, opportune and informal conversations that a science adviser can nudge decision-makers towards a scientifically robust path. I suspect it is this informal component of the science advisory system that is the least analysed. But, provided the adviser’s independence is protected from political influence, it may well be among the most important.
The currency of science advice is also trust. Robust knowledge is the product of a range of internationally accepted research methodologies, but the production of the trust necessary to mobilise that knowledge has less obvious algorithms behind it. Indeed, ours has been described as a “post-trust” society, in which social media can be used by advocates of a particular course of action to undermine trust in an individual, panel or regulator, regardless of their scientific merit or the wider checks and balances in place.
The recent debate surrounding Anne Glover’s role as chief scientific adviser to the president of the European Commission is interesting, because it demonstrates how easily trust can be granted or withdrawn depending on context. The role was called into question by a group of NGOs who, unhappy with the notion that science might show GMOs to be anything other than unsafe, took aim at the adviser as a result. Yet these same groups might well show strong support for the same adviser when she carries a message about the need to take action on climate change. Here, values and biases compete with knowledge, and it is essential that the role of the adviser be understood and protected as that of a knowledge broker.
Over the past two decades, we have experienced a revolution in the relationship between science and society. Citizens are much more aware of the ways in which science permeates our everyday lives and there is an unprecedented and flourishing public engagement with science – from deliberative public input into the setting of national science priorities, to accessing the results of publicly funded research. There is also greater public awareness of how science can be misused, and scientific uncertainty exploited. This has contributed to an atmosphere of public mistrust that is sometimes justified, and sometimes not. The challenge is to understand the difference, which is a long-term project that must enlist educators, the media and the science community itself.
Science informs but does not define policy decisions. Other considerations nearly always come into play. Some think that the failure of science to dominate is a failure of the science advisory system. More often, I suspect it reflects a misunderstanding of what science can do in policy contexts. Science is a knowledge base on which these other dimensions of policy decisions interact and are overlaid. The challenge is to ensure that the knowledge base is not corrupted in the process, and to assist governments in making the best use of it.
At the end of August, senior practitioners of science advice, academics and communication experts from 45 countries will converge on Auckland, New Zealand for two days of intensive discussions. Delegates will consider the potential for strengthening science advice to governments in a variety of challenging policy contexts. Over the coming days, some of the voices at that conference will share their thoughts here. I look forward to the growing conversation.