Evidence doesn’t come from nowhere – Reflections on ISSP’s latest panel


By Stewart Fast

Senior Research Associate, Research Director, Institute for Science, Society and Policy

[Photo: Tabaret lawn]
ISSP’s series of panels on the new federal government’s commitment to evidence-based decision-making has stressed that scientific advice is an essential basis for good policy decisions.

There is a recognition, of course, that science advice cannot determine policy. Other factors and tradeoffs enter into decision-making, and public policy undoubtedly requires politics. Yet I am interested in a different aspect touched on by speakers at the latest panel: what is needed to ensure that scientific evidence remains credible?

To ask this kind of question you first have to entertain the possibility that scientific evidence may be less than credible. For someone like me who was initially trained as a biologist, this is not immediately obvious. I was trained to trust and celebrate the scientific method as a means of generating credible, replicable, verifiable, true knowledge that is then handed over to bureaucrats to implement. I think this view is held at least implicitly by many scientists. My own “aha” moment came after reading the political ecologist Piers Blaikie in grad school. He wrote persuasively of how scientists’ reliance on the slope and gradient variables of the Universal Soil Loss Equation led to a misdiagnosis of appropriate measures to mitigate erosion in an African context where rain splash was (and was known by farmers to be) a more important factor. Eventually, the scientists got it right, but the important point is that unrecognized cultural and institutional biases limited the capacity to recognize evidence and generate accurate, credible scientific knowledge.
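(A quick gloss for readers who don’t know the equation, drawn from its standard textbook form rather than from Blaikie’s own presentation: the Universal Soil Loss Equation estimates average annual soil loss A as the product A = R × K × L × S × C × P, where R is rainfall erosivity, K is soil erodibility, L and S are the slope length and steepness factors, and C and P capture cover and conservation practices. The “slope and gradient variables” are L and S; rain-splash effects enter mainly through R, which suggests how a model focused on the slope terms could underweight what local farmers already knew mattered most.)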

In other words, evidence doesn’t come from nowhere. It is produced in a specific context, with incentives shaping the types of questions asked, the choice of what to observe, the explanatory models used, and so on. Twenty years after the “science wars,” there is a wider critical academic appreciation of the social and political contingencies of scientific knowledge creation, but how far does that extend beyond the academy? And what does that mean for science advice and evidence-based decision-making?

Each of the speakers touched on measures to ensure science advice is credible. I’d like to highlight two sets of comments in particular.

Professor Heather Douglas, ISSP Fellow and Waterloo Chair in Science and Society at the University of Waterloo, stressed two forms of public accountability in science advice. The first and most obvious is accountability to diverse social values, which can be aided by measures like ensuring diverse representation of expertise in formal science advice structures (e.g., committees, panels). The second is accountability to the evidence itself. Has all relevant evidence been considered? What evidence would change experts’ minds? In formal reports this includes identifying alternative explanations and explaining why they were rejected. While this may strike many as a tall order, I think there are examples of public reports that have some of these qualities, such as Health Canada’s public report of its preliminary findings from a study of the potential health impacts of wind turbine noise.

Dr. Nigel Cameron, ISSP’s Visiting Fulbright Research Chair in Science and Society and Founding President of the Center for Policy on Emerging Technologies, stated that scientists should not act as advocates if they wish to affect policy. He felt climate scientists were occasionally guilty of this and, later in the discussion period, cited exaggerated claims by health researchers about the benefits of stem cell research as another instance that harmed the credibility of science. These are fair critiques, but I wonder whether this type of behaviour is a by-product of funding patterns. As funding of Canadian science increasingly emphasizes meeting industry needs (e.g., NSERC’s 2020 strategic plan; the transformation of the National Research Council’s mandate), it seems likely that there will be a greater push towards advertising and advocating the potential commercial and societal benefits of research proposals and new research areas.

These two criteria, public accountability and refraining from public advocacy, make an interesting pair. They suggest that scientists, on the one hand, have to engage more with public critiques of their methods and findings and, on the other, must be careful not to advocate for any specific uptake of their research. That is a tight corner to occupy.