As part of our ongoing events series with The Philosopher, Peter Vickers, Professor of Philosophy at Durham University, UK, sat down with Jana Bacevic to discuss his new book, Identifying Future-Proof Science (2022). Over the course of their wide-ranging conversation, they discuss the meaning of certainty, reaching consensus, the history of science, and much more. Below is a transcript of their conversation, which has been lightly edited for clarity.



Anthony Morgan: Hi there, and thanks so much for joining today’s event. This is the latest in the Philosophy Today event series, cohosted by The Philosopher and Boston Review.

The context for today’s event is the publication of Peter Vickers’s new book Identifying Future-Proof Science (2022). The book asks one of the most fundamental questions in philosophy of science: Is science getting at the truth? One of the tactics of skeptics about whether science does get at the truth is to point out that scientists who were sure in the past ended up being wrong. So in the event today, Peter will try to defend science against this form of skepticism, and will furthermore argue that we can comfortably identify many scientific claims that are future-proof; in other words, they will last forever so long as science continues. Peter Vickers is joined in conversation today by Jana Bacevic, a colleague of his at Durham University.

Knowledge is linked to communities; all of us accept that there are at least certain scientific facts that are established.

Jana Bacevic: Great. So the topic of Peter’s book addresses something that has a lot to do with the way in which the humanities, including philosophy, and in particular philosophy of science, have been engaging with society. And that’s how we can know that scientific knowledge claims are true or correct, and, most importantly, judging by the title of Peter’s book, reliable, or more likely to hold true for a period in the future. So, Peter, before we start, I’m just going to ask you to give us a very short summary of your main argument in the book.

 

Peter Vickers: Well, thank you. Yes, I’ll try to be brief. Obviously there’s a lot in there, but what motivated me perhaps more than anything was the fact that philosophy of science has traditionally presented itself as split on the science and truth question between realists on the one hand, and anti-realists on the other. It has projected itself to outsiders as largely divided on that question, with realists at most 70 percent and anti-realists at 30 percent. So still a pretty even split. The more I’ve investigated my own community, the more I’ve come to realize that actually, all of us, or nearly all of us, accept that there are at least certain scientific facts that are established, even the so-called anti-realists, because anti-realism usually comes with various caveats, and conditions, and so on.

I thought that what we needed to do was present that consensus of opinion from within our community. Although there is this realism, anti-realism debate, where we look split on the science and truth question, all of us are pro-science in the sense that, for example, hardly anyone in the community doubts climate change. Hardly anyone in the community doubts that viruses exist and cause diseases like COVID, and many other examples as well. I think that we just don’t say that enough, that there’s this pro-science consensus within the philosophy of science community. So the book’s trying to say, look, this exists. It starts in Chapter 2 with trying to explain this other debate, this realism/anti-realism debate where we look largely divided, and then it goes on to try and argue that when it comes to the existence of at least some established scientific facts, we are largely in agreement—even if we were just to list the facts differently. It tries to explain why that’s a reasonable position, even though in the history of science, we’ve had many scientific revolutions. So, the idea is to substantiate this claim in light of the big lessons from the history of science, which on the face of it tell us to be very humble about what we could claim to actually know for sure.

 

JB: That’s great, because that brings me to the first substantive question I had, which is: What do you see as the value of certainty? Especially recently, a lot of people have argued in favor of epistemological humility. This includes a forum in Boston Review that I, alongside a lot of other people, contributed to, which was led by Sheila Jasanoff. Obviously a lot of this argument, at least to me, feels like it comes from the perspective that we recognize that scientific claims about certainty have not only on certain occasions been slightly embarrassing for us, or for those of us who also identify as scientists, but mainly that they have turned out to be wrong. If you want to, you can perhaps talk a bit about several examples that you do engage with that fall within this category. Such claims often create this artificial rift between scientists, academics, or people who have some sort of epistemic privilege, and so-called lay people or lay knowers. We can come across as patronizing if we say “well, of course we all agree that climate change is real and is happening” or “of course we all know that this is the case.” Then why is it that other people do not, or why is it so difficult to convince others?

Would you say that perhaps certainty is also, in a manner of speaking, a communicative strategy, or perhaps even a public relations or public engagement strategy? In other words, how do you see the value of certainty, both for the scientific and philosophical community, and obviously for those we supposedly claim or aim to engage?

 

PV: I’ll start with the certainty question. So many people react to this by saying, there’s no certainty in science; everything’s fallible, everything’s a theory, and we could have more or less good reasons. The only certainty is in geometry or mathematics. That’s usually the claim. So, given a certain puzzle in geometry, you can show, you can prove with some sort of rigorous proof, that such and such is the case. You never get that in science. And so I’m not using certainty in that strong sense, because then nothing would be certain in science. It also wouldn’t be certain that World War II even happened if you want proof in that really strong sense that you get in math and geometry.

Now, there’s another sense of certainty which is much more common, which is just that something is far beyond reasonable doubt. Just take the claim that dinosaurs roamed the Earth millions of years ago. This was once a speculation or a hypothesis. Then it became a theory. At some point it became beyond reasonable doubt. I think you could say that it was already beyond reasonable doubt 100 years ago. But today in 2022, I would describe that as far beyond reasonable doubt, while still not proven, of course, in the sense of mathematics. So I’m using the word certainty as more or less a synonym for far beyond reasonable doubt. I think there’s this epistemic space to be explored there, which is short of certainty in the strong sense, but which goes way beyond saying this is just a good theory or this is our front-runner theory for now. I think any anti-realist in the philosophy of science would agree that dinosaurs once roamed the Earth; they would say, “Well, I’m not an anti-realist about that kind of case. I’m anti-realist about genes or electrons.” And I am specifically thinking about something that banal, simply that dinosaurs once did exist. So that’s the sense of certainty I have in mind.

Philosophy of science has traditionally presented itself as split on the science and truth question between realists on the one hand, and anti-realists on the other.

I think what’s useful about it is that it just hasn’t been said and it hasn’t been seen as professionally acceptable to explore this space. I think it’s just an epistemic space that hasn’t been explored enough. And as I said earlier, what’s been preferred is this realism/anti-realism debate where we present ourselves as two sides. There’s a complicated story to tell there about why we ended up going down that line.

Karl Popper is still a great influence in many circles. When we teach Popper in philosophy of science, we obviously teach the virtues of his philosophy, but also, as we see it, its many vices. Many parts of his philosophy are considered quite strange today. For example, Popper once said that evolutionary theory wasn’t science; it was a metaphysical research program. I think those were the words that he used, because it didn’t fit in with his philosophy of science. Obviously, to scientists working in evolutionary theory, that was outrageous. To them it obviously was science: they were scientists, and they were studying it. There were other areas of Popper’s philosophy as well—one was that we can never say that something is approaching the truth, or that we’re approaching the truth. Everything was always an idea to attempt to falsify. But some of these examples show fairly clearly that that is either the wrong idea, or at least a fringe idea. If that were your view, then you would still be trying to falsify the idea that dinosaurs once roamed the Earth, or even that the Earth turns on its axis. People forget that that was once a hypothesis. It became a solid theory; at some point, it became beyond reasonable doubt; and then eventually it became far beyond reasonable doubt. Today we would call that certain: the Earth spins on its axis. That’s a result that has been established by scientific labor. But if you’re a strict Popperian, you’d be saying that we should still be trying to falsify this. In a case like that, it seems absurd.

That’s the background history of philosophy of science that I’m reacting to. It is a reaction to Thomas Kuhn as well. Kuhn’s book, The Structure of Scientific Revolutions (1962), is perhaps the most famous philosophy of science book of the twentieth century. Many people still consider themselves Kuhnians, and obviously at its heart there’s this idea of paradigms and scientific revolutions. Many people who’ve absorbed that book say, well, this shows that we are in a paradigm now, and we should expect revolutions in the future just as we’ve had them in the past. I’m reacting to that as well, trying to argue in the book that at least with some things in science, we shouldn’t be expecting revolutions in the future. Can we actually imagine a future, you know, where people look back and say, oh, they used to believe in dinosaurs, that was just their paradigm, we don’t believe in them anymore? Or, they used to believe smoking causes cancer, that was just their paradigm? There are cases where Kuhn’s argument of symmetry—revolutions in the past, therefore revolutions in the future—can go too far, or start to look absurd.

 

JB: You’ve mentioned several things I was going to pick up on. Of course, the legacy or the long shadow of the Popper/Kuhn phase, or the Popper/Kuhn framing in the history and philosophy of science, is one of those. I think it makes sense to say that a lot of discussion around the concept of epistemic humility today, if it doesn’t necessarily evolve from there, then certainly relies on both arguments and reproduces them in some way. I think one of the things that you’ve hinted at, and that the book actually does very well, is that in contemporary iterations, those arguments barely delve into the substance of how science is done. They barely delve into the substance of what scientists believe in, whether it is the epistemic precepts of their work, the actual broader context in which they operate, or generic beliefs that perhaps do not have anything to do with their specific narrow disciplinary field or area of expertise. The polling that tries to determine whether even anti-realists believe that some things are in fact real is one of the good examples of this, and one that I often use in these cases.

I would like to go a bit deeper into the epistemic space you have mentioned, because I think that obviously one of the core questions is: In what epistemic space are these arguments and these discussions playing out today? And what does that mean, both for what kind of discussion your book is aiming or intending to contribute to, and for how we can make use of those arguments, or related or historical arguments, including the falsification debate or its legacy, in the context of, say, science’s social life?

There’s another sense of certainty which is much more common, which is just that something is far beyond reasonable doubt.

I would start with a very broad question to you, which is: Who is the primary audience? Is there a primary audience at all for your book? Who does the book engage with? Because on the one hand, you do engage with the realism/anti-realism debate, which is often considered a more narrowly, properly philosophical debate. (Although I must say, in my experience, it is one that increasingly attracts people who are not philosophers, and often those arguments really play out on the non-philosophical plane.) But then again, discussions about certainty are about the degrees to which we can be confident about certain kinds of scientific knowledge claims. For instance, the Intergovernmental Panel on Climate Change (IPCC) reports today use confidence levels, although in slightly more colloquial or popular parlance, to actually try to argue for specific policy interventions. That also happens in that epistemic space. You can either treat this as a very broad question, or as several questions within one. That is, how can the book speak to different audiences?

 

PV: Ultimately I’m hoping that it will have an impact on quite a wide audience and on attitudes toward the relationship between science and truth. It’s supposed to have some impact on raising the profile of science as a premier means of getting at the truth, at least sometimes. Of course, there are many, many cases where we have to say that we just don’t know. We have a good theory, but it could be overturned next week, next month. Sometimes there’s a consensus that a theory is the front runner, but there’s not a consensus that it’s true. We’re far from that. It’s just the best, and everyone agrees that’s the best theory we have right now. The basic idea, first and foremost, that I want to put across is that there is such a thing as established scientific fact. That may seem banal to some people, especially when you tell them the kind of examples you have in mind (like dinosaurs once existed). I’m not trying to be ambitious with the examples. I still think it’s important to have a book which establishes that there are such things. And then we can look at other examples which don’t quite meet my high bar for counting as established scientific fact, but nevertheless come close in some respects: examples where there’s a strong consensus, but not an absolutely solid scientific consensus, and there’s still some debate happening.

One of the examples I discuss in the book is the asteroid impact/extinction of the dinosaurs case, where there is still some debate happening in the community. But you can still say that there’s a strong consensus on one side. And of course there are lots of examples where there’s still uncertainty, but we have to act. In those cases, we might openly say, look, there are no solid facts here to work with; the scientific process is still happening. But at least we can say: if you look at the criteria that would establish something as a fact, you can see more or less how close it is; you can gauge how close we are to that.

To start with, the audience is going to be a very wide audience: everyone who cares about the relationship between science and truth and has to make decisions based on that. The COVID pandemic is a good example for many of us who didn’t think much about science, at least in our everyday lives. Someone like me obviously thinks about science a lot as a job, as a profession, but not in my everyday life. It doesn’t come up much. But in the case of COVID it does, because I have to make decisions about how to act, and we all have to make those decisions, and we’re being informed ultimately by a scientific community, even if that sort of went via politicians.

There are these cases that come up in our lives where we have to make judgements about whether we’re going to trust scientific claims, and, if so, which scientists we are going to listen to, especially if different scientists are telling us different things. One thing I’m hoping to do is to give people some tools to make those kinds of judgements, which are a little bit different from the tools that have been used or offered in the past. If you think about what we learned at school in high school physics, chemistry, and so on, one thing that I end up arguing in the book is that those tools aren’t that helpful, because the vast majority of us don’t know enough chemistry or physics or biology to judge scientific claims. We’re not reading the literature. We’re not looking at journal articles. Or if we do, we look at one or two. We don’t have time to review lots and lots of journal articles. So the shortcut, according to the book, is to try to judge the strength of scientific community opinion. That’s a whole different set of tools that you need. You need to be able to identify the relevant scientific community in the first place, and that depends on what the issue is. If it’s COVID, then the opinions of cosmologists probably are not relevant.

Then you have to have tools for trying to work out whether there’s serious debate within the community, or whether there’s a weak consensus—wherein we might say that 80 percent of scientists think that vaccines are effective. Or whether there’s a really strong consensus, which would be maybe above 90 or 95 percent. You never get to 100 percent in science, so you can’t ask for that. You can’t say, well, I’ll only trust when there’s 100 percent consensus. That’s why, in the book, I opt for this high bar of 95 percent as a useful compromise between 100 percent, which is obviously too high, and lower numbers, which would invite possible counterexamples from the history of science.

The basic idea is then to get people thinking about the tools that they would need to judge these things. Instead of trying to learn the science and work out for yourself, as the scientist would, whether COVID is caused by a virus or something else, there’s this other route to making decisions, which is to identify who the experts are and see if there’s a solid community consensus. We aren’t taught those tools at school. I do argue in the book, in the final chapter, that this ought to be part of our schooling, this social epistemology—how to measure the opinions of expert communities, which is not easy to do, but it’s easier than trying to learn all the science yourself, and trying to be the scientist. So that’s certainly one audience.

I would also mention scientists themselves, because I think scientists often don’t know when to call something a fact. An IPCC author recently said this explicitly in an article. When they’re writing a report about climate change, and something is not yet a fact, they have to put in brackets after the statement “high confidence” or “very high confidence.” But if it is an established scientific fact, they can just state it; it doesn’t have to have that qualifier, “very high confidence,” on the end. So this actually matters for the way these scientists write those reports. When interviewed recently, this author said, we could really do with a theory of when a theory becomes a fact. Ernst Mayr, the famous biologist, once said essentially the same thing. He was surprised that philosophers hadn’t come up with an account of when something becomes a scientific fact. That hasn’t really been on the agenda for philosophers of science, even though it’s something the scientists themselves want or even need. There are various different audiences, but that gives you an idea of the ones that I had in mind.

 

JB: I would say now, with my hat on as a sociologist, which is a hat I don’t often wear, that other communities have come up with definitions of when something is a fact. Bruno Latour, who passed away recently, in his famous discussion of facticity, does more or less exactly this. But it is a social account.

Perhaps we can go back to how a focus on the social elements of knowledge production, especially justification and legitimation, contributed, not always in good ways, to the anti-realist fire, or to the ways in which some anti-realists have managed to use their arguments to opt out when it comes to things such as climate change. I had a brush with that in some senses, myself, when I published this infamous article about how there’s no such thing as “following the science” at the very start of the COVID-19 pandemic, the point of which was obviously to draw attention to the fact that the problem of political accountability is a problem of politicians, not a problem of science. But some people have taken it to be an anti-realist claim, saying, oh, well, we don’t actually know anything about COVID. Whereas, in fact, we already knew a lot about COVID, and these things turned out to be reliable scientific fact. Which actually brings me to the third area that I wanted to explore.

The way to wash out biases is to take the considered judgement of a whole community that has somehow formed a consensus around an idea.

Speaking of different audiences, this is more properly my domain of research and interest. I fully agree with your argument that a lot of this has to do with the fact that people are not taught how to analyze, compare, and evaluate arguments—especially if those arguments are too disciplinarily specific, or if they require too high an epistemic threshold to understand. Elizabeth Anderson has also suggested something that I think is very similar to your proposal in terms of how to determine or have confidence in expert agreement, or around what should be considered proper expert consensus. But I’m more interested in the other side of knowledge claims.

Why is it that people—if we take into account that people sometimes claim not to know, or in some ways choose not to know—choose not to believe science, or choose not to even know what science is? Not necessarily because they cannot understand it, or because it’s too complex, or because they don’t have the background knowledge necessary to assess the validity or reliability of particular scientific claims, but perhaps because there are other reasons. In the piece that I wrote for The Philosopher about a year and a half ago, on epistemic autonomy and the free nose guide problem, I engaged with this: the problem, or the concept, of epistemic autonomy. I really like the dinosaur extinction example because it’s a clear one, where in some ways nothing hinges on whether someone accepts the asteroid extinction theory or not. Regardless of whether you believe that an asteroid hit was what contributed to, if not directly caused, the extinction of the dinosaurs, your life will not really change much. So what would make people want to claim that this is something that they’re not certain about? How should we approach such claims, or such skepticism, when it is not only, or not at all, an outcome of ignorance in the traditional sense—of a lack of knowledge, or a lack of epistemic capability—as much as an expression of a desire for epistemic autonomy or independence, no matter how misdirected?

 

PV: I do touch on this in the book quite a few times. There’s this quote from Douglas Allchin, who’s a philosopher of education amongst other things, saying that a big goal of education for a long time was intellectual independence for all. One of the ideas behind this is that we just need to get everybody more educated about science, and then we can all benefit from that. Because often the problem is that people are making judgements from a place of significant ignorance, and we just need to improve science education, and people can make better choices about whether to get their children vaccinated, or whatever the agenda may be. Allchin argues that it would be better to teach people to trust the expert consensus. I think that’s what he’s saying. And that’s basically what my book is saying.

Now, why would that be the case? Well, my book argues that everyone’s in the same position. And this is crucial. Even the individual scientist doesn’t get to just look at the science and decide what it says. They themselves are just one person in this huge machine, so they themselves have to do the social epistemology. In that sense, they’re in the same boat as you and I, because scientists have all sorts of different instincts. Some are more conservative than others. Some are mavericks. Some embrace ideas very early on, when the evidence isn’t really there, but go with them anyway, because they want to be ahead of the curve. We’ve seen this in many cases in the history of science.

If you take atomism, for example, there was a range of opinions in the scientific community—from the ones who jumped to atomism and felt like they knew it was true straight away, to the ones who lagged behind for a long time, with some who tried to be more in the middle. But the idea behind the consensus approach is that, when you wash out all those differences, it’s the community judgement as a whole that really tells you the way to go. And it’s only when the whole community is pulling behind an idea that you can say: despite all the differences in that community, the scientists’ different political affiliations, different personalities, the ones who are more cautious, the ones who are less cautious, despite all of that diversity in the community, they’ve reached this big consensus. And that doesn’t happen easily.

Although there is this realism, anti-realism debate, where we look split on the science and truth question, all of us are pro-science.

Looking at cases from the history of science, there really has to be a huge amount of evidence to get a whole community that diverse to pull together in the same direction. If it’s at the community level that we can trust opinion, then the individual scientist can’t say, oh, you lay person will have to listen to me, I’m the expert. They themselves don’t get to just look at the science and decide. They themselves have to look to the community just like anybody else. So there isn’t this big distinction between the way the lay person accesses information (they get told what to believe) and the way the scientist accesses information (they actually look at the science). It’s not like that. I argue that we’re actually all in the same boat. Scientists can make big mistakes, and we’ve seen this in the history of science.

In the book I mention some of the most famous quotes where a scientist has been supremely confident about an idea and said, this will never be overturned, and then it is overturned. One reason that happens is that they have just one perspective on the evidence. They have all of their specific background assumptions, which bias them in one way or another, and they are very confident, as senior scientists, that they can make the judgement on their own. But that judgement is extremely impoverished compared with the whole community’s judgement. We have these famous quotes where individual scientists, often quite senior scientists, come out and say, this is certain. And then it’s wrong. What is much rarer is for the whole community to form a really strong consensus, and then for that to be wrong.

I think people want intellectual independence. Ideally, you would get an education that enabled you to go and look at the science for yourself and make up your own mind about vaccines, climate change, the link between smoking and cancer, whatever it is, so that you could look for yourself rather than just trusting somebody’s judgement in a book. I don’t think we can do that. These issues are too complicated. We have our own biases as well. Each individual has to accept their own biases; each scientist has theirs. The way to wash out the biases is to take the considered judgement of a whole community that has somehow formed a consensus around an idea, despite all of their different background assumptions, all of their diversity. So the idea is to say, we each must come to terms with the fact that we can’t go out there and make these judgements for ourselves very reliably in most cases. But we can still get some idea of the science. We can still learn about the greenhouse effect and get some sense of why the Earth is warming. But the details are bound to elude us at some point.

 

JB: So you do circle back to epistemic humility, which I think is a great argument, because normally certainty and humility are seen as, if not mutually exclusive, then opposing. In that sense, I think this is brilliant. I want to ask you for a tiny comment on one of the things that I’m particularly interested in, in relation to the argument of the book, and that is the claim that some scientific claims can be considered to be true and will hold as long as science continues. What does “as long as science continues” mean for you? What is the caveat associated with that? What are the social, or social-epistemological, conditions under which science continues in more or less unchanged, or sufficiently unchanged, form from today?

 

PV: I haven’t thought about possible evolutions of the human race very carefully. The basic idea there is that we can imagine a future where scientific communities have been almost completely eroded, and, in that case, the community wouldn’t be there to make those judgements. I think there are various things that need to be in place. You need a large community, because if you only have a small community you can have less confidence that it has reached its consensus for the right reasons. There’s a limit: imagine a community of ten people. They could easily all agree, but that’s not a very reliable indication of the truth if there’s only ten of them. That’s just the extreme. If the community’s large and diverse, with different backgrounds, a reasonable gender balance, and balance along political spectra as well, that’s how you wash out the biases. You need that large, diverse community in place, with the kind of freedom of thought that most scientists do have today. If that were to disappear (and we could imagine reasonable scenarios where it would be gone a few hundred years from now), we’d lose our very best way of identifying truth. Knowledge is linked to communities; that’s the basic idea. But I haven’t worked out the details of that story yet.

 

JB: Starting from the last thing you said, I’m going to group two questions we have, because it seems to me that they in fact address the same thing. One is that you seem to mostly rely on, or provide, examples from the natural sciences; would you say that this applies to social scientists, or the social sciences, as well? And then we also have a point which to me is very similar, which basically says that the existing consensus about the economic and political system, which I think is probably the capitalist consensus, can be compared to the scientific consensus in the sense that anti-capitalist thinking and anti-capitalist ideology are marginalized. Would you say that your argument would still apply to the social sciences and humanities?

People should learn to trust the expert consensus, as it is at the community level that we can trust opinion.

PV: I think the social sciences are more difficult. With the social sciences, we don’t have the kind of evidence base that I could produce in the book. In the book I present as evidence these examples where a consensus formed, and then later on we developed technologies which enabled us to look and see what the consensus was about, and which showed the consensus was correct. An example would be continental drift—a strong consensus formed, and it met my criteria, but then later on we put satellites in space, which can see the drift happening in real time. And there are lots of examples like this, where we develop technologies to make something observable that was previously merely theoretical.

In the book, I argue that in every case where we’ve met that high consensus threshold, and then later developed technologies to look and see the thing that was previously theoretical, the consensus has been correct. There’s never been a case where we met that high bar and then the technology showed that the idea was wrong. This is an important argument in the book. When it comes to the social sciences, we don’t have that kind of argument in place: examples where we had a theory, my criteria of a really strong consensus in a large, international, diverse scientific community were met, and then later on we developed some technology which could essentially prove that the idea was correct and thus link consensus with truth, as in the continental drift case. I don’t think you can present quite the same argument for the social sciences. When it comes to examples from the social sciences, I struggle to think of any. Maybe there are one or two in psychology where there’s a really strong, solid, international scientific consensus, and it would be something like “depression is a genuine mental disorder, not just a mood or a character trait.” But even then, I don’t think you’d get an international consensus, because even the very terms are disputed—the word “disorder” would probably be disputed.

 

JB: I’m going to jump in with another audience question. Speaking of consensus, and your investment in consensus, what do you see as the main difference between your book and Naomi Oreskes’s Why Trust Science? (2019), given that she also focused on consensus as a metric of reliability? Is it simply a philosopher’s versus a historian’s perspective?

 

PV: I discuss Oreskes’s book quite a bit, and I do present my work as building on hers. I think she’s done fantastic work, especially Merchants of Doubt (2010), and I think Why Trust Science? is a good work as well. But there’s something that I think is confusing, and that is that Oreskes says a few times that history shows that we can’t be sure about anything in science. That’s directly opposed to what I’m saying. She puts the focus on consensus and a second-order evidence approach, evidence about evidence, and that’s my approach as well. But she definitely wants it to be about why we should trust science. She says explicitly a few times in her work, not just in that book but in some of her scientific papers, that history shows that nothing is absolutely for sure, and that we have to be extremely modest. She makes statements which sound very anti-realist. So in the book I do discuss her work quite a bit. It’s a bit nuanced, because I’m building on her work and I think a lot of it is great, but then there’s what I see as a tension in it. Though her work is very pro-science, many historians do take this approach of saying that history shows that we’ve been overconfident many times in the past, and we shouldn’t be overconfident now; we should be extremely modest. They also like the symmetry argument: loads of changes in the past 500 years of scientific thought mean we should expect loads of changes in the next 500 years. I do agree with her up to a point, but I also think that we can find things that we can be confident will continue into the future.

 

JB: Moving on to a room question. Any thoughts about the lessons to be learned from the ongoing controversies about the scope of scientific claims? For example, Benjamin Libet’s claims about disproving free will?

 

PV: It’s not easy to find scientific claims that I think are future-proof. In Chapter 1, I give thirty examples, but they weren’t that easy to articulate. As soon as you bring in claims that are more philosophical, suddenly there’s huge disagreement. Even with a word like “causes”: you would never get 100 percent consensus on “smoking causes lung cancer,” because there would be a significant number of scientists who would say, it all depends on what you mean by “causes.” That’s a contested word. There are loads of philosophies about what it could mean, and different theories of causation. So you have to be really cautious, I think, with your terminology, even with what on the face of it seem like the most innocent claims. As soon as you bring in a word like “will,” or a term like “free will,” certainly any consensus is destroyed. It’s the same with continental drift. As soon as you start mentioning plate tectonics, you realize there are debated issues in plate tectonics. The basic idea that South America split from Africa some 140 million years ago, that is an established scientific fact. But as soon as you start to say anything about why, you have to be really careful about the words you use to describe the claim.

 

JB: That brings me to the last two questions, which I will merge. One is whether realists and anti-realists agree on the facts: one questioner says that anti-realists tend, or seem, to be more on the side of post-truth, and that realists hold onto a strong fact/fiction distinction. Another person is asking something that I think is related, which is: Can you clarify how scientists can resist the spread of falsified knowledge—for instance, conspiracy theories—or protect scientific knowledge in a pandemic? Do we need the strong realist claim? Is post-truth an enemy of science in the context of a pandemic?

 

PV: I sort of see myself as fighting against conspiracy theories, but there is a fine line, and it’s not always obvious where it is, between when a dissenting scientific community is so small that you can ignore it, and when it is significant enough that you have to say that there’s a strong consensus, but it’s not universal.

As I say, if you wait for 100 percent consensus, you’ll never have it. But there are all these cases where we’ve got about a 95 percent consensus. In all those cases, it’s helpful for the consensus community to be pushed by the small minority; it forces them to work much harder to establish their claim.