This is the interview with Hugo Mercier from episode 009 of the You Are Not So Smart Podcast.
Hugo Mercier is a researcher at the French National Center for Scientific Research who shook up both psychology and philosophy with a 2011 paper, “Why do humans reason? Arguments for an argumentative theory” (PDF), which proposed that humans evolved reason to both produce and evaluate arguments. Respected and well-known names in psychology like Steven Pinker and Jonathan Haidt have praised the paper as one of the most important works in years on the science of rationality. You can find his website here.
David: Hugo, I’m so happy to have you on the show. Your model, the argumentative theory, is so fascinating, and it suggests something really interesting: that arguing isn’t really a bad thing at all. Could you explain, from the perspective of your theory, what is the purpose of arguing?
Hugo: Well, thank you very much, David, for this interview and for the kind words regarding our theory. So, the purpose of arguing is to overcome what Dan Sperber, my co-author, calls trust bottlenecks. Sometimes someone you know will tell you something, and you won’t take it on trust. You won’t just take their word for it. In those kinds of cases, it becomes very useful to have argumentation, to be able to convince people, rather than just give up, or try to use your power or something else to get them to change their mind. So we can see, I think, how this sort of mechanism would have been useful throughout our evolution. Even in a very everyday situation, if you’re discussing with your partner and you’re having a minor disagreement over what kind of movie to see or whatnot, if we were unable to exchange reasons for why it’s better to see such-and-such a movie, or why it’s better to make a right rather than a left, we wouldn’t be able to make any kind of smart, quality decision. So I think that’s why argumentation is so crucial. And it might be useful to make a semantic clarification here. In English, arguing has two meanings; it’s a problem we don’t really have in French, which is my native language. One meaning is kind of nasty, which is when you’re having a fight with your spouse or something like that, and it’s just a shouting match. The other is just exchanging arguments, and it is only this second meaning we have in mind. So we’re obviously not saying that people should shout at each other, just that they should exchange reasons for their points of view.
David: So, if I’m understanding correctly, it’s sort of like it evolved to be a communication facilitator. Sort of a filter.
Hugo: Exactly.
David: A filter between the sender and the receiver of new information.
Hugo: Yeah.
David: So when it comes to this, I think we’ve all experienced, especially with the internet and social media and tools like Twitter, that people tend to get into a lot of arguments online, in both senses you were describing: the nasty kind and the kind where you’re trying to get people to see your point of view. What keeps people from just arguing constantly and never moving past their own points of view?
Hugo: Well, it obviously depends on the context. If we just look at internet comments, it might make one despair about people’s ability to look beyond their own point of view. Although even on the internet it depends: some forums have very well-educated, well-spoken, just reasonable people who are willing to change their minds. Not everything is YouTube comments. But at least if we’re right, and I think the experimental results support our theory, people have to be able to change their minds for argumentation to make sense in the first place. If people just exchanged arguments and never changed their minds when confronted with good arguments, then people would stop doing it. There would be no point in bringing arguments forward if there was no chance at all that they would ever be effective. So the very fact that we put forward arguments means that somehow we hope they can succeed, and I think if they really never succeeded, people would stop; at least evolutionarily, it wouldn’t be stable. It’s like, if everybody was deaf, people would stop talking, and it’s the same thing with arguments. I think it’s also very tempting when we think of arguments, perhaps because of this double meaning the word has in English, to think of the worst type of arguments: when you’re talking politics with someone from the other side, and no one really budges. Whereas in fact, most arguments in the more general meaning, most of the reasons we exchange in everyday life, are about much more trivial matters, such as who should be cooking tonight and what movie we should watch, those sorts of things. And in those kinds of cases, arguments do work fairly well.
David: Now, what’s great about your research is that it sort of re-imagines the very idea of reasoning and rationality. In your work, you say that reasoning doesn’t necessarily lead to more accurate or better estimates of correctness, or even superior moral judgments. So if that’s true, what, from your perspective, is the purpose of reasoning?
Hugo: That’s a very good point. What we’re saying is that reasoning doesn’t do that for the individual reasoner. When people try to reason on their own, at least in many cases, it’s going to have either no consequence, because people will just find reasons for whatever they were already thinking, stop there, and not do anything more, or in some cases it can even lead to bad consequences. There is this thing that psychologists call the confirmation bias, and you’ve talked about it a lot in your previous writings. This is basically the tendency, when people reason, to mostly find arguments for whatever belief or decision they already have. And when you’re reasoning on your own, you don’t have any sounding board. You don’t have anyone to tell you that there are other points of view to consider, or that your arguments might not be so strong, or that there might be counterarguments. As a result, you’re likely to become more polarized and even more confident that you’re right. So that’s why, when people reason on their own, they often don’t really improve their beliefs or their decisions, or make more moral decisions. On the other hand, reasoning should lead to these good consequences when people are reasoning with each other, when they exchange arguments, when they are arguing in the nice meaning of the word. And I think, again, that a lot of experimental and historical evidence supports this contention: when two people are arguing about a mathematical problem or a logical problem or a factual problem, on many different types of issues, the one who is right is actually more likely to convince the other than the other way around. So in the end, at least one of them has better beliefs than what she started with. In many cases both of them will end up with better beliefs, because they’ll be able to tell, well, you were right on that point, and I was right on that point, and we’re just going to combine these beliefs and make a better one together.
David: And how does your view differ from the conventional view of reasoning, would you say?
Hugo: The main difference is that, for the conventional view, the lone reasoner should be doing all of that. The classic view of reasoning is that people should reason on their own before making a decision, in order to make sure they’re right. And even though that’s all well and good as a recommendation, it just so happens that people are not really equipped to do it in many cases. It really goes against the grain to cautiously examine our own beliefs and make sure we have good reasons, rather than just find more arguments for what we already believe. So the difference is not that the classical view says reasoning leads to better beliefs and we’re saying it doesn’t. The classical view is that reasoning leads to better beliefs and better decisions when people reason on their own. What we’re saying is that these positive outcomes are much more likely to be reached when people reason together.
David: See, I love the way you approach this, because often the very first thing I try to get across in, say, a lecture is that people tend to put reason and rationality and logic on one side, and rationalization and bias and everything else on the other side. They see it as one thing versus the other in trying to get to better decisions, because you assume that the scientific, reasonable, measured response to information coming into our brains is to use reason to overcome all of our shortcomings, right? And from your perspective, reasoning isn’t necessarily all about that. It is flawed, but it’s about something more. In fact, when The New York Times wrote about you, they somewhat sensationally said that reasoning evolved to win arguments. Could you unpack that idea?
Hugo: Yeah, I’d be happy to. That is basically one half of what we’re arguing for. Our contention is that reasoning evolved to argue, to exchange arguments, which means two things. One is to produce arguments: if I want to argue with someone, I must be able to produce arguments and defend my point of view. The other, as I was pointing out earlier, is to evaluate other people’s arguments. If you’re unable to reject weak arguments, then you’re going to be convinced by the slightest pretext, the slightest excuse for a reason, and that’s going to be bad. On the other hand, if you are never convinced, that’s bad as well, because sometimes you’re wrong, and you want to be convinced by people who have better opinions and better beliefs. So if we’re right, reasoning evolved for these two things, which are basically one activity, argumentation, which entails both producing and evaluating arguments. And the thing that happened in the popular press, including in that New York Times article, was that people emphasized the more sensationalistic aspect, which is that reasoning just evolved to win arguments. That’s just half of what we’re saying. If you want to win an argument, the other person has to change their mind. Not everybody can win an argument; there has to be a side that sort of “loses.” And in many cases, losing an argument means ending up with a better belief: if the other person was right and you were wrong, then you’re better off losing the argument. So it’s really these two things that don’t make sense one without the other. It can’t just be about winning arguments if no one is producing them, and if you’re just producing arguments and no one is ever winning them, that doesn’t really make any sense either.
David: You also say in your work that reasoning doesn’t necessarily push people toward the best decisions, either. Instead, reasoning often pushes people toward decisions that are easier to justify. What does that mean?
Hugo: I’m going to give you an example. The general idea is that, in many cases in the modern world, we face decisions about which we have few intuitions. We don’t really know what to do; we don’t have any strong opinion to start with. When that happens, reasoning is not going to have a strong confirmation bias, because we don’t have any pre-established belief or decision to confirm in the first place. So it could be tempting to think, and many people have suggested, that in these cases reasoning should do a good job: since I don’t have any strong preference, reasoning should allow me to really get at the heart of things, weigh things carefully, and arrive at better decisions. What seems to happen instead is that people go for the most easily justifiable decision, which is not always the best one. So here’s the example, a very nice study that was done, I think, by Chris Hsee and his colleagues. Participants do a psychology experiment, and afterward they are given the choice between two chocolates to take as a gift. One chocolate is small and cheap and in the shape of a lovely heart. The other is bigger and more expensive, but it is in the shape of a disgusting-looking cockroach. If you ask people, “Which one will you enjoy the most?” most people will say, “I’m going to enjoy the heart-shaped chocolate most,” because you don’t really want to eat anything that looks like a roach. However, when you ask people, “Which one will you pick?” most people will actually pick the roach-shaped chocolate. And it seems they do that because they can’t really manage to justify picking the other one. They think, well, obviously it’s rational to pick the biggest, most expensive thing, and the fact that it’s shaped like a roach shouldn’t really have any impact, because it’s not an actual roach, it just looks like one. And so people end up making what is probably a poor decision, given what we know of the psychology of disgust: they are not going to enjoy that chocolate. But they can’t really justify doing anything else.
David: And you bring this up several times, and this is one of the greatest things about your paper if you’re interested in this sort of thing, because there’s a deep interest in it right now in popular thought. There are many books about irrationality; I’m writing about it, lots of people are writing about it. You look at several of the studies that have risen to become famous examples, and you re-frame what is happening in them and give a better, or at least a different, explanation. One of my favorites in your paper is the disjunction effect, which involves that weird coin flip experiment. The subject is told that if a coin flip comes up in their favor, they win $200, and if it doesn’t, they lose $100. You give the person a single coin flip, and then, whether they win or lose, you ask them, “Would you like to play again?” The interesting thing is, when people win, they tend to say, “Yeah, I’d like to play again,” because they feel like they have less to lose, since they just won some money. And even if they lose, they’ll also say, “Yeah, I would like to play again,” to try to win it back. So it’s an interesting scenario, because win or lose, people tend to go for a second round. But you point out that if you don’t let people know the outcome of the original flip, most people choose not to flip again, even though it’s something they would have done had they known the outcome either way. And you explain that, from your perspective, they didn’t have a chance to reason it out.
Hugo: Well, our interpretation of that experiment is basically in line with what was suggested originally, I think by Tversky and [Eldar] Shafir, who first put it forward. The idea is that the reasons for flipping the coin another time when you’ve won the first time, and when you’ve lost the first time, are different. When you’ve won, you think, “Well, maybe I’m on a streak. I’ve just made $200 and I can easily spare $100, so I don’t care, I’m going to flip again.” On the other hand, if you lose, what you tell yourself is, “Well, maybe I’m going to do it again, because the odds are good, and I’m probably going to make up for that first loss, so that I end up ahead, or at least not losing any money.” The problem is that when people don’t know which outcome happened, they don’t have a good reason to keep playing, because these two reasons, although they’re independently good, go in different directions; they’re not really compatible with each other. And because they don’t feel they have a good reason to justify playing a second time, people say, “Well, I’m just not going to do it.” It may be easier to picture this with another example we give in the paper, a lovely experiment probably from the same source. You’re given a hypothetical opportunity to buy a very heavily discounted holiday trip to Hawaii, and the trip would take place after an exam you’re going to take. In one condition, people are told, “Look, you know you’ve passed the exam. Do you want to buy the great trip to Hawaii?” And people are like, “Sure, I’ll buy the trip to Hawaii. I deserve a reward for all this hard work.” Some people are told, “You did not pass the exam. Do you want to buy the trip?” And people are like, “Sure, I’ll buy the trip. I need a break, I need to take my mind off this failure.” And then some other people, as in the coin toss experiment, are told that they don’t know whether they’ve passed the exam or not. And these people don’t buy the trip, because the reasons for buying the trip if you passed the exam and the reasons for buying it if you didn’t are sort of opposites. They no longer have a single good reason to buy the trip. And people don’t take the extra step of thinking, “Well, I’ve got good reasons one way and good reasons the other way, therefore I actually do have a good reason overall.” They just don’t go that way.
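[A quick illustrative sketch in Python, for readers who want the numbers behind the coin-flip example. The stakes are the ones quoted above; the expected-value framing is an editorial illustration, not part of the original study’s analysis.]

# One flip of the gamble described above: 50% chance to win $200, 50% chance to lose $100.
p_win, win, loss = 0.5, 200, -100
ev = p_win * win + (1 - p_win) * loss
print(f"Expected value of one flip: ${ev:.0f}")  # prints: Expected value of one flip: $50

# The earlier outcome changes your bankroll, not the bet itself, so a consistent
# chooser would accept (or refuse) the second flip in all three conditions:
# after a win, after a loss, or without knowing the result.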
David: It’s fascinating, because a lot of these experiments illustrate that if a person is given the opportunity to rationalize their choice, to reason it out, to come up with a story to explain themselves to themselves, it drastically alters their behavior. And if they’re not given the opportunity to do that, their behavior will go in another direction, which lends a lot of credence to what you’re suggesting. Especially with the sunk cost fallacy: traditionally, when people write about the sunk cost fallacy, they write about how people make a really poor decision because they’re trying to justify how much they’ve already invested in a project, and so they keep investing in it, even past the point when it would be wise to abandon it. But you write about how people engaging in that behavior, if they’re given the opportunity to justify waste, will actually tend not to be so trapped by the sunk cost fallacy. So from your perspective, what’s going on there? What is the difference between falling for the sunk cost fallacy and not falling for it, because of reasoning?
Hugo: Well, the idea in the sunk cost fallacy is that people find it hard to justify something they see as wasteful. When they have a choice between keeping on investing in something, even though they’re never going to make any money out of it and it’s going to keep costing them, and just abandoning a project they’ve already heavily invested in, most people will choose to keep investing in that project. What’s interesting is that, for instance, children, who feel less pressure to justify everything they do, tend to be much less susceptible to that fallacy. They just do whatever is right at the moment, rather than feeling, “Okay, gosh, now I’ve got all that money spent already. I can’t justify just dropping everything now.” On the other hand, as you’re saying, if you give people a valid justification for dropping everything, they’ll do it. So on some level they realize what the most rational thing to do is, but as long as they don’t feel they can justify it properly, they don’t do it. What’s also interesting in this case is what happens with, let’s say, business school students or students of economics who have been taught about the sunk cost fallacy. If you ask these people to reason more about it, they are more likely to get it right, because they are more likely to remember what they’ve been taught in class, and then they have a good reason not to commit the fallacy, because they’ve been taught, “Look, you shouldn’t be doing that. The sunk cost fallacy is a bad thing.” So basically, if you haven’t been taught Economics 101, the most easily available justifications in that case point to the wrong decision. But if you’ve been taught a bit of economics, once you’ve read about the fallacy, the most available justifications point in the right direction. What’s interesting about that is that it shows it’s not so much whether you’re a good reasoner that matters, but whether you know what the rules are, what you’ve learned, and what the people in your environment, the people who matter to you, are going to think are good reasons. And I think that’s one of the important messages of our theory: to reason well, it’s much more important to be in the right context than to be a good reasoner intrinsically. We’re not saying there are no interpersonal differences in reasoning skills at all; I’m sure there are. But if you take even an extremely smart, genius-type Newton kind of person and put them on their own to reason about something, in many cases it’s not going to work so well. Whereas the same person, or even a person of lesser ability, if you put them in a group and let them discuss and argue with people who disagree with them, is going to do just fine.
David: Right, and that really sort of blew my mind. You bring up a lot of times in the paper that “truth wins,” I think you say it over and over again. In some of the more famous experiments, like the Wason selection task and the Wason matching task, the things that are often taught in psychology classes, individuals doing those tests fail a lot. But when people are put into groups, it changes the dynamic so much that they don’t fail as often. What is the difference between arguing in your own mind and arguing with others in a group, and how does it improve the outcome?
Hugo: Basically, argumentation is just playing its role: it lets the people who have the right answer convince the others. That’s most of what’s happening. I’ll give you an example. We don’t have any published data on this, but I’ve got some data, so I know it works well, and it makes sense given what we know from other experiments. And it’s much easier to explain than the Wason selection task. Psychologists now use a lot a problem known as the bat and ball, which is a small mathematical trick. You’ve got a bat and a ball, and the two of them together cost $1.10. You also know that the bat costs $1 more than the ball. The question is: how much does the ball cost? Most people answer that the ball costs 10 cents. In fact, the correct answer is not 10 cents, otherwise it wouldn’t be fun for psychologists. Ten cents doesn’t work, because if the bat costs $1 more than the ball, then the bat has to cost $1.10, and then the two of them together cost $1.20, not $1.10 as they should. The correct answer is 5 cents: then the bat costs $1.05 and it all fits. If you give that problem to individuals to solve on their own, most people will have a first, strong, immediate intuition that the answer is 10 cents. In most cases they’re not going to answer right away; they’re going to reason about it. What they think they’re doing is making sure they’re right, but what reasoning is mostly doing is finding reasons for their wrong intuition. People are going to think, “Well, sure, there’s this $1, there’s this $1.10, it has to be 10 cents,” and it can all seem to make sense. They don’t realize that their reasons are poor and that their intuition is wrong. On the other hand, if you put people in a group, basically two things can happen. Either everybody got it wrong, in which case nothing much is going to happen; people are not going to argue, because they all agree on the wrong answer. Or someone got it right, and that person is always going to be able to convince everyone that it is the right answer. As a result, if you put people in groups, performance improves dramatically, just because they are likely to have been confronted with someone who has the right answer. And what’s important in this case is that it really has to be argumentation. If you just tell people, “Look, the right answer is 5 cents,” or if you give them the choice between 5 cents and 10 cents, they don’t change their minds. They don’t just realize, “Oh gosh, obviously I was wrong, it’s 5 cents.” You have to actually talk them into accepting the 5 cents. So that’s a case in which reasoning does extraordinarily well what it’s supposed to do: it allows the people who have the right answer to find arguments to convince the others, and it allows those who had the wrong answer to accept these arguments and see their strength. Sometimes it takes a bit of time, maybe 5 or 10 minutes. I’ve done this with a lot of people, and there was only one person, once, whom I wasn’t able to convince, someone who was not very mathematically literate.
But basically, as soon as people are able to grasp the mathematics, which is pretty basic, everybody becomes convinced very quickly that they were wrong and that the correct answer is something else.
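[The bat-and-ball arithmetic, checked in a few lines of Python. The prices are the ones Mercier quotes; the snippet is just an editorial illustration.]

# If the ball costs `ball` dollars, the bat costs one dollar more.
for ball in (0.10, 0.05):
    bat = ball + 1.00
    print(f"ball ${ball:.2f} + bat ${bat:.2f} = ${ball + bat:.2f}")
# ball $0.10 + bat $1.10 = $1.20  (20 cents too much)
# ball $0.05 + bat $1.05 = $1.10  (matches the stated total)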
David: That’s fascinating. And I think you can read into, not only what’s been written before, but the way you’re presenting it, that people have intuitions and existing opinions and beliefs, and then they examine those things and try to justify why they feel the way they feel. And that comes across as, well, that’s just how a lot of people approach reality, right? But despite all of that, you write in your paper that you don’t see reasoning as a flawed mechanism. Why is that? After everything we’ve said about it, why is it still not a flawed mechanism from your perspective?
Hugo: It’s because people have been thinking about it the wrong way. They’ve been thinking about it as a tool for individual cognition. If that were the function of reasoning, it would be terrible; it would be the least well-adapted mechanism that ever showed its face, basically doing the exact opposite of what you’d like it to do. And that’s why people have indeed been writing about reasoning as being flawed. If you’re reasoning on your own and you want to make sure you’re right about something, you want to do two things: you want to find reasons why you might be wrong, and you want to make sure that the reasons bearing on whether you’re right or wrong are good, sound reasons. And reasoning does the opposite of that. It only finds, or mostly finds, reasons why you’re right, showing that you’re right anyway. And it doesn’t really care whether the reasons are good or not; it’s very superficial, very shallow, satisfied with rather poor reasons. All of that looks terrible for the individual reasoner, and indeed it is, and that’s why we have all the negative outcomes we were talking about earlier. On the other hand, if you think of reasoning as something designed for argumentation, then all of that makes sense. If you want to convince someone, having a confirmation bias is exactly what you’d expect and what you’d want. If I want to convince you, finding arguments against my point of view is not going to be tremendously helpful. And interestingly as well, if you’re in an interactive context, if we’re exchanging arguments together, I don’t have to find extremely strong arguments. I don’t have to be like a lawyer who prepares her arguments for days and days. I can just give you an argument, and if the argument is good, you’re going to be convinced, and that’s fine. If it’s not really good, you’re going to give me a counterargument, and I’m going to address that counterargument, or try out another argument. So these features of reasoning that make it look like a really flawed mechanism, if you think of it as something that serves individual purposes, all make sense if you think of it as something that is for argumentation. It becomes something extremely well tailored to the task, in a way that I find quite inspiring, even sort of beautiful. Whenever you see something in nature that is well adapted to a given task, it’s always quite nice to see how well things work.
David: Yeah, I had the same feeling reading your work. People get down when they read a lot about confirmation bias and motivated reasoning and logical fallacies and all that stuff. They tend to get sort of depressed, in a weird way, and say, “Well, if we’re so flawed, what is the point? How do we get over all these things?” And you can correct me if I’m wrong, but the way I’m reading your perspective is that the way we generate our arguments is flawed, but the way we evaluate arguments is pretty good. You actually write in the paper, and I’m quoting you here, that in most discussions, “rather than looking for flaws in our own arguments, it’s easier to let the other person find them,” and only then adjust our arguments if necessary. And that’s basically how science works: people generate arguments, other people attack those arguments, and over a long period of time we get closer to the real truth. Is that how you see it?
Hugo: Exactly. Although I would just slightly object to characterizing the way we produce arguments as flawed. It’s biased, but not every bias has to be a flaw.
David: Right.
Hugo: For instance, if you have a mechanism that aims at avoiding poisonous mushrooms, you want it to be biased toward saying that a mushroom is poisonous rather than palatable, not the other way around: it’s better to avoid a mushroom that was fine than to get poisoned because you thought it was palatable. So clearly we have these biases. We have the confirmation bias; we tend to use shallow arguments. But this is all fine in the context of a discussion. Not only is it fine, it’s probably even optimal, in that it creates, as you were saying with the idea that we let other people find flaws in our arguments rather than doing it ourselves, a nice division of cognitive labor. For me to be able to anticipate why you might disagree with me, I would have to do a lot of cognitive work, because you’ve got a lot of beliefs I don’t have, and it would be very hard for me to anticipate why you might think such and such. To give you an everyday life example: if I want to convince you to go to a given restaurant, let’s say a Japanese restaurant, you might have very different reasons for not wanting to go. You might not like Japanese food. You might think it’s too expensive. Maybe you went to a Japanese restaurant two days ago. I can’t anticipate all of that. So if I just tell you, “Well, look, we should go to that restaurant, it’s a good restaurant,” then you might tell me, “Actually, I don’t really like Japanese food,” in which case I could tell you, “But they also serve Thai food,” or something. I don’t have to anticipate every possible reason, and I can’t even anticipate every possible reason you might have to disagree with me. So it’s much better to just start with a kind of weak argument, like saying, “Well, it’s a good restaurant,” rather than listing all the potential arguments you might have for wanting to go there, or spending hours anticipating: “Okay, so maybe he doesn’t want to eat there because he doesn’t like Japanese food,” blah blah blah. You can really feel that in the way you’re comfortable with people you know well: you’re comfortable making suggestions without having to think too carefully about what you’re saying. Whereas if you try that with someone you don’t know, on a first date or something, you’re going to overthink everything, because you don’t really know the person and you want to be very careful not to say anything foolish. It’s extremely uneconomical and kind of painful and effortful and not very pleasant. When you know people, you can skip all that and just say things, and if that doesn’t work, that’s fine.
David: Right. And if I’m reading you correctly, you could see it as a very adaptive response to many different minds with many different perspectives, because it can scale up to bigger problems. You write about how we abolished slavery over the course of many, many different arguments from many different perspectives being considered for a very long time. And in our country we’re doing the same thing; in every country there are ideas being bandied about through reasonable debate. And eventually we assume that we’ll come to the better conclusion, because we debate, right?
Hugo: Yeah. Obviously the moral cases are always tricky, because it’s harder to tell what is right or wrong. There are things like slavery where it’s easy for us to say, “Well, look, that was wrong.” But if you adopt a kind of evolutionary point of view, in which people should mostly be looking after their own welfare, it’s not clear that reasoning leads to more moral outcomes, even in the best circumstances, when people are free to argue as much as they want. It should just lead to outcomes that are better for the individuals who are discussing with each other, not necessarily for anyone else. What is nice about the history of the abolition of slavery, or at least the case I know best, the abolition of the slave trade in Great Britain, is that we know arguments really played a critical role in the process. Psychologists and others have argued that when it comes to big moral changes like this, it’s mostly a matter of emotions and conformity: basically, once many people change their minds, other people follow, and it’s not necessarily clear why the first people changed their minds. So people tend to say, “Look, arguments don’t work for these sorts of things.” And in fact they do, in that case anyway, and I believe in other cases as well. In late 18th and early 19th century Great Britain, a lot of the population, and at least the most important members of the population in that case, the members of parliament, were really convinced by good arguments that it was wrong, and they were convinced even though it was not necessarily the best thing to do economically. Arguments really played a critical role in that process. So I think we still have to be hopeful. And as for the pace of change, I’m sure that for people living at the time it seemed extremely slow. It took many years, and the Napoleonic Wars intervened and kind of screwed things up for a while. Apologies for that on behalf of France, by the way. But when we look at it now, it seems to have been pretty fast, given the scale of the change: we went from slavery being completely allowed in Britain in the late 18th century to it being completely outlawed a few decades later. Obviously, from the point of view of activists nowadays, any change is going to appear extremely slow, but that doesn’t mean there is no change. Convincing millions of people is not going to happen very quickly; that’s not surprising. But it’s good to know that it can happen.
David: Yeah, it gives me hope. Your work specifically gives me hope that those YouTube comments that we were talking about–
Hugo: Yeah.
David: And that when people clash online about all sorts of different things, there is actually something positive happening that is invisible in the moment. Many of these people have never been confronted with an opposing viewpoint, or even a different viewpoint, maybe ever in their lives. And now, because they got online, they’re like, “Oh wait, people think things that I don’t think.”
Hugo: Yeah.
David: And maybe there’s a positive thing happening there.
Hugo: I think there is. Unfortunately, I don’t know that literature well enough, but there is a substantial literature in political science and media studies on the effect of the internet on political beliefs. And it seems that, contrary to a lot of popular opinion (the American population is the best studied, for instance), people are not actually much more polarized than they were 10 or 20 years ago. Politicians are much more polarized, but people haven’t changed all that much, most people anyway. And when people go online, they don’t only read things that support their opinion. As you were saying, they also read things that go the other way. And even if they only read papers that support their opinion, there will be the comments section, with people from the other side commenting. I would say the question of whether the overall effect is positive or negative is still open, but I can’t imagine it being really negative on the whole, given that previous improvements in communication technology have always resulted in better beliefs on the whole. When printing came out, I’m sure there were people saying, “Well, look, it’s…” and actually that was the case, so it’s a good example. When printing came out, a lot of people were worried by all the really, really bad publications being circulated. There was a lot of astrology, a lot of superstition, being printed because people wanted to read it. And so people were worried about printing. They said, “Well, look, it’s really getting out of hand. People are getting access to a lot of knowledge that they won’t understand, and the bad things are going to spread more than the good things.” And in retrospect, it all seems completely ridiculous. Seriously, how could anyone ever question printing as a way of improving knowledge?
David: Right.
Hugo: And I think, I hope, the same thing is going to happen with the internet. On the whole, it’s going to prove to be a force for the best, especially from our point of view, because it allows people to argue in a way that printing doesn’t really. There’s not much interaction you can have with a book; you can write a reply, but most people are never going to do that. So by enabling much, much more argumentation, the internet should, on the whole, be very helpful in making people change their minds and adopt better beliefs.
David: Well, Hugo, this is a great place to stop, even though I don’t want to, because I love talking to you about this topic. I can tell you that your work is really going to enhance everything that I do and how I think about things, because you gave us the opportunity to answer the question of “why?”, which is something that has been very difficult to answer. In both psychology and philosophy, I think you have moved the conversation forward really well. I think that’s fantastic, and I thank you for it.
Hugo: Well, thank you very much. That’s very kind of you. And it’s great for us to know that we’re having some kind of impact on people’s thoughts.
David: So if someone wanted to keep up with your work and find you, how could they do that?
Hugo: I’ve got a website; if you Google “Hugo Mercier,” you’ll find it. There is a news page with the new stuff that is sometimes updated. I’ve got two small children and a lot of work, so it’s not updated as much as it should be, but it is updated from time to time. Most importantly, we’re writing a book, with Dan Sperber, my former PhD thesis adviser and the co-author of the paper you were referring to. It’s going to be aimed at a broader audience than standard academic writing, a first try for each of us at slightly more popular writing. I don’t know how well we’ll do it, but we’ll try anyway. At some point the book will come out, and hopefully people who’ve been interested in the argumentative theory, from the paper and the press coverage it got, will enjoy it.
David: Oh that’s great, I’m glad to hear that. Alright. Well thank you so much, it was great having you on the show.
Hugo: Well, thank you very much, David. It was great to be on the show.