Transcript: Interview with Julia Galef from Episode 073

This is the interview with Julia Galef from episode 073 of the You Are Not So Smart Podcast.


Julia Galef is the president and co-founder of the Center for Applied Rationality, a non-profit organization that trains people and organizations to make better decisions. She is also the host of the Rationally Speaking podcast and has written for Slate, Science, Scientific American, and Popular Science.

David: This is going to come up, and people are going to sort of just drop it casually in conversation sometimes. I – like you, found that this was like the– This was the keystone that like made everything else make sense to me. I was like, “Oh God, this is extremely useful.” And very powerful. So first, people who have never ever, ever heard of what this is. What is this thing?

Julia: So Bayes’ rule, or Bayes’ theorem, is just a simple theorem from probability that tells you how a probability should change in response to new evidence. And if you want to make the small or large leap – depending on who you ask – from that theorem to a way of approaching the world, you could rephrase the rule as being about not just an abstract probability, but about how much your belief – your degree of confidence in some idea – should change as you encounter new evidence that’s relevant to that belief.

David: Right, and this sounds extremely complicated. And when you first look at it, it seems that way. It’s one of those math formulas that you look at and go, “No thank you.” But it’s not actually a very complicated math formula. It’s just basically 4 parts – a little division: 2 things multiplied together over another thing, and that equals out to a certain probability. So the purpose of it is – it’s a way of–

You can work out the math, or you can even draw it out on a piece of paper and graph it. But it’s a way to think about what happens when new information comes along and affects an existing idea or belief or probability. And you’ve said before – in your videos on YouTube that it made you realize that your beliefs– The theorem made you realize that your beliefs are grayscale. What does that mean?
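[A minimal sketch of the “4 parts” David is gesturing at, in Python. The prior and likelihood numbers here are invented purely for illustration; the denominator is expanded with the law of total probability.]

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return P(H | E) from P(H), P(E | H), and P(E | not H)."""
    numerator = p_evidence_given_h * prior                 # the "2 things multiplied together"
    p_evidence = (p_evidence_given_h * prior
                  + p_evidence_given_not_h * (1 - prior))  # the "another thing" underneath
    return numerator / p_evidence

# Invented numbers: a 30% prior, and evidence twice as likely if the hypothesis is true.
print(bayes_update(prior=0.30, p_evidence_given_h=0.8, p_evidence_given_not_h=0.4))  # ≈ 0.46
```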

Julia: Well, so I think everyone already has some intuitive understanding of beliefs being grayscale, because we do use that framework for a lot of beliefs. Like if you apply for a job, and you’re waiting to hear back about the job – you get new pieces of information that might shift your confidence that you did in fact get the job. Let’s say you get a new piece of information that – oh – it turns out they were really looking for people with the degree you have. That might boost your confidence that you got the job. Then you might also find out that they had an unusually large number of applicants this year, and that might lower your confidence that you got the job.

So you’re not 0% sure or 100% sure that you got the job, but somewhere in between, and that level fluctuates. And this feels very natural, and just like what we would do by default, right? The reason that I think the concept of thinking in grayscale is important is that we don’t apply that framework reliably or systematically to all of our beliefs. So to take something towards the other end of the spectrum, a lot of us have political or other ideological beliefs that – if asked, we may claim we’re not 100% certain. Although some people actually do. I’ve had this experience many times. People will say, “No, I’m 100% certain that the Democrats are the better party,” or whatever. But even if you pay lip service to the idea that we’re not 100% certain, we don’t act that way. We don’t actually let our confidence in the proposition that a Democrat would make a better President fluctuate in response to new information or new evidence, the way that we would with the question, “Did I get the job?”

David: Yeah, and I’ve – the project I’m working on now, I’ve spoken to a lot of people who sort of switched their belief system from going one way to the other. And I feel like – it may not be true, but it seems that it’s true that we tend to have– We tend to consider our beliefs to be just true or false. Just they are. This is true, or it’s not true. And when you play around with the Bayes stuff, you start to – you start to see a new type of thinking tool manifest that suggests– Instead of thinking everything is either a zero or 100, that you think more in terms of – how confident are you, that what you– What you believe is true? And it is a different way of seeing the world, that I think might be not our default setting. What do you think about that?

Julia: Right. So actually this is important to bring up. I alluded earlier to this leap – small or large leap – to go from this uncontroversial Bayes’ theorem to a kind of Bayesian mindset, or Bayesian epistemology, way of viewing the world. And one element of that leap is the idea that it makes sense, or it can be meaningful, to talk in terms of your beliefs being probabilities. So – okay, rewind a little bit. If you roll a die, you can say there’s a 1 in 6 probability of it coming up 3. If you flip a coin, there’s a 1 in 2 probability of it coming up heads. And you could define these probabilities in terms of frequencies – like 1 in every 6 rolls, the die will come up 3 on average. And that sort of makes sense. But it gets a little fuzzier when you’re talking about things that don’t have a well-defined probability distribution like a die roll or a coin flip. So something like – when Hillary Clinton becomes President – assuming she becomes President – is inflation going to go up or down? You can’t repeat that experiment again and again and again. Even if you could have Hillary be President 10 terms in a row – lots of things about the world have changed, and so it’s not really the same independent experiment, the way a coin flip is independent each time. So it doesn’t really make sense to talk about the probability that inflation will go up or down under Hillary’s Presidency in frequentist terms, the same way you could define the probability of getting a 3 when you roll a die in terms of frequencies. So you have to talk instead about your subjective confidence – your credence, as we sometimes say, in that proposition. And some people – some philosophers, for example – aren’t willing to make that leap and say that you can talk about probabilities in that way. I think it makes sense. A lot of people think it makes sense. And one argument in favor of it making sense is that you can cash out those subjective probabilities in terms of the odds at which you would be willing to bet. That’s a way to define what I mean by probability. Like, I would accept 2 to 1 odds that inflation will go up if Hillary becomes President – that kind of thing. So anyway, that’s a premise underlying the kind of Bayesian mindset or Bayesian epistemology that I think is important to highlight explicitly.
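[A small sketch of the betting interpretation, assuming “2 to 1 odds” is read as odds in favor of the proposition; the conversion itself is standard, the reading is an assumption.]

```python
def odds_to_probability(odds_for, odds_against):
    """Odds of a:b in favor of a proposition correspond to probability a / (a + b)."""
    return odds_for / (odds_for + odds_against)

# "2 to 1 odds that inflation will go up", read here as odds in favor:
print(odds_to_probability(2, 1))  # ≈ 0.67 – a credence of roughly two-thirds
```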

David: The way this finally, like, totally snapped together in my mind was a thought experiment, an intuition pump that kind of floats around in the world of statisticians and skeptics and the like. It’s the idea that if we have a quarter, let’s say – or you have a trick coin that always comes up the same way, either always heads or always tails, but you don’t know which way the trick is set up. And you flip it, and it lands in your hand, and you put your hand over it. And you ask yourself, “What are the odds that it’s either heads or tails?” And obviously it’s always going to come up one or the other, but you don’t know which way it’s come up. And the idea behind the thought experiment is to sort of point out that the probability isn’t necessarily in the coin. Probabilities are something that brains come up with to try to make sense of the world. Is that sort of a good way of looking at it?

Julia: Right, yeah, probability is a feature of your state of mind and your degree of knowledge about the world, not a feature of the world itself. So – I like that thought experiment, but another way to get at it is: if I flip a coin and I look at it, but I’m hiding it from you – I’m covering it with my hand – and someone asks each of us, “What is the probability the coin came up heads?”, we’ll have different probabilities, right? Your probability should be one half, ’cause you don’t know, and coins have a 50/50 chance of coming up either way. But me – I know that the coin came up tails. So for me, the probability is basically 0%, building in a tiny, tiny bit of wiggle room for maybe my misremembering or misperceiving the coin. So we have different probabilities, and that doesn’t mean one of us is wrong. It just means that we have different states of knowledge about the world, different epistemologies.

David: And this is such an important point to make. The way I made it make sense to myself – because I’m a dork – is that I was thinking about Data from Star Trek, who could probably watch a coin flip and know exactly what it’s going to land on, off of the initial conditions of the coin flip: the position of the finger, the air movement in the room, the weight – all these millions and millions of factors that really are in place whenever you flip a coin, and that also mean it’s not a pure 50/50, even in the real world.

Julia: Right. So there isn’t really – we don’t really have randomness in the world, except at the quantum level, I guess. We just have lack of full information.

David: So that means – if I flip a coin in front of Data, he’s going to have a different probability than I’m going to have, like you were saying in your example. Which means the probability’s not in the coin; the probability’s in our brains.

Julia: Right.

David: Okay. So if the probability – and this is the path you were about to take us down – that means that we don’t have perfect knowledge of the world. And therefore, our probabilities right now necessarily have to change when we get new information. They either have to go up or down. And this – oddly enough – is not really the way we tend to think about belief. You had an example. I think an example that will work well for people listening would be the car crash example. Could you go through that?

Julia: Yeah. So, I think a typical thing that we might do if we got into a minor car accident – unless it was clearly, obviously our fault; usually it’s more in a gray area – is ask ourselves, “Well, is there a story that I can tell, a plausible story, about how this car crash was not my fault? Did the guy turn too quickly and not leave me enough time to brake? Or was that sign not visible enough?” And often there is a plausible story that you can tell. And if so, we feel like – well, we’ve just found a story that allows us to maintain our current level of confidence in our driving ability, therefore we don’t need to update our confidence at all. And often the story will be true. Often it was not your fault. But you don’t know for sure. And the fact that you got into a car accident does provide you some evidence that you’re less good of a driver than you thought you were. So probably at this point, I should explain the Bayesian definition of evidence, because that’s kind of crucial for making the case that you should update your confidence in response to this evidence. The Bayesian definition of evidence asks you to compare how likely or probable something is in one world, in which a hypothesis is true, compared to a different world, in which the hypothesis is false. So an intuitive way to grasp this definition would be – David, if I asked you, “Hey, I just got my hair done, how do you like it?” and you said, “Oh, it looks good,” how much evidence does that answer provide for me about the idea that you actually think my hair looks good? Well, not a ton of evidence. I mean, I don’t know you that well. But people in general, if you ask them, “How do you like my hair?”, most people will say, “It looks fine,” or, “It looks good,” even in the world in which they don’t really think it looks good. And so the fact that they said it looks good doesn’t really help you distinguish which of those worlds you’re in, because that evidence would look the same – probably look the same – in both worlds. Whereas if you ask someone renowned for their honesty – like I have this friend, let’s call her “Jane,” who speaks her mind – and if I ask Jane, “How does my hair look?” and Jane says, “It looks good,” that evidence is very unlikely to occur in the world in which Jane doesn’t think my hair looks good. Like, it could still occur. Maybe she has recently resolved to try to be more tactful? Or maybe she has reason to believe that I’m in a particularly fragile state today, and she just – will make an exception to her usual rule of honesty. It could still happen, but it’s much less likely to happen in the world in which she doesn’t like my hair than in the world in which she does. So if that evidence occurs, that’s strong evidence for the hypothesis that she likes my hair. So that’s the Bayesian definition of evidence. Returning to the car crash example, the question that we should be asking ourselves after that car crash is – how much likelier is it for me to get into a car crash in the world in which I’m not a good driver, compared to the world in which I am a good driver? And I’ve made that a binary question, like good versus not-good driver. Obviously there’s lots of different levels of good you could be. I’m just sort of simplifying it to illustrate the example. But I think it’s pretty self-evident that people who are not good drivers are – on average – more likely to get into car accidents than people who are good drivers. So the fact that you got into a car accident – even if there is a plausible story you could tell about why it might not be your fault – should downgrade your confidence in the hypothesis, “I am a good driver.” At least a little bit.
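[A sketch of the update being described here, with entirely made-up numbers for how likely a crash is for good versus not-good drivers.]

```python
def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Bayes' rule: P(H | E) for hypothesis H and evidence E."""
    return (p_e_given_h * prior) / (
        p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    )

# H = "I am a good driver". Hypothetical numbers:
prior_good = 0.9            # I was fairly confident before the crash
p_crash_if_good = 0.02      # chance of a crash this year if I'm a good driver
p_crash_if_not_good = 0.06  # three times likelier if I'm not

print(posterior(prior_good, p_crash_if_good, p_crash_if_not_good))
# ≈ 0.75 – still probably a good driver, but less confident than before
```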

David: Yeah, this is – so this is a great example of how we are resistant to admitting that we’re wrong. And that’s the necessary step to updating your beliefs and being correct. And in this example – if you’re fiddling around with your smartphone, and you’re trying to fish a Dorito out from underneath the seat, and then you hit somebody in front of you – you do the typical thing that human beings do, which is hem and haw and hedge and justify, and do all these mental gymnastics so that you can maintain that you are in fact a good driver: “But this is an outlier case. This is an anomalous event that doesn’t count.” That’s not the Bayesian way of looking at things. The Bayesian way of looking at things is to say, “Well, this is new evidence, let’s apply this evidence to our existing hypothesis.” Or, if you have a complete theoretical model, let’s apply it to that complete theoretical model and see what happens – see how it adjusts the sliders. And you would have to admit – I mean, it feels like such a great, a totally new way of looking at the world, compared to what you’re expected to do. And in this case, you’d have to admit that my confidence in my driving ability has gone down.

Julia: I mean, to be clear – I do think that we do instinctively apply Bayesian reasoning sometimes.

David: Right, okay.

Julia: Like in social situations, often I think we have an instinct for Bayesian reasoning. Although not always – it does go wrong sometimes. But in the case of, like, interpreting how likely someone is to be telling the truth, for example, or how much you should update on their comment – I think we do have an instinct. It’s just that, like many principles of thinking, we don’t apply it consistently. And having marinated in the Bayesian framework, I think, just makes you apply it more consistently.

David: It’s fascinating that you mention that, in social situations. ‘Cause you know in psychology – in many psychology experiments– I think one that comes to mind is the – oh man, what is it called? The test with the cards. I’ll have to look it up.

Julia: The Wason–?

David: Yes, that’s it. Yes, exactly.

Julia: I actually don’t know how to pronounce that – Wason? Wason?

David: Yeah, yeah. I have heard it Wason and Wason. Let’s just do Wason. Yeah, the selection test. Which is – you lay out 4 cards, and 2 have numbers and 2 have colors. Or sometimes 2 have letters and 2 have colors.

Julia: Yeah, I can explain the original test a little more if you want?

David: Yeah, go for it.

Julia: People are asked to determine whether these 4 cards laid out on the table violate a certain rule. And the rule is something like, “All cards with even numbers on one side have a red dot on the other side,” or something. And so some of the cards are showing numbers, some of the cards are showing dots. And people are allowed to turn over 2 cards, I think? I forget how many they’re allowed to turn over. But basically – people will always think to turn over the cards with even numbers, to check if they have red dots on the other side. But they often won’t think to turn over the cards that have, say, a green dot. Because if you turn over a card with a green dot, and it turns out on the other side is an even number, then that’s a violation of the rule. But that’s just not the most obvious, self-evident thing to check if you’re looking for a violation. That doesn’t come naturally to people. But if you take the exact same structure of problem – an isomorphic problem – but reskin it with a social context, so you ask people, “Hey, here are 4 people. Check to make sure that these 4 people don’t violate the rule: everyone who’s drinking must be over 21.” People do it then. They’ll check to make sure that people who are drinking are over 21, and they’ll also check to make sure that people who are underage are not drinking.
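[A tiny sketch of the card-checking logic, with hypothetical card faces; the rule and faces below are stand-ins, not the original experiment’s materials.]

```python
# Which visible card faces could reveal a violation of the rule
# "every card with an even number on one side has a red dot on the other"?

def must_check(visible_face):
    """Return True if turning this card over could falsify the rule."""
    if isinstance(visible_face, int):
        # An even number might hide a non-red dot -> must check.
        return visible_face % 2 == 0
    # A non-red dot might hide an even number -> must check.
    return visible_face != "red"

cards = [8, 3, "red", "green"]
print([card for card in cards if must_check(card)])
# [8, 'green'] – the even number and the non-red dot, mirroring
# "check the drinkers' ages and the underage people's drinks".
```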

David: Right.

Julia: Same problem. But in a social situation, we know what to do.

David: That’s such a great example. Because it totally shows what you were saying earlier. In one context, we go Bayesian, or in one context we’ll go with probabilistic thinking. And in another context, it just turns into math and it doesn’t make sense and I don’t understand. And we fail to follow through with seeking disconfirmatory evidence for our hypothesis, which often is the best path to finding the truth of what is and is not so about this particular assumption that we’re making.

Julia: Yeah, although I don’t think it’s the case that in all social situations – or even systematically in social situations – we instinctively apply Bayesian thinking. In fact, I have a recent example from myself, in which I was not being Bayesian, and then I caught myself later. I was having a meeting with someone – a colleague of mine, who I’m going to call Bob. And Bob was complaining about this mutual colleague of ours, who I’ll call Amy. And for background, I had prior reason to believe that Bob was jealous of, or competitive with, Amy. I thought he might feel like she had sort of infringed on his turf, work-wise. Anyway, so he was complaining about how she had been late to some meetings recently. And I’m sitting there thinking to myself, “Well, I knew it, this just proves I was right. Bob’s jealous of Amy, that’s why he’s complaining about her.” And then later, I thought to ask myself, “Well okay, how much evidence does Bob’s complaining about Amy actually provide for my hypothesis that he is jealous of Amy?” So I did that thought experiment of comparing these 2 hypothetical worlds: one in which I’m right, and Bob is jealous of Amy, and one in which I’m wrong, and Bob is not jealous of Amy. I asked how likely this evidence – the complaining – is to occur in each world. And I realized – even in the world in which he’s not jealous of Amy, I would still think it was quite likely that he would complain. Because he doesn’t like when people are late to things. And also he complains a lot. So maybe it’s slightly less likely that he would complain in the world in which he wasn’t jealous. So maybe this provides a little bit of evidence for that hypothesis – but not much. I shouldn’t be updating nearly as strongly towards that hypothesis as I had been instinctively, because I had just been asking myself, “Does this evidence, this complaining, fit with my hypothesis of the jealousy?” And it does fit. I had just neglected the fact that it also fits with the alternate hypothesis.

David: Yeah, there’s another example you give in one of your YouTube videos that I really like, about the idea of – I think a plumber or someone like that comes to your house, but they start snooping around in your other rooms. And so you think to yourself – is this person casing my house to rob me? And then–

Julia: Casing the joint.

David: Yeah, so is there a– How do you apply Bayesian reasoning to that situation?

Julia: Okay, so at this point I should say that there’s 2 big parts to Bayesian updating. There’s your prior belief, your prior confidence – like, how confident were you in this belief before you got the evidence? And then there’s the evidence itself – how strong is that evidence? How much should you update away from your prior? So the plumber example is more a case of how it’s important to be Bayesian with respect to remembering your prior. So let me first give a classic example of Bayesian updating, and then I’ll tell the story of the plumber who came to our house. This is a question that gets posed to doctors and doctors in training. And it’s a very simple question that doctors should be able to get right, but in fact a huge proportion of them get it wrong. The question is: someone comes in for a breast cancer screening, and your breast cancer screening test is – let’s say it’s 90% accurate. So 90% of the time when someone does have breast cancer, the test successfully says, “You have breast cancer.” And 90% of the time when they don’t have breast cancer, the test successfully says, “You don’t have breast cancer.” Someone comes in, they take the test. The test says, “You have breast cancer.” What is the probability that they actually do have breast cancer? And I’m pausing so people can develop their intuitive guess.

David: And there’s an important aside here, which is that this is not just a textbook example. This is something that – in psychology experiments – has been asked of doctors and people in medical school. And I’ll let you continue from there.

Julia: Yes, exactly. I mean, it’s a real example, it’s not just a made-up word problem. This sort of thing does happen all the time. So some people will say, “Well, the probability’s 90%, ’cause the test is 90% accurate.” What they’re forgetting is that the probability depends in part on how common breast cancer is, right? Like, if only 1 in a billion people had breast cancer, then even though the test said you had breast cancer, you should still not be that confident. Because it’s more likely that the test made a mistake than that you are the 1 in a billion person, right? So that prior probability should factor in. And many doctors do – they do remember that. And so they try to sort of adjust intuitively. They’re like, “Okay, so the test is pretty accurate. But also, like, most people don’t have breast cancer, so it’s probably something like 70 or 80% likely they have breast cancer.” The true probability is much lower. I don’t remember the actual statistics in the population, but let’s say 1 in 1000 people has breast cancer, and the test is wrong 1 in 10 times. It’s still much more likely that the test made a mistake than that you have breast cancer. So this is an example of how we often forget about the prior probability – we just focus on the new evidence. A social example, since those are easier to grok intuitively: I call my friend, she doesn’t pick up her call, and she never calls me back. If I were a socially anxious person – hypothetically – I might intuitively feel like, “Oh God, she’s mad at me, she hates me, she doesn’t want to be friends with me anymore.” And yes, someone not returning your call is a little bit of evidence that they don’t like you, or are mad at you or something. But I have all this other prior reason to be very confident that she likes me and is probably not mad at me. And I’m implicitly neglecting all of that prior confidence that I should have, that my friend is not mad at me. So going back to the example of the plumber – this was something that happened to me in real life a few months ago. I was renting this Airbnb with some colleagues of mine, and we hired someone to come check out some plumbing issue. And while they were in the house, we saw them, like, looking into other rooms that were clearly not the bathroom or kitchen. And also they just seemed a little shifty. I don’t know? Who knows? Maybe we were paranoid. It was a pretty bad neighborhood or something. And so after they left, we were discussing amongst ourselves – well, were they sort of checking out the other rooms to see what valuables there are? Because we had heard stories about people – like plumbers who are sort of in league with robbers – checking out houses to see, “Is this worth robbing?”, etc., and then telling their friends, and then the robbers coming back later that night, or something like that. So we were feeling worried about this, and we were wondering, “Should we leave the house? Should we go get a different Airbnb?” And the way we made the decision was, we looked up base rates – which is, sorry, another name for priors – base rates on how common home robberies are in that area. And then we estimated, “How strong is this evidence? Like, in a world in which we are going to get robbed, how likely is it that we would see this evidence of the plumbers looking in different rooms – compared to a world in which we aren’t going to get robbed, and they’re perfectly honest plumbers, how likely is it that we would see them looking at different rooms?” And we decided that given the relative rarity of home robberies in that area – the prior being strongly in favor of “we’re not going to get robbed tonight” – and the evidence being only moderate, not overwhelming, on balance we should not be that much more worried. We should not be that worried about getting robbed tonight.
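[Working through the screening arithmetic with the stand-in numbers used above – 1 in 1000 prevalence, a test that is wrong 1 time in 10. These are illustrative figures from the conversation, not real clinical statistics.]

```python
# Posterior probability of cancer given a positive result.
prevalence = 1 / 1000        # prior: 1 in 1,000 people has the condition
sensitivity = 0.9            # P(positive | has it)
false_positive_rate = 0.1    # P(positive | doesn't have it)

p_positive = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
posterior = sensitivity * prevalence / p_positive

print(round(posterior, 3))   # ≈ 0.009 – under 1%, not 70-90%
```

The answer is so low because, out of every 1000 people screened, the roughly 100 false positives among the 999 healthy people swamp the single true positive.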

David: This is – I love how – first of all, I love how insanely nerdy that whole thing is. Like, “We looked up base rates.”

Julia: Oh totally. It didn’t even occur to me as I was telling that story, that I was nerdy. Because we do that all the frigging time.

David: “We looked up base rates, and the–” I want to talk a little bit about artificial intelligence before we part. But before we get to that, I think this is a good moment to push away from the table and just kind of look at this and think: let’s say someone is listening to this. They’re not going to go look up the math, they’re not going to do test problems. For a person like that, what could you tell them is the benefit of thinking in this way? And what is sort of the philosophical version of Bayesian reasoning that you can take with you as advice about how to deal with the world?

Julia: Well, I’d say there are a number of non-mathematical principles that I use all the time – like, literally multiple times a day – that fall out of the math of Bayes’ rule, and don’t require you to plug numbers into a formula, or look up base rates on the internet, or anything like that. One of them is thinking in grayscale, which we discussed already. Sometimes you have to just remind yourself, “I should not be perfectly confident in any belief. I should have varying degrees of belief – varying degrees of grayscale, from white to black.” And when I encounter new pieces of evidence – new information, or even new experiences, or I find out that someone else has a certain opinion – all of that is evidence, in the sense that it is at least somewhat more likely to occur in a world in which my hypothesis is true than in a world in which my hypothesis is false, or vice versa. And so I should regularly – or ideally constantly – be updating the confidence that I have in that belief. I should be letting that belief fluctuate. And most evidence is not going to be definitive one way or the other. Most of the time, even if – say – the economy is doing better under a Republican President, or I find out that some smart person whose judgement I respect turns out to actually be a Republican – both of those things, I think, are at least some evidence for, like, Republican ideology being better for the country. A little bit of evidence. And I should acknowledge that. Because it might be that over time, enough such evidence will accumulate that I should zoom out and say, “Okay, on balance, I am significantly more confident in the Republican platform than I was 2 years ago.” But I’m not going to get to that point unless I have let in all of that small to medium size evidence.
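[One convenient way to see how lots of small pieces of evidence can add up – a sketch in odds form, with invented likelihood ratios. In odds form, Bayes’ rule says each piece of evidence multiplies your current odds by its likelihood ratio.]

```python
# Sequential updating in odds form (hypothetical likelihood ratios).
# Each ratio = P(evidence | hypothesis true) / P(evidence | hypothesis false).

def update_odds(prior_probability, likelihood_ratios):
    odds = prior_probability / (1 - prior_probability)
    for ratio in likelihood_ratios:
        odds *= ratio          # each weak piece of evidence nudges the odds
    return odds / (1 + odds)   # convert back to a probability

# Start at 20% confidence; four modest pieces of evidence, each only
# 1.5x likelier under the hypothesis than under its negation.
print(round(update_odds(0.20, [1.5, 1.5, 1.5, 1.5]), 2))  # ≈ 0.56
```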

David: And it’s also important, I think, to take into account that we gather evidence by testing reality. And sometimes we do that with our very imperfect senses, which go through a whole chain of imperfect cognitive processes. And sometimes we do it using devices and tools and things – like, in the cancer example, you’re using some sort of technology, either a machine or some chemistry. And when you test reality, how accurate the test is changes how we update our probabilities as well. ’Cause what actually is true never changes, no matter how good or bad the test is. And the probability itself is in the brain, not in the external world either.

Julia: Right. Well, what actually is true does change over time. Like this is one of the reasons why social science is so hard, right?

David: That’s true…

Julia: The reality of how people interact with each other and their environment, etc., actually changes – it’s really hard.

David: Yeah, that’s totally right, that’s true. And that does make things more difficult. But even if you’re talking about, like, celestial bodies out there that last for billions of years – the tests we use to sample that aspect of reality will affect our confidence. Sometimes it’s the test itself that also has to be considered whenever you’re making this analysis. And I would urge people – to sort of piggyback off of what you just said – since the social world is in flux, and it is a lot of interpretation and a lot of variables: not only is that a moving target, but the way that we typically sample that world is extremely imperfect. So you add those 2 together, and having that black-or-white view of things is a really, really poor way to create models of reality, compared to this Bayesian way of doing things – which is to update your hypothesis every single time you get what appears to be new evidence. And for me, that is a completely different way of looking at the world. And I find that I’ve met many people who kind of seem to be either in one way of thinking or in the other. They’re in the world of certainty, or they’re okay to live in the world of uncertainty. And a lot of the problems that we deal with, and a lot of the arguments that we have with people on the other side – I think a lot of times it comes down to people who live in that world of certainty versus people who live in that world of uncertainty, on that specific issue that we’re talking about.

Julia: Yeah.

David: Do you find that to be true?

Julia: I want to add one caveat, as usual. Which is that – I’m all for acknowledging uncertainty. And I think in general, people are more certain than they should be about things. But there is a failure mode here that someone – it might even have been Eliezer Yudkowsky – termed “The Fallacy of Gray.” Which is: it’s a mistake to think that things are all black and white, but it’s also a mistake to think that everything is the same shade of gray. And I find this a lot, often as an excuse, when people don’t want to acknowledge evidence. Like in cases where the evidence is pretty strong towards one hypothesis, but people don’t really want to acknowledge that, they’ll just say, “Well, we just can’t know. It’s just uncertain. That evidence could be flawed. Maybe – it could be. We don’t know.” And so there is a kind of discipline involved in really asking yourself, “Okay, what would I honestly expect? Honestly, how much would I expect this evidence to occur in one world versus another?” Really asking yourself that question, instead of just always defaulting to “it’s uncertain.”

David: It’s hard being a person.

Julia: It is hard being a person. And I realize I’ve made this sound very taxing, to be constantly updating your beliefs. Most of my beliefs – I have millions of beliefs probably, if you get down to the level of like, “Is Anna in the office today?”

David: Right.

Julia: “Are we going to have Indian or Thai food for lunch today?” Like, probably millions of beliefs at that level or bigger. I’m not constantly updating the vast majority of them, and I think that’s fine. This is more of just a framework. It’s a good rigor to have, to train yourself in. I think it can sometimes be impactful in day-to-day life, but it becomes more important when you’re dealing with, like, medium to large size decisions. And I think it helps to have trained in it in general, in order to think correctly about the larger decisions.

David: Yeah, totally. If you think of it as like, “Let’s pull out the Bayesian engine and crank it up. Start looking at this problem.” There are certain situations – if a social situation or a legal situation, if a piece of legislation harms people or affects people’s lives and their livelihoods and their children and their income and all these things – it’s a good time to bring in: let’s do the thing where we look at evidence, let evidence update our beliefs. And if it’s a matter of raw science, or if it’s a matter of something that has an impending doom on our civilization, like climate change – that sort of thing – that’s a good time to bring it out. But there’s a reason why the brain doesn’t do this all the time: it is extremely taxing cognitively to do this. And so that’s a good segue to this last thing I want to talk about before we part ways today. One of the things that actually has helped us kind of get a better grasp of how we believe, and what belief actually is neurologically, is messing around with artificial intelligence. And I’ve actually spoken to a couple of people for the book project I’m working on about artificial intelligence. And I also met the people up at MIT that do LifeNaut, which is the uploading-your-brain-to-the-internet-when-you-die project, to hopefully live forever. And those people, and the people who are working on some of the hard problems of artificial intelligence, are all dealing with belief in a way that the social sciences may not necessarily be working on those problems – ’cause they’re kind of doing it from a reductionist, bottom-up kind of approach. And you have spoken about this in the past. And one of the revelations that came out of artificial intelligence work that you talk about is the idea that the brain is probably not a Bayes’ net. So what is a Bayes’ net, and why is the human brain not one of those things?

Julia: Well, a Bayes’ net is just a way of representing relationships between variables – or between probabilities, you could say. So it’s just a mathematical or statistical concept, but it has been useful in a lot of artificial intelligence algorithms. So an example of a Bayes’ net would be: you have 3 variables. One of them is, “Did Bob get into Harvard?” Another one is, “Does Bob have a very high IQ?” And the last one is, “Is Bob a legacy at Harvard?” – like, did one of his parents or grandparents go to Harvard? Which is something that schools like Harvard use to decide whether to admit you. So you have these 3 variables. And they’re interdependent in the sense that, if you found out that Bob got into Harvard, that should increase the probabilities that you put on both the variable “Does Bob have a very high IQ?” and also the variable “Is Bob a legacy at Harvard?” And then – again – if you find out that Bob is a legacy at Harvard, that should decrease the probability that you put on Bob having a very high IQ, somewhat. And that is not because being a legacy makes you dumb. It’s because if you are a legacy, then it’s less necessary to have a high IQ, or a very high IQ, to get into Harvard. So you should have less confidence that that was the case for Bob. So in a Bayes’ net, the way it’s implemented in AI is: when you change one variable – you change the probability that you have on one of those variables – that change just automatically, instantly propagates throughout the network, to all of the probabilities that are related, right? You change the probability on the “Got into Harvard” node, and then the probabilities on the IQ and the legacy nodes instantly change in response. And in this case – as in many cases – I think it’s instructive to just compare how this process works in an AI versus how it works in a human brain. Sometimes the human brain will automatically change related probabilities. Like, if the probability I put on the node “Is Thai Delight Restaurant open today for lunch?” – which is the place I go for lunch – if that changes, because I see a sign saying “Closed,” then pretty automatically, the probability that I am putting on “Will I be eating Thai food today for lunch?” will change as well. That’s pretty automatic. But it often doesn’t happen. Like, the change often doesn’t propagate automatically.
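[To make the propagation concrete, here is a tiny brute-force sketch of the Harvard / IQ / legacy net described above. The conditional probabilities are invented purely to show the “explaining away” effect, not taken from the episode.]

```python
from itertools import product

# A toy Bayes' net over the 3 variables, with made-up probabilities.
P_IQ = 0.10       # P(Bob has a very high IQ)
P_LEGACY = 0.05   # P(Bob is a legacy at Harvard)
P_ADMIT = {       # P(Bob got into Harvard | high IQ, legacy)
    (True, True): 0.90,
    (True, False): 0.50,
    (False, True): 0.40,
    (False, False): 0.01,
}

def joint(iq, legacy, admit):
    """Probability of one full assignment of the 3 variables."""
    p = (P_IQ if iq else 1 - P_IQ) * (P_LEGACY if legacy else 1 - P_LEGACY)
    p_admit = P_ADMIT[(iq, legacy)]
    return p * (p_admit if admit else 1 - p_admit)

def p_iq_given(**evidence):
    """P(high IQ | evidence), by brute-force enumeration of the joint."""
    def matches(iq, legacy, admit):
        values = {"iq": iq, "legacy": legacy, "admit": admit}
        return all(values[name] == value for name, value in evidence.items())

    numerator = sum(joint(True, legacy, admit)
                    for legacy, admit in product([True, False], repeat=2)
                    if matches(True, legacy, admit))
    denominator = sum(joint(iq, legacy, admit)
                      for iq, legacy, admit in product([True, False], repeat=3)
                      if matches(iq, legacy, admit))
    return numerator / denominator

print(round(p_iq_given(), 2))                         # 0.1  – the prior
print(round(p_iq_given(admit=True), 2))               # 0.66 – admission raises it
print(round(p_iq_given(admit=True, legacy=True), 2))  # 0.2  – "legacy" explains it away
```

In a real AI system the propagation would be done with message passing rather than full enumeration, but the qualitative behavior – every related node shifting instantly when one probability changes – is the point being made here.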

David: Yes, this is the thing. Okay, I mean – you totally blew my mind. When I listened to you talk about this – I actually have in front of me these ridiculous, A Beautiful Mind-style Nash scrawls all over this piece of paper. Because I need it–

Julia: Oh, should I call for help?

David: No, I need it. But I needed to– This was something that I desperately needed, to make sense of something I was witnessing while interviewing people who have somehow, in the process of a very strong core belief change– It was bothering me that the propagation across the network that you talk about wasn’t happening. ’Cause it was like, this doesn’t make any sense to me, ’cause all these nodes are connected. One example would be: someone deeply believes – has deep religious beliefs – believes God, the Christian Abrahamic God, is a real dude with a beard, floating on a cloud, that whole thing. And they also believe homosexuality, LGBT behaviors, are all bad and sinful and evil. So these are 2 nodes. And now our 3rd node might be, “Is same sex marriage good or bad?” So they have at least 3 different beliefs. And they change their belief about same sex marriage, or they change their belief about LGBT individuals, and sometimes those 2 belief nodes are linked in a way that the probabilities move around – but sometimes not. And then what I see is that each one of these nodes is completely independent. In fact, if you completely lose one of these nodes – if you become an atheist, or you decide same sex marriage is okay, or you decide that LGBT individuals are not abominations – none of those will necessarily affect the other 2 nodes. Like, you can keep beliefs that seem to be necessarily connected to other beliefs. And that didn’t make sense to me, until you laid out the fact that the brain is not a Bayes’ net. That’s not how human beings update the vast collection of beliefs they use to make sense of the world.

Julia: Right. And because it would be very computationally expensive, right? Like, every time you get new information, to have to ask yourself not only, “Should I update the probability of the directly related belief?” Like, if Anna, my colleague, looks upset today – should I update my belief not just about “Is Anna upset today?”, but also about “How upset is Anna in general?”, and also “How upset are people in general?”, and also “How good am I at reading people in general?” These are all related beliefs, right? And I’m sure I could come up with thousands more that are at least a little bit dependent on the question, “Is Anna upset today?” But it would just be paralyzing to be always thinking about all of those beliefs. And so the brain would never get anything else done.

David: Yeah, and this – sorry to interrupt you there. But this, to me, is like one of the greatest frustrations that I’ve had, arguing with people both in meatspace and cyberspace. I have had this deep frustration, where you feel like – if you can just get somebody to see this one thing, then everything else will naturally follow.

Julia: Right, logically – it should.

David: Yeah, I see that. I see that constantly. Like right now, there’s a lot of talk online about these new laws that have come up – these religious freedom laws that make it where people can discriminate against LGBT people. And I keep seeing these arguments – especially on Facebook – where people will try so hard to catch somebody in an obvious contradiction in the raw text of the Bible. Like, “Well, it also says you shouldn’t wear this. And it also says you shouldn’t eat this.” And I can see that the goal of that arguer is to catch that person in that contradiction. And they feel like it’s going to be this cascading domino effect, where they’re going to be like, “Oh, you know what? That is a contradiction, maybe this whole thing isn’t right? Maybe I’m not right about this?” Maybe I should delete my belief in my religious convictions, and then that means – obviously – that same sex marriage is okay, and this law is bad. And you can feel that this person is trying to attack one node, with the intention of sabotaging the whole network of nodes.

Julia: Right. That’ll be the Jenga block that I have to pull to–

David: The Jenga block, yes. That’s such a great way to look at it. But that is just simply not how brains work, it seems.

Julia: Yeah, and I keep this principle in mind. It’s not that I’m trying to be as close to an AI as possible, constantly updating all of my related beliefs. It’s more that when a belief comes up as mattering – something depends on it, like we have to make a decision based on it, or someone disagrees with me – all of a sudden I’m like, “Hmm, why do I believe this?” It’s instinctive for me now to look for the source of that belief, to see if it’s still relevant, or if I still endorse that source. Or if that source is long since defunct – long since been deleted – and I just had never noticed that I should also delete, or significantly downgrade my confidence in, this related belief. So it’s just a thing that I check frequently, that I didn’t check before I had this concept.

David: Everyone is going to want–