Your Brain at Work

The Neuroscience of Cognitive Bias

Episode Summary

Uncertainty surrounds the future of DEI. Many organizations plan to continue their DEI work, yet they're likely to face a number of challenges along the way. To evolve the work successfully, it's important to understand why DEI efforts became a priority in the first place, and whether they're important enough for organizations to address under different packaging. What are the short- and long-term impacts of rolling back the work? Although DEI programs are sometimes politicized, limiting their effectiveness, there are several reasons why organizations need to address certain quirks of the human brain to perform at their best. Join Drs. David Rock and Emma Sarro as they discuss the core brain processes that make DEI work indispensable. A focus for this discussion will be our cognitive biases: mental shortcuts we've evolved to make decisions quickly and efficiently. Unfortunately, cognitive biases can also cause us to make poor decisions that negatively impact our employees and businesses. We'll explore the major biases that affect our work and how to mitigate them to make better business decisions.

Episode Transcription

WEBVTT

 

1

00:00:04.150 --> 00:00:09.260

Emma Sarro: Hello! Welcome! Welcome to another week of Your Brain at Work.

 

2

00:00:09.450 --> 00:00:11.349

Emma Sarro: I see everyone's joining.

 

3

00:00:11.420 --> 00:00:38.569

Emma Sarro: If you don't recognize me, my name is Emma Sarro. I head up the research team here. I'm standing in again for Erin, who is taking some much-needed time off today. She will be back soon, though. So I'll let everyone continue to join. If you know us, you know we love chat, so please drop in the chat box where you're coming in from today. I'm here from just outside of New York City. It's still nice here.

 

4

00:00:38.630 --> 00:00:42.280

Emma Sarro: I think next week it's not going to be as nice. But

 

5

00:00:42.450 --> 00:00:45.459

Emma Sarro: love seeing this in the chat. Thank you all.

 

6

00:00:46.780 --> 00:01:09.540

Emma Sarro: All right. So again, for anyone who doesn't know me, my name is Dr. Emma Sarro. I am the senior director of research here at NLI; I've been here for almost four years. We're happy to have back all of our regulars, and to any of our newcomers, welcome for the first time; we're excited to have you here with us. We love hearing where everyone's coming in from, so please continue to drop that in the chat

 

7

00:01:09.650 --> 00:01:28.060

Emma Sarro: and get comfortable. So today we're going to be discussing an issue on many of our minds: the future of our DEI work, the short- and long-term impacts of rolling back the work, and the core brain processes that make DEI work indispensable to organizations, really focusing on what

 

8

00:01:28.060 --> 00:01:42.010

Emma Sarro: some call the habits of thought, or our cognitive biases. And so we're going to be splitting this up into pieces: we're going to have more of a high-level discussion, and then I'm going to dive into some of the recent research that really pulls this work into organizations.

 

9

00:01:42.050 --> 00:01:42.870

Emma Sarro: So,

 

10

00:01:43.780 --> 00:02:08.010

Emma Sarro: we suggest for this: try not to be distracted. Put on Do Not Disturb; quit your email and messaging apps so you can really get the most out of today. I turned my phone off, I took my watch off, so I'm not getting any kind of buzzes. It definitely helps. But we do love interaction, so please open the chat, open the Q&A, and drop your thoughts in. So I'm going to introduce our one speaker with me today. You all know me.

 

11

00:02:08.009 --> 00:02:30.999

Emma Sarro: I'm excited to welcome back someone you know well. He coined the term neuroleadership when he co-founded NLI over two decades ago. He's got a professional doctorate, four successful books under his name and one in the works, and a multitude of bylines ranging from Harvard Business Review to the New York Times, and many more. Welcome back to our co-founder and CEO of the NeuroLeadership Institute, Dr. David Rock.

 

12

00:02:31.330 --> 00:02:35.379

David Rock: Great to be here with you, and what a timely and important conversation.

 

13

00:02:35.380 --> 00:02:44.590

Emma Sarro: Yeah, absolutely. And you know, we scraped by while you were away, but we're happy to have you back. Hopefully it was enjoyable for anyone who joined the last couple of weeks.

 

14

00:02:44.690 --> 00:03:13.360

Emma Sarro: So what we're talking about today, as I said, we're going to do things a little bit differently. We're going to start a bit high level: DEI in the world today, really focusing on our journey to understand some of what we call the quirks of the brain. Not all of its processes help us in every situation, even though they evolved, meaning they helped us survive. In today's world, some of these processes can cause us, in this case, to make decisions or act in ways that don't always support our work.

 

15

00:03:13.590 --> 00:03:24.250

Emma Sarro: So just to pass this over to David: thinking about DEI, why will, or should, DEI continue to exist in some form?

 

16

00:03:24.950 --> 00:03:42.250

David Rock: Yeah, it's a great question. And you know, there's a perspective that this is a social issue. And if you're someone who feels that you've been treated unfairly, then of course you're passionate about DEI. There are a lot of folks who are big supporters of that mission,

 

17

00:03:42.310 --> 00:04:10.889

David Rock: and for a lot of people it feels like a social issue. And it is a social issue, right? A big percentage of the issue is a social issue. But there's a separate issue. It's connected, but it's different. There's a separate issue, which is that there are quirks in how the brain functions that actually make it really important to keep addressing this. And if you're not in a competitive organization, you're not necessarily trying to innovate.

 

18

00:04:11.000 --> 00:04:38.870

David Rock: You've got market hegemony; you can just do whatever. You don't need everyone collaborating really well, innovating to the best of their ability. Maybe you don't really need this, to an extent. But if you're a company that's competing in tough markets, you need to innovate intentionally. You need to have the best possible talent, the smartest possible teams. Then everything points to this being really, really important for that. And

 

19

00:04:39.304 --> 00:05:06.500

David Rock: so there's a very clear business case for why DEI generally is a good thing: better innovation, better go-to-market strategies, a better talent pool. There are all sorts of things, right? You can literally spend three hours walking through the very clear evidence that it's a good thing. And then you start to wonder: so why don't we do it?

 

20

00:05:06.500 --> 00:05:09.640

David Rock: Why isn't it just kind of a fait accompli

 

21

00:05:09.640 --> 00:05:21.010

David Rock: that there is rich diversity and a rich sense of inclusion? When you dig into that, it's a separate issue, and it gets a little bit complicated. But the bottom line is that

 

22

00:05:21.496 --> 00:05:24.990

David Rock: when you're in a diverse and inclusive team,

 

23

00:05:25.461 --> 00:05:32.119

David Rock: you actually are more innovative. You are actually smarter; the IQ of that team goes up.

 

24

00:05:32.575 --> 00:05:44.320

David Rock: You actually perform better, right? And interestingly, that's across all three things that people do: find problems, solve logical problems, and be creative. Right? So that diverse and inclusive team

 

25

00:05:44.570 --> 00:05:52.680

David Rock: categorically, take it to the bank, 100%, is smarter. Here's the problem: it doesn't feel like that. It actually feels the opposite,

 

26

00:05:52.770 --> 00:06:18.250

David Rock: right? So this is something we published in HBR a few years ago as a piece called "Diverse Teams Feel Less Comfortable, and That's Why They Perform Better." And the discomfort in that team is actually necessary. It's necessary that people are actually a little bit uncomfortable. And so what happens? You know, I sort of put all that together in my head, and I go, all right. So basically, if we don't keep doing the good work of DEI,

 

27

00:06:18.508 --> 00:06:27.289

David Rock: we build homogeneous teams, because that's who we like to go out to dinner with: people who think like us, right? That's who we like to work with: people who think like us.

 

28

00:06:27.480 --> 00:06:39.419

David Rock: We don't want to have to work hard to explain ourselves or understand each other every day, right? We just want people who understand us. So we put homogeneous teams together, and as a result you get much worse

 

29

00:06:39.550 --> 00:06:51.819

David Rock: innovation, all sorts of things, right? So basically, there's a mechanism of the brain at the heart of this. In very simple terms: we equate cognitive effort with threat.

 

30

00:06:51.900 --> 00:07:15.639

David Rock: And if you imagine, for millions of years most of us barely lived to 20, you know, if that, let's assume evolution is real, through war and famine. And thinking, like using your prefrontal cortex, requires a lot of effort and energy, and we're essentially all excited when we find this little shortcut that says, oh, I don't have to think, I can cut and paste

 

31

00:07:15.850 --> 00:07:23.239

David Rock: this paragraph from this other project I did, right? It's intrinsically rewarding. The brain fires up when we don't have to think.

 

32

00:07:23.520 --> 00:07:49.879

David Rock: The opposite's true: the brain gets into a threat state when we have to think, and it's treated as something bad. So a lack of fluency in any domain is tagged by the brain as a bad thing. Essentially, diverse and inclusive teams don't have as much fluency; it doesn't just flow easily. You have to think. And so people equate that with bad, the brain equates that with bad, and we build homogeneous teams. So if we don't push against that with

 

33

00:07:49.960 --> 00:08:12.680

David Rock: habits that people have to build, and systems, then we end up with homogeneous teams. And like I said, maybe homogeneous teams that aren't innovating don't matter in your organization, and we're back to deciding whether it's a valid social issue or not, which I'm definitely not going to debate, because I'm absolutely biased; I absolutely believe it's a valid social issue. But on the science side it's a whole different kind of debate.

 

34

00:08:12.960 --> 00:08:34.649

Emma Sarro: Yeah, so interesting, and really interesting to think about, just the fact that over time we might see these long-term effects roll out, and we start to feel them. Because right away, I mean, if we start to place less focus on DEI and companies start to cancel their work immediately, will we really feel it, or does it take a while for it to come out?

 

35

00:08:34.659 --> 00:08:57.389

David Rock: One of the challenges with this is, I think there won't be a lot of short-term effects, with the exception of companies who, you know, publicly state they're not supporting DEI. And you see the difference between Costco and Target: literally, Costco's got all this more business, and Target's got, you know, empty stores because of public statements. So you'll see some short-term impact from that. But aside from people voting with their feet,

 

36

00:08:57.389 --> 00:09:10.089

David Rock: you won't see a lot of short-term impact, because these aren't short-term effects, right? Maybe the short-term effects will be people saying, I'm enjoying coming to work more because everyone understands me, I don't have to explain myself, right?

 

37

00:09:10.089 --> 00:09:12.429

David Rock: But it's not,

 

38

00:09:12.840 --> 00:09:36.269

David Rock: you know. It's problematic in the medium term already, and in the longer term, and it immediately starts to be an issue that you'll start to see in the pace of innovation. I mean, the pinnacle of this right now is AI innovation, right? And, you know, the Chinese coming up with a radically faster, cheaper way of building whole models has really

 

39

00:09:36.519 --> 00:09:47.949

David Rock: created havoc, you know, in the entire industry of technology. Right? So this is the power of innovation. Innovation really, really matters, particularly in this digital world.

 

40

00:09:48.109 --> 00:10:07.239

David Rock: And speed of innovation comes from really good debate between really smart people challenging each other; it doesn't come from everyone saying yes to each other. So I think medium to long term we're going to see pretty meaningful impact, but short term we might not, and that's a little bit of a trap.

 

41

00:10:07.480 --> 00:10:26.990

Emma Sarro: Yeah, so interesting. And right, it takes intention. So especially in a time when we're rushing to innovate, rushing to make decisions quickly, and rushing to stay ahead of the game, we're less likely to pause and be intentional about who's on the team, who's making the decision, and how the decision is being made. So it's almost like we're compounding the effect.

 

42

00:10:26.990 --> 00:10:56.500

David Rock: Exactly. Yeah. Bias is even more present when you're rushing. And you know, I was thinking about quirks of the brain, as you labeled them, these kind of quirks. This is one: basically, we equate effort with bad, right? We feel like effort is bad. But I'm working on a new book at the moment, basically really building out the SCARF model at a whole different level.

 

43

00:10:56.500 --> 00:11:06.139

David Rock: It's just incredible how relevant it is to so many things. And I'm at this point of sort of summarizing, and I think that the whole world

 

44

00:11:06.260 --> 00:11:25.740

David Rock: struggles with three fundamental quirks of the brain that we just don't have much capacity for, and AI is trying to solve the first one. It's basically limited capacity to hold information in mind, right? So, limited prefrontal networks, and bias is an outcome of that. Bias is an outcome of basically limited capacity to think well,

 

45

00:11:25.740 --> 00:11:44.290

David Rock: to think deeply. And so we sort of have to go on biases. But that one issue explains a huge amount of the world: basically, limited capacity. The second is basically limited self-regulation, which is impulse control. And again, that explains a huge amount. And then the third is

 

46

00:11:44.290 --> 00:12:12.690

David Rock: the challenge of understanding people, basically the challenge of social cognition: trying to actually understand other people who are, you know, different to you. And I think limited capacity, limited self-regulation, and limited social cognition are three things that we all struggle with, and bias is an outcome of the first, gets worse because of the second, and also links to the third. So it's this really interesting challenge.

 

47

00:12:13.510 --> 00:12:29.579

Emma Sarro: Yeah, absolutely. So this challenge, right, we've been deep in it; we've been working to understand and research it for years. But how did we initially become interested in this? How did NLI? Because it wasn't initially DEI; it was initially the quirks of the brain.

 

48

00:12:29.580 --> 00:12:46.220

David Rock: Yeah, I never set out to build a DEI practice. At the time I didn't know much about DEI practice. We started researching this nearly 15 years ago, and we published our research on bias literally 10 years ago, in 2015. But we did not set out to build a DEI practice at all.

 

49

00:12:46.260 --> 00:13:07.329

David Rock: And I'm passionate about the topic, I have all sorts of personal connections to it, but it's not what we were trying to do. What we basically did, very scientifically, is every year we would ask heads of talent and CEOs and CHROs, what's keeping you up at night? And we would systematically synthesize that data every year, for quite a long time,

 

50

00:13:07.682 --> 00:13:18.320

David Rock: to find the most important things to work on. Right? Like, we've got researchers; what should we study? And the issue of bias came up as actually a really, really big outlier

 

51

00:13:18.450 --> 00:13:32.529

David Rock: nearly 15 years ago, and the conditions that it met were really interesting. It was a big problem, it was likely to get bigger based on trends, a lot of resources were being spent on it, and nothing was changing.

 

52

00:13:32.710 --> 00:13:56.539

David Rock: And when you put those conditions together, you don't actually get that many things like that: a really, really big problem, getting bigger, lots of resources thrown at it, and nothing seems to change. But there was another really fascinating thing we'd never seen before, which was that the literature at the time, and still pretty much now, said that there is no way of making people less biased.

 

53

00:13:57.084 --> 00:14:01.579

David Rock: Basically, the literature, from Wikipedia all the way through to academia,

 

54

00:14:02.530 --> 00:14:06.570

David Rock: literally said: this is just who we are; we can't do much about it.

 

55

00:14:06.640 --> 00:14:22.900

David Rock: And so we thought that was a wonderful challenge. We thought we could solve it in a year, and four and a half years later we finally published something. But that's kind of how we got into it. And after we started really doing something meaningful about bias and showing real impact,

 

56

00:14:22.900 --> 00:14:42.610

David Rock: companies started calling us and saying, can you help us with inclusion? And so we did a couple of years of research on that. And then people started saying, can you help us with speaking up? And we did some research on that. So we basically started with just what the data said. It was a sort of negative unicorn; it was a very big outlier

 

57

00:14:42.610 --> 00:14:49.960

David Rock: in many, many ways, and it was literally a four-and-a-half-year research project, incredibly complicated to pull together.

 

58

00:14:50.140 --> 00:14:59.570

Emma Sarro: Yeah. So what's different about trying to solve this issue, then? Was it just that it's so widespread, ubiquitous across all people?

 

59

00:14:59.740 --> 00:15:14.739

David Rock: No, it's just in its own category, right? So if I say to you, let's work on how to be a better presenter, for example. It's probably a bad example, because it gets too meta. But say I said, let's work out, you know:

 

60

00:15:14.850 --> 00:15:32.890

David Rock: notice the energy level you have, notice how many stories you tell, notice what you make eye contact with. All right, I'll give you a bunch of things to try, and you'll notice them when you next go to present, right? So pretty much everything we do in leadership development, we give people,

 

61

00:15:33.120 --> 00:15:54.870

David Rock: in essence, what we think of as if-then plans: if you need to do this, then do this instead. And you can notice the opportunity really, really clearly, right? So almost everything in leadership development, you teach people things, and they notice in real time that they're doing the wrong thing and should do something different. It doesn't mean they always do it right. That's not the case with bias.

 

62

00:15:55.000 --> 00:16:01.389

David Rock: With bias, it's so fascinating: trying to make it conscious basically doesn't work,

 

63

00:16:01.880 --> 00:16:21.569

David Rock: whereas with most things, you try to make people conscious of them. Absolutely: if you say to someone, try and count how many times you say "you know" in a presentation, they'll be super conscious of that. Everyone listening now is super conscious of that for the next week. But bias doesn't work like that. And what's interesting

 

64

00:16:21.570 --> 00:16:34.910

David Rock: is that it's not an awareness-of-bias problem, because you can know everything about bias and still not see it happening. It's not a motivational issue, because people can really, really try. And it's not an intelligence issue. In fact,

 

65

00:16:35.431 --> 00:16:38.030

David Rock: more intelligent people probably have more biases.

 

66

00:16:38.469 --> 00:16:43.220

David Rock: So it's not those three things. What is it? It's basically a cognitive constraint,

 

67

00:16:43.430 --> 00:16:55.770

David Rock: meaning you just can't do it, no matter how hard you try, how much you understand it, and how motivated you are. You do see some, but you only see a fraction of the biases you have.

 

68

00:16:55.980 --> 00:17:06.710

David Rock: And it's not that hard to explain this. When I explain it to leaders, they're like, oh, that makes total sense. I'm like, can you add up 56 and 79, and multiply 12 by 8, at the same moment,

 

69

00:17:06.790 --> 00:17:32.010

David Rock: in the same second? They're like, no, definitely not. What if I offered you a million dollars to do it? Could you do it? We'd scan your brain; we'd know for sure. No, I can't do it. You can't; you physically can't. It's the way our attention works: you can't do two active working memory tasks at the same time. Now, right now you're all doing thousands of tasks at the same time, and none of them requires working memory:

 

70

00:17:32.070 --> 00:17:52.389

David Rock: keeping your heart beating, breathing, staying seated, etc. When it comes to conscious tasks, our working memory, which is back to that first quirk, is really, really limited. And while we're working on one thing, like, should I hire this person or this person, we have no extra capacity to be thinking, I wonder if I'm being biased.

 

71

00:17:52.550 --> 00:18:10.770

David Rock: Our brain is fully immersed in, do I hire this person or this person? So after the fact, sometimes you can look back and see bias. Sometimes. You still won't see a lot of it. If you take a break from the thought, you'll see more. But what's interesting is, you can see other people being biased in real time,

 

72

00:18:11.195 --> 00:18:25.869

David Rock: because your brain's not doing the decision making. You can see the patterns. You'd be like, oh, look at that person; they just assumed that and didn't double-click, right? So bias is very, very hard to do something about in yourself.

 

73

00:18:26.588 --> 00:18:35.230

David Rock: And this was our big insight: we basically gave up on trying to make people less biased, and we said, what if we could make teams less biased?

 

74

00:18:35.460 --> 00:19:04.240

David Rock: And we realized that was actually a huge breakthrough idea. But the way to make teams less biased is they needed a shared common language that was really sticky. So it wasn't about awareness; it wasn't about mindfulness. Yes, mindfulness slightly increases bias detection, but not meaningfully enough. That's the big, big difference: it's not something where you can increase self-awareness and people suddenly change. What happens is people see everyone else's biases and get annoyed.

 

75

00:19:04.400 --> 00:19:21.789

David Rock: When we first started down this research, we basically found findings that said we shouldn't teach people about bias, because it just annoys everyone to see everyone else's biases. And we thought, okay, well, we can't build bias training that just makes the whole problem worse and doesn't actually do anything.

 

76

00:19:21.890 --> 00:19:25.040

David Rock: And then we went away and worked out that it has to be at a team level.

 

77

00:19:25.380 --> 00:19:39.989

Emma Sarro: Yeah. So what do people do? We have a lot of comments coming in like, well, can't you just self-reflect? Don't we need to look at ourselves? So what do we tend to tell people to do? And we put that into one of our solutions, which we'll talk about. But what do we tend to tell them?

 

78

00:19:39.990 --> 00:20:00.349

David Rock: Let me walk through the research process to answer that; I think it's a great question. The basic strategies right now scrape the surface of bias. A good metaphor is, someone's not feeling well, and you basically say, you know, go and have a lie down. Is that going to help? Yeah, in a lot of instances that's going to help.

 

79

00:20:00.350 --> 00:20:16.129

David Rock: But if a person's got gangrene, or is having a heart attack, or has a serious virus or something else, that's not necessarily the best advice, right? We're using these very blunt-instrument strategies that often work but don't do that much.

 

80

00:20:16.691 --> 00:20:32.970

David Rock: But they feel like we're doing something. So yeah, let me answer that with the process of the research, and then I'll let you dig into the real science of the whole thing. Basically, what happened was we spun our wheels for like a year and a half

 

81

00:20:33.050 --> 00:20:36.249

David Rock: trying to. And this is between like 2010 and 2015,

 

82

00:20:36.310 --> 00:20:54.499

David Rock: just basically trying to find a way into doing something about bias, and we just couldn't find anything that really helped, until we had this insight that it was at the team level. Then, when you have that insight, you have to say, oh, how do we get a team doing this? There are a hundred biases, plus or minus 50. You can't teach a team

 

83

00:20:54.500 --> 00:21:06.490

David Rock: 100 biases, right? So we thought, all right, we've got to simplify the biases. And I'd been reading Daniel Kahneman's Thinking, Fast and Slow, and he had a clue in there. He actually said, hey,

 

84

00:21:06.930 --> 00:21:24.730

David Rock: bias is impossible to solve, but maybe if we understood the brain and how biases are created in the brain, maybe there's a solution, right? And so we followed that thread, and we basically started a roughly three-year process of trying to categorize those 100 or so biases into underpinning brain structures.

 

85

00:21:24.750 --> 00:21:32.330

David Rock: And it was a really difficult process. What was really fascinating about the journey is that it was actually very clear that you could do that,

 

86

00:21:32.330 --> 00:21:56.910

David Rock: although there was some debate, and all this, and it took us a while. We actually published, then found that the framework wasn't being helpful, and we retracted it. Then we went back to the drawing board, ironically with a more diverse team, and we published a much richer framework, which you'll walk people through shortly, called the SEEDS model. But basically, what we found, separate to the fact that you can't make individuals less biased,

 

87

00:21:57.910 --> 00:22:16.860

David Rock: and this goes to the question of what we should tell people to do, is that biases are driven by such different cognitive processes that there's no universal strategy. It is literally like someone saying, I'm not feeling well: you need to know what it is to decide whether to give them an antibiotic,

 

88

00:22:16.860 --> 00:22:43.870

David Rock: a big glass of water, a meal, a nap, or amputate something. That was a little intense, that last example. But you need to know what's going on, right? And it's the same with bias. There's a whole category of bias that's basically about the way we categorize people as similar or different. There's a whole category of bias about avoiding danger. There's a whole category of bias about being too sloppy. So there are really different categories of bias. And what we found

 

89

00:22:43.930 --> 00:23:08.280

David Rock: is that you need to know the category of bias to actually do something meaningful. So we didn't find a universal strategy. If there was one, it would be: get lots of different people who see the world differently to you to give you feedback on your decision. But that's an unnecessary intensity for some of the biases; there are some where you don't need to do that. Anyway, that's what we found. And

 

90

00:23:09.370 --> 00:23:27.350

David Rock: something like an accountability partner for bias, where you've got shared language, is going to be dramatically more effective than any sort of self-awareness strategy. So whether it's your partner or a friend or your team, having a shared language with other people where you can gently, in a non-weaponized way,

 

91

00:23:27.440 --> 00:23:44.499

David Rock: call out when these happen, that's the most effective strategy. And it can be at a team level, or with a partner, whatever. That's by far the better strategy. It's a big, big outlier compared to anything you can try and do with yourself. That's basically the Cliffs Notes version there.

 

92

00:23:44.820 --> 00:24:04.589

Emma Sarro: Yeah, that's great. And we've launched this in many organizations, hundreds of organizations: our SEEDS model, which I'll talk about, but also some of our solutions, specifically Decide, our first solution on how to make better decisions by using this model and mitigating bias. So what have we learned about this since then?

 

93

00:24:04.590 --> 00:24:25.949

David Rock: Yeah, just when we were preparing for this, I was like, oh my gosh, we published this 10 years ago. We should be doing a 10-year anniversary session on the SEEDS model. We might do that at a later time, and we'll study our data. We've actually got a huge amount of data on bias mitigation, in terms of how people build the habits, what kinds of habits they build, all sorts of really interesting stuff.

 

94

00:24:26.455 --> 00:24:52.439

David Rock: So in 10 years we've learned a lot. One thing, just at a high level: don't go in and teach people about just people decisions, because the business decisions are just as important, and people are probably more likely to get on board if they realize they're making bad financial decisions. People don't feel like they're making the wrong people decisions; we sort of trust our gut about people.

 

95

00:24:52.440 --> 00:25:17.429

David Rock: But many of us have had the experience of making a bad financial decision and wondering, why did I do that? Right? So it's easy to tap into people's intrinsic motivations if you talk about business decisions first and then people decisions. And we've been continuing to hone how we get people to understand this. One thing that's changed in the last five years is a lot of companies are just licensing

 

96

00:25:17.430 --> 00:25:23.420

David Rock: the SEEDS model, because what we found is that if you look at a whole curriculum for developing leaders,

 

97

00:25:23.420 --> 00:25:46.399

David Rock: in about a third of the different modules, SEEDS is a really helpful tool. The obvious one is decision making, but also strategic thinking, conflict resolution, feedback, performance assessment, and hiring, obviously. There are all these different skills that having SEEDS involved is really, really helpful for; we worked it out to roughly a third. So

 

98

00:25:46.400 --> 00:26:00.912

David Rock: in the last five years a lot of companies will just license the framework and some of our assets and plug them into their architecture, and that's been an interesting model. About half our work now is roughly licensing, which I think is a good model for us and for everyone.

 

99

00:26:01.340 --> 00:26:28.750

David Rock: The other thing we've learned is that in 10 years this has stood the test of time. Literally no one else has done what we've done, because it's really hard to do, and it's possibly impossible to do it better. That's a little arrogant; maybe I should slap myself for that. But we literally did publish something and then did something better. There might be a better model; no one's done it. Don't take this as a challenge. It's really hard. But it's really useful.

 

100

00:26:29.098 --> 00:26:38.551

David Rock: And what we know is about 78% of people, I think we just did a refresh on the data and it was 75 or 76%, and this is tens of thousands of people,

 

101

00:26:39.710 --> 00:26:57.910

David Rock: basically use it weekly. Once they learn it, at least once a week they're calling it out or doing something differently as a result. That's a big win. If 40 or 50 times a year people are doing something they weren't doing before, that's a big win when you scale it,

 

102

00:26:58.324 --> 00:27:19.519

David Rock: and all of that. But you know, I've introduced this to a lot of senior teams, a lot of senior leaders, and often the way this kind of initiative starts is a research briefing for the top team before we've even committed to anything, just like, hey, let's walk through the research. The most fun I had was with a big movie studio: the CEO, the whole team. Basically, they said, hmm,

 

103

00:27:19.690 --> 00:27:49.142

David Rock: let's unpack the bad greenlighting decisions we made using this model, one way or the other: the ones we did that we shouldn't have, and the ones we should have done that we didn't. So fun. We spent a whole afternoon looking at why they didn't fund some huge movies, and we detected the biases and all this stuff. It was really fascinating. So anyway, it's really helpful. It's a little scary, you know; it's a lot to see that all our brains have a whole lot of bias.

 

104

00:27:49.480 --> 00:28:11.040

David Rock: But it's a really useful thing. We've helped millions and millions of people now, since 2015, notice other people's biases. And the best thing is not to keep finger-pointing, but to put some kind of system in place that reduces the chance of bias at the source, in your processes. That's ultimately the best way

 

105

00:28:11.040 --> 00:28:34.879

David Rock: to address bias. So, circling all the way back: even if doing DEI work becomes, you know, literally illegal in North America, in the US, you've still got to find a way to do this, because we have huge amounts of bias, and we make really poor business decisions. So even if we have to name it something else and take it completely out of DEI,

 

106

00:28:34.900 --> 00:28:37.889

David Rock: it's not even just about putting the best teams together.

 

107

00:28:38.030 --> 00:29:01.359

David Rock: Really, it's super helpful for just making the right decisions overall, and it doesn't slow things down. It just provides a filter that helps you be more confident in your decisions overall. So I think that's a great place to end. I'm going to hand over to you, Emma; let's take people through the SEEDS model. It's a big idea. I'll just let you wrap it up, and

 

108

00:29:01.510 --> 00:29:02.030

David Rock: thanks

 

109

00:29:02.030 --> 00:29:17.670

David Rock: everyone for being here. We'll see you next week. What are we doing next week? We're doing a really fun one: "mandatory to compelling," right? How to make an individual program or a whole curriculum really compelling, so everyone actually comes in. We've got a science to that; we'll talk about it next week. Thanks so much, Emma. Thanks, everyone. Bye, bye.

 

110

00:29:18.390 --> 00:29:41.989

Emma Sarro: All right. So he's given me the reins today, which is great, because normally I'm interviewing him. But this is such an interesting topic. And what I'm going to bring into the second piece of this, for the next 20 minutes or so, is some really interesting research that proves the case for SEEDS. So SEEDS is great. In some ways, yes, teaching something new can be scary. But

 

111

00:29:41.990 --> 00:29:56.150

Emma Sarro: what SEEDS does, what this model does, is provide a really simple framework for something overwhelming: the number of biases that affect all of our decisions. I mean, if you think about the number of decisions that we make a day, it's in the tens of thousands.

 

112

00:29:56.150 --> 00:30:01.339

Emma Sarro: Most of them are made with some kind of bias. And thank goodness, because

 

113

00:30:01.340 --> 00:30:29.050

Emma Sarro: we make decisions incredibly quickly. If we had to think and pause for every single decision we made, we would never make it past breakfast; we'd be thinking about every possible scenario. So yes, it's great to have bias when you're thinking about where to go for dinner, because you go to your favorite spot, and you know you're going to get a great outcome. It's going to be a great dinner, because you always go to the same spot. Yes, you're likely missing out on some other great spots. But

 

114

00:30:29.280 --> 00:30:54.230

Emma Sarro: you don't have to spend hours making that decision. So it is something that helps to simplify all of the 150-plus biases that affect our brain. And yes, everyone is biased, and that's an evolved response. It evolved for good reason: we're able to make efficient decisions and not use all of our cognitive resources to look at every possibility, and it likely allowed us to survive. So that's why they're in place. And

 

115

00:30:54.230 --> 00:31:19.140

Emma Sarro: we don't have the ability to physically remove that evolved process. So we have to build in processes, as David said, to help us make those decisions quickly while mitigating the bias along the way, so we don't even have to think about the fact that we could have made a biased decision. We're just following some process that we put in place to help us mitigate it before we even have a chance to think about it. Because a lot of times it's

 

116

00:31:19.140 --> 00:31:36.829

Emma Sarro: in self-reflection or after-the-fact review that we realize, oh, we made that decision, and we probably missed an opportunity along the way. So, before even getting to that point, what I'm going to do is spend a few minutes going through some of the

 

117

00:31:36.830 --> 00:32:01.750

Emma Sarro: workplace examples, mostly from academic studies, really proving the case that these biases exist and where they come up, and also what's going on in the brain. Because, as David mentioned, what we did along the way is take all of these biases, which were being studied in multiple labs across the world in different scenarios, and try to categorize them based on what the underlying core processes are.

 

118

00:32:01.750 --> 00:32:24.229

Emma Sarro: So, for instance, we often talk about the organizing principle of the brain: our ability to detect threats versus rewards, to differentiate them, and to respond to potential threats. That is one kind of bias category. And that's very different from how we categorize people as similar to us or on the same team; it's a different set of neural circuits. So they're different decisions, and they come up in different ways.

 

119

00:32:24.230 --> 00:32:42.469

Emma Sarro: Both can have a negative impact on outcomes, and sometimes they're great, depending on what kind of decision you're making. But in the workplace it's about trying to find ways to mitigate certain processes, so you can still get that diverse team that has the best outcomes, and include everyone in the conversation,

 

120

00:32:42.540 --> 00:32:47.799

Emma Sarro: include all of those perspectives, because we know, based on the science, that the outcomes are going to be better.

 

121

00:32:47.940 --> 00:33:11.409

Emma Sarro: So the first bias of SEEDS, if I'm walking through SEEDS, is the similarity bias. And this is based on our general natural instinct to be part of a group: we are driven to groups, we survive better in them. This is something we talk a lot about when we talk about SCARF, too; we have evolved to survive better in groups. So we tend to be driven towards

 

122

00:33:11.410 --> 00:33:29.510

Emma Sarro: groups of individuals that have similar motivations to us, that look similar to us, that are part of our team. What's interesting about this, what we call the in-group bias, is that it is as simple as taking 100 people, dividing them into two groups, and saying, you know, you

 

123

00:33:29.510 --> 00:33:52.249

Emma Sarro: first 50 are Group A, your Team A, and the second 50 are Team B, and you immediately have the benefits of being part of that in-group. This is called the minimal group paradigm, where you can essentially just group people based on, oh, you're the red team, or you're the blue team, and you get these benefits of being part of this group.

 

124

00:33:52.250 --> 00:34:16.770

Emma Sarro: You tend to favor those individuals. You're motivated to help them. You tend to feel empathy for them in a different way. You understand them better; even when they're talking to you, you understand what they're saying in a way that you don't for individuals that you perceive as being part of your out-group. And this can definitely lead to different kinds of competition. We see some really interesting research looking at

 

125

00:34:17.099 --> 00:34:35.340

Emma Sarro: the effects of, you know, how rewarded you feel when you see, let's say, a sports team that you are not a fan of lose; you feel more rewarded by that than you do when your own team wins. So there are definitely some really interesting outcomes of this.

 

126

00:34:35.340 --> 00:34:55.350

Emma Sarro: But it does come into different processes in the workplace. For instance, a really interesting study looked at impressions. Individuals were asked for their first and last impressions, let's say before and after hearing about either negative or positive behaviors of a person,

 

127

00:34:55.420 --> 00:35:03.539

Emma Sarro: and what they found is that if you associate yourself with this person as being an in-group member, your impressions of them are stable over time.

 

128

00:35:03.640 --> 00:35:28.409

Emma Sarro: Meaning that even if you hear negative evaluations, and let's say this could come up in performance evaluations, if you hear negative evaluations of someone that's on your team, in your in-group, you're more likely to maintain that positive impression of them over time. Whereas if they're not part of your team, your impression of them changes, or gets more negative, over time. So your impression will be more

 

129

00:35:28.410 --> 00:35:51.650

Emma Sarro: stable over time if they're part of your in-group. Another really interesting study looked at the likelihood of funding, or liking, or positively evaluating an idea from someone else. So imagine you're proposing an idea to your manager or your leader, or a direct report is proposing an idea to you. How much you like the idea

 

130

00:35:51.650 --> 00:36:20.739

Emma Sarro: and are likely to fund it, let's say if you're making a budget decision, is related to how close the individual is to you hierarchically. So if you are receiving an idea from someone who is further away from you hierarchically, you're less likely to like the idea. And that's really interesting. It also has huge implications for whether or not an idea is taken and run with if someone is further away from you hierarchically,

 

131

00:36:20.740 --> 00:36:45.929

Emma Sarro: and whether you feel like it's a viable business idea. So those are a couple of ways where similarity bias can affect a lot of things, from hiring to performance evaluations to even deciding what idea to experiment with, and how likely that is to lead to go-to-market decisions or innovation decisions or things like that. So, really interesting.

 

132

00:36:46.120 --> 00:36:56.130

Emma Sarro: Another bias is expedience bias, which is kind of our jump-to-conclusions bias. This is probably the classic case of bias.

 

133

00:36:56.130 --> 00:37:12.230

Emma Sarro: It really stems from our default need to pull from the most immediate information, the information that's right at hand, so we can make an efficient decision quickly, not pausing and looking at all the other possibilities. Like doing a Google search and choosing only the top three results to look at.

 

134

00:37:12.230 --> 00:37:31.549

Emma Sarro: So that's really interesting. And it comes into play a lot with evaluations, with hiring, even with deciding what organizations or vendors to use, for instance. So, two really interesting examples. One of them that comes to mind is the halo bias, where we have this tendency to believe that someone has

 

135

00:37:31.550 --> 00:37:45.159

Emma Sarro: a certain set of qualities, the most common one is something like being tall, which is interesting, and to link that to how well they can do their job, and how likely they are to, let's say,

 

136

00:37:45.160 --> 00:38:04.890

Emma Sarro: have a leadership position: they're self-confident, they're capable, they're authoritative. These characteristics are often linked to height. What's interesting is a somewhat recent academic study found that every inch above the average height of

 

137

00:38:04.890 --> 00:38:33.270

Emma Sarro: either men or women, across genders, was worth about $800 more per year in earnings. This is probably even more now, because this was several years ago. Such an interesting study: they controlled for gender, they controlled for age, they controlled for weight. It's really interesting, that link between height and being more likely to be given a bigger salary. Another very different outcome of this bias

 

138

00:38:33.270 --> 00:38:46.690

Emma Sarro: comes from a large-scale study. They tested this in about 200 managers. This took place in Australia, and these managers were given a set of information and asked to

 

139

00:38:46.760 --> 00:39:11.489

Emma Sarro: make decisions about working with vendors in different countries. So there was a kind of distance element involved in this. They were provided with a number of pieces of information about these different countries and the vendors found in each of them, and made decisions on whether to work with them or not. I'm forgetting some of the details, but what they found was that

 

140

00:39:11.680 --> 00:39:33.569

Emma Sarro: the managers only processed the information that confirmed what they had already thought about these countries. So this is an example of the confirmation bias, where we tend to look for information that confirms what we already know or think about, let's say, a vendor, a person, another organization, whatever it is,

 

141

00:39:33.570 --> 00:39:50.419

Emma Sarro: and we only process that information. We might not even process, perceive, or even think we saw the information that disagrees with what we already think. So you can imagine all of the decisions that are then made as a result of this,

 

142

00:39:50.420 --> 00:40:15.279

Emma Sarro: and in the deciding process they might even look back and not realize that they missed very important information, because their awareness of what they took in was all based on what they had already confirmed. So they were just confirming, and that really links to this idea that we are biased to confirm information: it feels good. When we are faced with information that disagrees with something we already

 

143

00:40:15.280 --> 00:40:33.139

Emma Sarro: think, that's a bit of a threat as well. And so we tend to avoid it, and we avoid it so much that our brain processes don't always integrate it into our understanding. Really, really interesting. So obviously there are lots of downstream effects of expedience bias.

 

144

00:40:33.460 --> 00:40:45.789

Emma Sarro: Another interesting bias is our experience bias. Experience bias is really the sense that, hey, what I see in the world is exactly what's going on.

 

145

00:40:45.790 --> 00:41:10.030

Emma Sarro: And that couldn't be further from the truth. Your experience is going to be highly different from every other person's in this world, because what you experience is mixed with your expectations from what you've already experienced. The first thing that comes to mind when I think of experience bias is eyewitness accounts. It's so interesting and so scary at the same time that

 

146

00:41:10.030 --> 00:41:21.729

Emma Sarro: there's so much weight placed on eyewitness accounts, when what anyone is observing at any time, what their brain is allowing them to

 

147

00:41:21.730 --> 00:41:46.389

Emma Sarro: even understand and perceive, is going to be different from anyone else's. It's going to be impacted by what they've already experienced. So the experience bias is basically that I perceive my experience to be the truth, and everyone else probably does as well. And this is going to come up in the workplace in many different ways: managing, hiring. So there are some really interesting

 

148

00:41:46.776 --> 00:41:52.670

Emma Sarro: sub-biases that are part of this. One of them is called the illusion of transparency.

 

149

00:41:52.670 --> 00:42:17.670

Emma Sarro: Where this comes up a lot is when we communicate with others. When you communicate with anyone, actually, we just assume that when we're sharing information, and I'm making the assumption right now, that what I'm talking about is going to be understood exactly the way I say it, that I don't need to ask for clarity or anything like that, that what I say to someone is going to be equally understood.

 

150

00:42:17.670 --> 00:42:42.629

Emma Sarro: So if I'm managing someone and I'm providing constructive criticism, I'm expecting them to understand it the way I say it. Or another way: I'm explaining what I need a direct report to complete for me, so this is your task, you have to do this, this, and this, and I'm just expecting it to happen exactly as I share it. So there are really interesting results from studies that look at this.

 

151

00:42:42.630 --> 00:42:54.599

Emma Sarro: This one is super applicable: a set of managers were asked to provide feedback to their direct reports, and this feedback was going to be either positive or negative.

 

152

00:42:54.960 --> 00:43:17.560

Emma Sarro: And they were then also asked, okay, so how positive was the feedback? How negative was it? And also, how likely is this person to get a promotion? So the managers, you know, said this negative feedback was about a 3 on a scale of 10, let's say, and the person's likelihood of promotion based on it was maybe 20%.

 

153

00:43:17.660 --> 00:43:33.699

Emma Sarro: Well, the direct reports were also polled on what they took from this feedback, and they reported that the negative feedback was actually more like a 6 on a scale of 10, that it was much more positive than the manager reported, and their likelihood of

 

154

00:43:33.700 --> 00:43:57.959

Emma Sarro: promotion was much higher, obviously; it was like 60%. And what's interesting about this study is they also polled people who were not a part of it, who didn't have any stake in the game; they were just observing. These observers also reported a much higher level; they matched the direct reports. Meaning, it wasn't that

 

155

00:43:58.030 --> 00:44:27.520

Emma Sarro: this individual was, you know, protecting themselves by only hearing the good things; it was actually in how the manager was communicating. So this is called the illusion of transparency, and it really requires a process, which is the thread we like to weave throughout this whole thing: if you have a process in place, it helps you bypass, or mitigate, some of these. Here it's: check for clarity. So ask, you know, what did you hear? Because that increases your chances of seeing where the disconnect is.

 

156

00:44:28.060 --> 00:44:50.610

Emma Sarro: Another very similar example of this is called the false consensus effect, which is similar but slightly distinct, in that individuals project their own self-view, their own ideas, their own assessment of their own skills into a situation. So,

 

157

00:44:50.610 --> 00:45:07.500

Emma Sarro: in this specific case, in a study that pulled this idea apart, they had a group of individuals rate themselves on their ability to do different things in the workplace, like work well with others, their ability to

 

158

00:45:07.570 --> 00:45:31.669

Emma Sarro: perform a number of different skills in their job, their job satisfaction, how they do different tasks. So they rated themselves, and then they listened to someone else tell the story of their ability to do X, Y, and Z, the same things, and then they rated them and said, oh, this person is going to be good at this and not good at this, or whatever. What they found was that the individual who was rating

 

159

00:45:32.136 --> 00:45:38.670

Emma Sarro: Person B, let's say, integrated their own idea of how they themselves did

 

160

00:45:38.670 --> 00:46:02.799

Emma Sarro: into this person's rating. So let's say Person A has very low job satisfaction; Person A would then integrate that low level of job satisfaction into the other person's rating, and it wasn't actually what the other person was telling them. So you can see how this can come up in performance evaluations, in hiring practices, even in, like,

 

161

00:46:02.800 --> 00:46:26.300

Emma Sarro: idea generation, and whether or not to explore or experiment with some idea for an innovative project; someone's own perception is going to be integrated into that idea, and it's going to merge over time. Not always a bad thing, but when it comes to making a decision on whether to, let's say, hire someone or promote someone or something like that,

 

162

00:46:26.300 --> 00:46:47.800

Emma Sarro: this is going to have a huge impact. So this second effect is called the false consensus effect, and the two are very similar. This is a really good example of how two of these biases can overlap. It's the same kind of idea: we just assume our experience is going to be similar to others', or the right one, or the same one, or

 

163

00:46:47.800 --> 00:47:08.560

Emma Sarro: the common one, whatever it is, with slight differences. You can see how, when we did our search of all of these biases, it was easy to clump some of them together, because they're based on very similar brain processes. And this one is about how our brain constantly forms and reforms our expectations of the future.

 

164

00:47:09.023 --> 00:47:16.969

Emma Sarro: And it's very similar to how we form expectations of others, the same kind of brain processes, but it comes up here in how we make decisions.

 

165

00:47:17.540 --> 00:47:44.549

Emma Sarro: So the next bias is distance. Distance is interesting, very different from expedience and experience bias, in that we tend to value things, people, ideas, jobs, tasks, anything like that, differently depending on how close they are to us in distance, time, or effort level. So we change the way we view things based on

 

166

00:47:44.550 --> 00:48:09.540

Emma Sarro: distance. And this is based on a network we call the proximity network, which really distinguishes distance and helps us understand things that are close and far. And it comes up a lot, obviously, in hiring decisions, in where someone is located, in virtual versus hybrid versus in-the-office decisions, even in who you're

 

167

00:48:09.540 --> 00:48:32.100

Emma Sarro: going to ask for an opinion in the office: the office next to you, or the office on the other floor? Who are you going to pull perspectives from for a performance evaluation? It's going to affect all of those kinds of decisions, especially if we're under pressure or stress. Other ways this comes up are interesting, though. So, for example,

 

168

00:48:32.310 --> 00:48:58.930

Emma Sarro: procrastination is a form of distance bias. What procrastination is, is putting off a task that seems effortful right now. And what some studies have shown is that when we're given an effortful task, or one that seems difficult, where either we don't understand what to do or we know it's going to be a large lift, we tend to just

 

169

00:48:58.930 --> 00:49:23.749

Emma Sarro: put it off to the next day, and the reason is that we actually perceive the effort to be less tomorrow, let's say, or next week. Whenever I think about this procrastination bias, I think about laundry. The amount of time that clean laundry sits in the hamper is usually at least one day, because it does feel like it's going to be a lot easier to do on Sunday as opposed to Saturday,

 

170

00:49:23.750 --> 00:49:47.750

Emma Sarro: and that is an example of procrastination. It just feels in the moment that the effort is going to be less on Sunday, which it won't be; it's going to be the same task. Sometimes it's one day, sometimes more, but it leads us to make that assumption and put off those larger tasks. So you can imagine how it comes up at work: you have

 

171

00:49:47.750 --> 00:50:04.979

Emma Sarro: a list of priorities, but you'll do the easy things first, and that might not always be the best decision, given that some of those effortful tasks are actually more important. So how do you make that decision?
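A minimal sketch of why "tomorrow" feels cheaper, assuming perceived effort is discounted hyperbolically with delay; the discount rate k and the effort units below are invented for illustration, not figures from the episode.

```python
def perceived_effort(true_effort: float, days_away: float, k: float = 0.8) -> float:
    """Subjective effort of the same task scheduled `days_away` days from now."""
    return true_effort / (1 + k * days_away)

laundry = 10.0  # arbitrary effort units; the task itself never changes
for day in [0, 1, 7]:
    print(f"in {day} day(s): feels like {perceived_effort(laundry, day):.1f}")
# in 0 day(s): feels like 10.0
# in 1 day(s): feels like 5.6   -> "I'll do it Sunday"
# in 7 day(s): feels like 1.5   -> easy to defer a whole week
```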

 

172

00:50:05.240 --> 00:50:34.699

Emma Sarro: And another way this comes up, which I should mention really quick, is affective forecasting, which is a little different from procrastination, but is really about how we assume our emotions are going to be in the future. What's interesting about emotions is that we generally have negative and positive emotions, and our assumption of how negative we'll feel about a certain event later is going to be different from how

 

173

00:50:34.700 --> 00:50:40.700

Emma Sarro: positive we'll feel about it later. So let's say you're met with some decision

 

174

00:50:40.920 --> 00:51:05.250

Emma Sarro: where you think, "Well, down the road I'm going to be really upset about this; it's going to affect me in some way." You might assume that you're going to have negative emotions that actually won't materialize down the road, but you make the decision based on those predicted emotions, because we're just really bad at predicting how emotional we're going to feel

 

175

00:51:05.578 --> 00:51:12.740

Emma Sarro: in two months, let's say, based on a decision we make today. So, in the moment, we're just bad at predicting our future emotions.

 

176

00:51:13.600 --> 00:51:21.100

Emma Sarro: Okay, so the very last one also comes with a set of really interesting research, and that's our safety bias.

 

177

00:51:21.100 --> 00:51:45.460

Emma Sarro: Anyone who knows NLI knows that we talk about this all the time, and that's because it's one of the core organizing principles of the brain. We call it the organizing principle of the brain because it's the thing that kept us alive: avoiding potential threats. So we're really good at avoiding potential threats. And I say potential because a lot of times we

 

178

00:51:45.460 --> 00:51:52.959

Emma Sarro: respond to things that aren't necessarily threatening to someone else, but they feel that way to us. So we avoid them.

 

179

00:51:52.960 --> 00:52:07.460

Emma Sarro: We respond much quicker, with a greater response, and react faster to potential threats than to potential rewards. So anything that could potentially lead to some kind of risk or loss,

 

180

00:52:07.460 --> 00:52:30.119

Emma Sarro: or blame, anything that could be a potential threat, we avoid, and our decisions are based on that. Risk aversion, loss aversion, those kinds of decisions are based on this principle. It's a whole different network that we're engaging when we make these kinds of decisions. So where does it come up in the workplace? Obviously it

 

181

00:52:30.380 --> 00:52:53.330

Emma Sarro: comes up in many ways. Whole departments of organizations are based on assessing risk and making these decisions. But it comes up in all of our other decisions too. A really interesting bias here is called the sunk cost bias, and the sunk cost bias, or sunk cost effect, is

 

182

00:52:53.330 --> 00:53:06.509

Emma Sarro: essentially that once you've started down the road on some project, you've already put in time, money, and effort. Upon learning that the project won't necessarily be successful,

 

183

00:53:06.510 --> 00:53:31.480

Emma Sarro: we tend to continue doing it, even if we know it will fail, because we've already put something into it. If you had known this information before you started the project, you would never have started it; you wouldn't have put anything into it. It's called sunk cost because you've already sunk costs into it, Maria. And what's interesting is that there's one study that looked at this, at how far someone will go

 

184

00:53:31.710 --> 00:53:57.450

Emma Sarro: to finish a project. And in this example they found that once individuals started a project, they would continue even if they knew they were going to lose, and even if continuing the work hurt other people. So, making immoral decisions: individuals are more likely to make these immoral decisions if they've already sunk costs, which is wild to think about.
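A minimal sketch of the textbook decision rule that the sunk cost effect violates: only the remaining cost and the expected payoff should enter the choice, never what's already spent. All numbers below are invented for illustration.

```python
def should_continue(remaining_cost: float, payoff: float, p_success: float) -> bool:
    """Continue only if the expected future value beats the remaining spend."""
    return p_success * payoff > remaining_cost

already_spent = 400_000  # sunk; deliberately unused in the decision below
print(should_continue(remaining_cost=200_000, payoff=250_000, p_success=0.3))
# False: 0.3 * 250k = 75k < 200k, regardless of the 400k already spent
```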

 

185

00:53:57.580 --> 00:54:21.569

Emma Sarro: And you'll make these decisions even if it means costing other people in different ways. Really interesting stuff. Another way this comes up is in how we avoid blame. If we are making decisions for other people, we will be

 

186

00:54:21.570 --> 00:54:46.560

Emma Sarro: more risk averse than when we decide only for ourselves, because we know we could possibly be blamed, or be the reason why a set of individuals lose something. Researchers studied this in a really interesting way: they had people blowing up a balloon, essentially putting pumps of air into a balloon, and they made more money every time they pumped air into it,

 

187

00:54:46.560 --> 00:55:11.980

Emma Sarro: and at some point the balloon pops. How many pumps of air can you put in before the balloon pops and you lose all your money? They looked at this based on whether an individual was making the decision for themselves or for a group of people. When they made decisions for the group, they put less air into the balloon. They were less likely to be the one to pop the balloon, because they didn't want to cost everyone else

 

188

00:55:11.980 --> 00:55:16.650

Emma Sarro: the money. But when they were doing it for themselves, they were more risk taking.
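A minimal simulation of the balloon paradigm described above (in the literature this setup is known as the Balloon Analogue Risk Task); the payout, pop range, and pump targets are invented for illustration, not the study's actual parameters.

```python
import random

def run_round(pump_target: int, pop_range=(1, 20), pay_per_pump=0.05) -> float:
    """Pump up to `pump_target` times; earn pay_per_pump per pump, or 0 if it pops."""
    pop_at = random.randint(*pop_range)  # hidden pop point for this balloon
    return 0.0 if pump_target >= pop_at else pump_target * pay_per_pump

def average_earnings(pump_target: int, rounds: int = 10_000) -> float:
    return sum(run_round(pump_target) for _ in range(rounds)) / rounds

# The finding: people choose a lower pump target when a group bears the loss.
for who, target in [("for self", 12), ("for group", 7)]:
    print(who, f"target={target}", f"avg earnings ~ {average_earnings(target):.3f}")
```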

 

189

00:55:17.494 --> 00:55:34.070

Emma Sarro: Really interesting stuff when it comes to managers making decisions, especially now, making these very difficult decisions for their team or their organization. Are you making the best decision? You're more likely to avoid those risks.

 

190

00:55:34.140 --> 00:55:51.559

Emma Sarro: And finally, the last thing, which is really interesting and probably affects all of the biases, is what happens when we're stressed. When we're stressed, we're actually less loss averse. So think about this:

 

191

00:55:51.640 --> 00:56:12.670

Emma Sarro: we are all stressed right now. Organizations are feeling under pressure; there are stressors coming in from all sides, and we're still making decisions. Loss aversion is actually lower, so you're not affected as much by it, and the decisions you're making are not as impacted by the potential of loss

 

192

00:56:12.670 --> 00:56:34.899

Emma Sarro: when you're under stress. So think about now: we're under pressure, we're under stress, and the decisions we're making might seem a bit more rash. You can imagine how someone's decision might seem rash from the outside, because they're under stress and not seeing the potential loss as much as you might be from the outside.
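One common way to sketch this loss/gain asymmetry is prospect theory's value function (Tversky and Kahneman), where a loss-aversion coefficient greater than 1 makes losses loom larger than equal gains. Treating stress as simply lowering that coefficient is our illustration of the point made above, not a model reported in the episode; the parameter values are commonly cited estimates.

```python
def subjective_value(x: float, alpha: float = 0.88, lambda_: float = 2.25) -> float:
    """Prospect theory value: gains curve gently, losses are amplified by lambda_."""
    return x ** alpha if x >= 0 else -lambda_ * ((-x) ** alpha)

def gamble_value(gain: float, loss: float, lambda_: float) -> float:
    """Felt value of a 50/50 gamble: win `gain` or lose `loss`."""
    return 0.5 * subjective_value(gain, lambda_=lambda_) \
         + 0.5 * subjective_value(-loss, lambda_=lambda_)

print(gamble_value(100, 100, lambda_=2.25))  # strongly negative: rejected when calm
print(gamble_value(100, 100, lambda_=1.2))   # near zero: easier to accept "rashly"
```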

 

193

00:56:35.140 --> 00:56:44.069

Emma Sarro: So the D stands for distance, yeah. All right. So all of that to say, there are really interesting ways that all of the SEEDS biases

 

194

00:56:44.070 --> 00:57:06.199

Emma Sarro: come up in the workplace. One of the things that David said, which is absolutely true and woven throughout all of them, is that if you have good processes in place, you can make decisions without having to think about the biases that might be affecting each of them, because there are a lot. Processes are such a great way to mitigate all of them.

 

195

00:57:06.200 --> 00:57:31.199

Emma Sarro: But there are also ways to tackle each of them individually, like making decisions as if for someone else, or collecting diverse perspectives for something like expedience and experience bias. And before I forget, as David said, we worked these into our Decide solution and many of our other solutions,

 

196

00:57:31.200 --> 00:57:38.919

Emma Sarro: like Select or Differentiate. We talk about licensing these, which can be worked into, as you can see, every part of your business.

 

197

00:57:38.920 --> 00:58:00.309

Emma Sarro: I'd love to know how we can help you here. I know there's going to be a poll popping up, and also drop in any of your questions; I know there have been a lot of questions in the chat. Yes, Julie asked if we're more reckless under stress. I think we might make decisions very differently under stress;

 

198

00:58:00.310 --> 00:58:19.650

Emma Sarro: at least with safety bias, the last one, we're more likely to make decisions that don't consider loss as much. So maybe we're more risk taking, and maybe that's a good thing in some cases, but it also just means that your decisions are going to be different.

 

199

00:58:19.990 --> 00:58:46.190

Emma Sarro: And it might affect the biases differently. So maybe, with similarity bias, stress might pull you even closer to your in-group, as opposed to being able to work with an out-group. I would say that could be a topic for another episode; I bet we could go into all of these biases

 

200

00:58:46.190 --> 00:58:53.440

Emma Sarro: and explore how stress differentially affects each of them. Great question.

 

201

00:58:53.440 --> 00:59:16.870

Emma Sarro: And I know that we're coming right up on time; I didn't realize we were getting so close to the end. I hope you all enjoyed this. Just a few tiny announcements before you all jump off: we have a big announcement next week in regards to our Summit event. We have it every year; this year it will be in the middle of November, November 12th and 13th, virtual. So

 

202

00:59:16.870 --> 00:59:24.300

Emma Sarro: look out for that announcement next week; we're going to have everything up on the website. We'll be exploring this topic as well as many others.

 

203

00:59:24.300 --> 00:59:54.100

Emma Sarro: If you enjoyed today's episode, you'll love our podcast; we have all the episodes up there. Look for Your Brain at Work wherever you enjoy listening to podcasts. On behalf of myself and David and everyone behind the scenes, I really enjoyed this discussion. Hopefully I didn't talk too fast; I was trying to fit all of this in on time. I hope you all join us next week, when we'll be talking about how to make something compelling. I'll be back here with David. Hope you all have a great weekend.

 

204

00:59:54.570 --> 00:59:55.390

Emma Sarro: Thanks.