Your Brain at Work

How NILES Can Make Leaders Smarter

Episode Summary

Today's leaders are constantly trying to keep up with the complexity of the organizational landscape, all while making smarter decisions and driving their teams toward innovation. Join us for the next edition of "Your Brain at Work Live," where Dr. Emma Sarro discusses how NILES, the NeuroLeadership Institute's AI leadership coach, is set to revolutionize how leaders face each of these challenges. Most AI chatbots are designed to spit out answers like a simple search engine, which research shows only dulls our thinking. NILES is designed to generate moments of insight, sparking motivation to act, creativity, and growth. In this session, you'll discover the scientific foundation of NILES and how it can guide leaders to tap into their innate creativity, facilitating moments of insight that lead to transformative solutions. Drs. Rock and Sarro will discuss the science behind insight generation, drawn from over twenty years of coaching experience and peer-reviewed research. They'll share how NILES is already driving leaders to insight in every conversation, sharpening their critical thinking and creative problem-solving, and strengthening their EQ. Don't miss this opportunity to discover how NILES can be your partner in achieving smarter, more effective leadership.

Episode Transcription

WEBVTT

 

1

00:00:04.050 --> 00:00:07.290

Emma Sarro: Hello, everyone! Happy Friday!

 

2

00:00:08.450 --> 00:00:12.169

Emma Sarro: Welcome to another episode of Your Brain at Work Live!

 

3

00:00:13.150 --> 00:00:17.110

Emma Sarro: As you're all joining, find the chat button.

 

4

00:00:17.270 --> 00:00:28.980

Emma Sarro: Drop in the chat where you're coming in from today. You know the deal for anyone who's been here: we love chat interactions, today especially, because it's just me.

 

5

00:00:29.810 --> 00:00:32.460

Emma Sarro: Awesome, California. Very nice.

 

6

00:00:34.930 --> 00:00:56.539

Emma Sarro: Nice to see everyone. So as you're continuing to join, again, just drop in the chat where you're coming in from today. My name is Dr. Emma Sarro. I'm the Senior Director of Research here at the NeuroLeadership Institute, and I am also your host and your single speaker today. David can't make it today, so he left the reins to me, and

 

7

00:00:56.540 --> 00:01:05.469

Emma Sarro: I don't know if that's scary or exciting for people. I'm going to dive into some research, and I'm hoping that everyone

 

8

00:01:05.470 --> 00:01:23.059

Emma Sarro: drops in when you have a question, a comment, whatever it is. I want to make this as interactive as possible. And also, we're coming off of a North American holiday, so I know the group might be a little bit smaller today.

 

9

00:01:23.190 --> 00:01:39.539

Emma Sarro: So for anyone who is a regular, welcome back, and for newcomers joining us for the first time, welcome, we're so excited to have you here with us today. Again, keep dropping in the chat where you're coming in from. What we're talking about is AI, and specifically how it impacts how we think,

 

10

00:01:39.540 --> 00:02:04.540

Emma Sarro: so it's going to be a bit of a meta conversation, for sure. The question is: is there a way to use AI and still think well, get smarter, as opposed to all the continued underlying worries about diminished thinking? There really is some research coming out showing how using AI, and defaulting to AI in certain ways, might actually be worse for our overall thinking ability

 

11

00:02:04.540 --> 00:02:28.729

Emma Sarro: and our ability to make decisions and think critically and creatively, all of that. There really is some compelling evidence suggesting that, but it's based on how we're using it. And of course, for anyone who's been with us for a while, we have been talking about NILES, our own Neurointelligent Leadership Enhancing System, and specifically Coach NILES, which is what we're talking a lot about,

 

12

00:02:28.730 --> 00:02:35.659

Emma Sarro: and how we've designed NILES to really tap into how we think better.

 

13

00:02:35.710 --> 00:03:04.490

Emma Sarro: So we'll talk about that: what are the aspects of really engaging our brain in the right way, and how would you work with an AI, especially NILES, to engage the best parts of your brain and keep it sharp, essentially? As we continue getting into the conversation, I would love it if you would just push those other distractors out of the way. Turn your phone over, close all the other windows, and just enjoy this and engage as much as you can.

 

14

00:03:04.490 --> 00:03:27.129

Emma Sarro: So, for anyone who has been following us, we have developed a breakthrough in leadership, NILES, which we believe can make leaders smarter, faster. This is our coach, NILES, and we've been talking a lot about it. Next week, specifically, David Rock and Rachel Cardero, our VP of Solutions, who, importantly, has been integral in developing NILES,

 

15

00:03:27.130 --> 00:03:50.830

Emma Sarro: will be going into what this looks like and what it can mean for organizations as a whole: what do the organizations that are using it look like? How do you develop this coherent system of insights? Today, I'm going to focus a little bit on what we're learning about AI and the brain, because this has really informed what we needed to create our own AI coach, and why NILES,

 

16

00:03:50.830 --> 00:04:15.759

Emma Sarro: designed the way it is based on what we've understood of neuroscience, can change the course of how we're thinking. So instead of defaulting to AI, how are we collaborating with AI? As we know, there's this mad rush to continue integrating AI into our work. How are you using it? Is it a tool? Are you collaborating with it? Are you training it

 

17

00:04:15.760 --> 00:04:22.759

Emma Sarro: to work with you? Do you have a team of AI agents now? Everyone is using it in different ways.

 

18

00:04:22.760 --> 00:04:47.750

Emma Sarro: And what's interesting about AI is: have we really developed the right skill set to use it in the best way? I think it differs depending on what type of thinking you're trying to do with AI. What is the project? What's the problem you're trying to solve with AI? That will really change how you use it. So if you're using AI to provide

 

19

00:04:47.750 --> 00:05:12.710

Emma Sarro: a bunch of ideas for you, that has been shown to boost your creativity, because it provides a bunch of potential ideas. But if you're using it to write something completely from scratch, you're not necessarily invested cognitively in that process, and that might actually reduce your ability to understand what you're putting out and what your deliverables are. So how are you using it

 

20

00:05:12.790 --> 00:05:37.759

Emma Sarro: to solve whatever problem you're trying to solve? That's going to change how you leverage it, just like any tool, if we think about AI as a tool. So the first question, then, is really: what are the potential consequences of defaulting to AI? How could it diminish our thinking? This really stems from what we understand about

 

21

00:05:37.800 --> 00:05:49.890

Emma Sarro: what humans need to engage their brain, what humans need to collaborate with others, and what benefits we get from collaborating with others. And

 

22

00:05:49.890 --> 00:06:14.889

Emma Sarro: one of the things that fascinates me about the brain is how social it is. So one of the biggest consequences, at least in my mind, of now collaborating with AI as opposed to other humans is that we're losing the benefits of our human connection, something I just had a conversation about with a group of individuals.

 

23

00:06:14.890 --> 00:06:38.039

Emma Sarro: If you're not aware, at NLI we have a whole individual education program. One of the offerings is the Certificate for NeuroLeadership, which is a six-month course where we talk about all things neuroscience. We get deep into all of the neuroscience, psychology, and cognitive-behavioral research

 

24

00:06:38.040 --> 00:06:58.220

Emma Sarro: as a way to understand how we regulate our emotions, how we work with others, how we influence others, how we lead teams, all of those things we apply. It's an incredible course, and thanks, Tony, for dropping that in the chat; it is a great course. One of the things we were just covering is what we call the social brain,

 

25

00:06:58.220 --> 00:07:15.760

Emma Sarro: and the social brain is really just all of the areas of our brain. Liz, I know, it's such a great course, and I learn just as much, I think, as the individuals who take the course, because I learn how you're applying the work,

 

26

00:07:15.940 --> 00:07:35.309

Emma Sarro: or the research, to your challenges. But we were just talking about the social brain, and just how much of our brain, how many of all the different regions, are involved in understanding our social environment. One of the most fascinating pieces of this is,

 

27

00:07:35.310 --> 00:07:55.550

Emma Sarro: there's this area, or network, of the brain called the default mode network. It takes up a huge part of our brain, and this underlying default brain activity is engaged when we walk away from our computer or we're mind-wandering.

 

28

00:07:55.550 --> 00:08:20.549

Emma Sarro: That's one of the reasons we suggest taking that mind-wandering time and walking away from your computer; this is when you have your insights and all of that. But what's fascinating about this brain network is that the first place it goes when you do walk away is to social understanding. It's been highly linked to understanding your

 

29

00:08:20.550 --> 00:08:45.549

Emma Sarro: social situation, the experiences that you've had: what did it mean socially for me? You ruminate on all of those thoughts, and we know we do that when we step away from our computer and walk through our day, or when we're driving home from work, which is another time a lot of us have these thoughts. We're thinking about the social interactions; we care so much about the social interactions.

 

30

00:08:45.550 --> 00:09:15.340

Emma Sarro: And the whole point of this derailment of the discussion is just to highlight the idea that social connection is so deeply embedded in the way our brains are designed. We focus on other individuals. Our attention is drawn to the social interactions we're having. There's a whole piece on social pressure, positive social pressure: we tend to do things based on what others are doing. So when we're

 

31

00:09:15.340 --> 00:09:42.149

Emma Sarro: trying to change the behavior of an organization and develop new habits, positive social pressure is a huge aspect of it, and so is how others are engaging; we look to others, especially others in leadership positions. We've been talking a lot about this idea of neural synchrony, which happens when you engage in conversations with others and develop this shared understanding. And that happens with humans.

 

32

00:09:42.430 --> 00:10:07.430

Emma Sarro: Now, you can understand things that AI is providing you, but you won't have neural synchrony with AI. You might understand what it's giving you, but you're not developing neural synchrony, a shared understanding, with it. These are all fascinating outcomes of collaborating with humans: being able to pull diverse perspectives and

 

33

00:10:07.430 --> 00:10:32.229

Emma Sarro: understand those diverse perspectives being provided to you. Those are all human interactions, so that's a benefit, and we get a reward from that kind of benefit. That piece will change when we interact more with AI agents. There's been a lot of discussion around individuals who will now become leaders of a group of

 

34

00:10:32.230 --> 00:10:54.050

Emma Sarro: AI agents on their team. That kind of collaboration is going to look a lot different, and you won't have the same kind of social reward, though you'll definitely be able to have some kind of connection. I just came across this, and I'll drop it in the chat; one of the things I'll do, because it's just me up here today, is drop in some

 

35

00:10:54.050 --> 00:11:19.049

Emma Sarro: recent studies that I've been seeing that go along with this conversation. This right here is an article that talks about how we feel in terms of social connections with AI, some ideas and thinking that people have had: how rewarded will we feel by having these social connections? And one of the things that is true is that

 

36

00:11:19.050 --> 00:11:44.039

Emma Sarro: humans are really good at telling the difference between what AI is giving us versus what humans are. That's going to change the way we make decisions, how we feel about artwork, for instance; so far, for the most part, we are able to tell the difference between AI-generated and human artwork. We're able to tell the difference between advice given by AI versus humans. So we are very good at telling that difference,

 

37

00:11:44.040 --> 00:11:51.490

Emma Sarro: and we feel differently about it. How do we trust AI versus developing trust with a human?

 

38

00:11:51.530 --> 00:12:16.450

Emma Sarro: So all of that comes into it. And I just totally derailed us from that conversation. The other consequence of using AI is this race to the middle, we call this mediocrity, and there's been plenty of work showing that what you're going to get from AI is going to veer towards the middle, because it's always a synthesis of what's

 

39

00:12:16.450 --> 00:12:33.110

Emma Sarro: already been created. Now, we talked about this earlier when it came to creativity. There was a study, and I'll drop this blog in the chat, too, that came out last year that looked at creative outputs

 

40

00:12:33.110 --> 00:12:46.270

Emma Sarro: based on whether we were leveraging AI for ideas versus not. And while the creative ideas that were provided by, I think it was ChatGPT, did help

 

41

00:12:46.270 --> 00:13:09.949

Emma Sarro: boost the average creativity of the essays, the creative writing that individuals did based on those ideas, the variance of that creativity was lower. And that makes a lot of sense, because the ideas are

 

42

00:13:09.950 --> 00:13:29.960

Emma Sarro: pulled from the same pool all the time, so the variance is going to be lower. While using AI can give you some ideas and maybe a boost, the variability of creativity is going to be less, reducing that collective diversity. It's veering towards the mean.
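To make that higher-mean, smaller-spread pattern concrete, here is a minimal sketch with hypothetical numbers (purely illustrative, not data from the study):

```python
# Hypothetical creativity ratings -- illustrative only, not data from the study.
from statistics import mean, pstdev

brain_only  = [3.1, 4.8, 6.9, 2.4, 8.2, 5.6]  # wide spread across writers
ai_assisted = [6.0, 6.4, 6.8, 5.9, 6.5, 6.2]  # clustered near the middle

for label, scores in (("brain only", brain_only), ("AI-assisted", ai_assisted)):
    print(f"{label}: mean={mean(scores):.2f}, spread={pstdev(scores):.2f}")
# The AI-assisted group shows a higher mean but a much smaller spread:
# the "veering towards the mean" described above.
```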

 

43

00:13:29.960 --> 00:13:43.030

Emma Sarro: Another huge consequence of relying on AI, and one of the things we talk a lot about, which NILES has been designed specifically to counter, is the loss of insights.

 

44

00:13:43.250 --> 00:14:13.220

Emma Sarro: We talk about insights all the time, those aha moments. Insights are inherently personal; they're things we have on our own, ideas based on problems our brain is solving in the background. An insight isn't something you can be given by someone else; it's you forming a connection based on something you've been trying to solve. And the outcomes of having an insight, especially a very powerful one,

 

45

00:14:13.220 --> 00:14:38.169

Emma Sarro: are this powerful motivational drive: the emotional satisfaction of solving the thing, of reaching a conclusion to something that's been bothering you, of having a very clear motivation to act. The drive to act on that insight is huge, and all of those things, we call these the variables of insight,

 

46

00:14:38.170 --> 00:15:03.169

Emma Sarro: you wouldn't have if you're given the answer. It might be nice to have the answer given to you, but that drive to act on the answer won't be there. This loss of insights, or the benefit of getting others to generate their own insights, is a major part of our coaching models, for instance. Asking questions to get others to insight

 

47

00:15:03.170 --> 00:15:13.740

Emma Sarro: is incredibly powerful, because they'll act on those insights. One of the biggest differences is when you tell others to do something based on your own insights:

 

48

00:15:13.800 --> 00:15:29.419

Emma Sarro: their likelihood of acting on that is going to be much less, like giving someone advice as opposed to them finding it out on their own. Making your own mistakes and solving that problem on your own is incredibly powerful for the kind of deep learning you get as a result.

 

49

00:15:29.630 --> 00:15:58.950

Emma Sarro: So that's another thing you lose by defaulting to AI. Of course, there are also the inherent biases that are part of everything AI was built on; it has the entirety of all of the human biases. So we need to recognize that and build mitigation strategies to check the answers. That's one of the biggest pieces of working with AI, I would say:

 

50

00:15:59.210 --> 00:16:28.700

Emma Sarro: retaining that checking. I'm going to check my assumptions just like I would with an individual I was working with. If you're collaborating with a human who's giving you ideas, you check the answers you're getting; do the same with AI. All of these biases are there. In the same way, I think it's probably another one of our cognitive biases to generally take

 

51

00:16:28.700 --> 00:16:53.680

Emma Sarro: answers given by technology as maybe more truthful, because they're not coming from a human we can actually see, whereas with a human, we apply all of those biases to that human, and we generally take the answer or not depending on the biases we've applied to them. So I think the recognition that you also need to check all of the answers provided by AI is

 

52

00:16:53.680 --> 00:17:07.449

Emma Sarro: another piece of the puzzle. And one of the things that's coming up a lot is what's happening in the brain over time: how is our brain changing when we default to AI?

 

53

00:17:07.700 --> 00:17:36.179

Emma Sarro: There's some really interesting research coming out now. There's definitely lots of research showing the benefits of using AI in certain ways, let's say in helping us by providing some ideas to jump-start our thinking, things like that. But there are also some really powerful studies looking at the long-term effects of using AI on, let's say, critical thinking. One of the things that's come up, and I'll just drop this

 

54

00:17:36.320 --> 00:18:05.190

Emma Sarro: link in the chat. It's not super new, but it's relatively powerful on the effect on critical thinking: this idea of AI loafing, or offloading cognitive resources to AI. This is a natural reaction humans have: if this is going to take less cognitive resources, I'll do it. That's just the nature of us trying to be more efficient and energy-efficient. The question

 

55

00:18:05.190 --> 00:18:08.929

Emma Sarro: is what happens when we offload too much.

 

56

00:18:08.930 --> 00:18:14.580

Emma Sarro: And what's interesting about this study is essentially

 

57

00:18:14.580 --> 00:18:44.329

Emma Sarro: that critical thinking got worse with use. That's the biggest takeaway. But it differed depending on generation: younger participants exhibited a bit higher dependence on AI tools, which might not be as much of a surprise, and lower critical thinking scores compared to older participants. It spoke to this idea that if we already have the skills of thinking critically in place, these habits of thinking critically in place,

 

58

00:18:44.330 --> 00:19:08.510

Emma Sarro: we might actually be able to buffer ourselves a bit from the effects of offloading, because we already have those skills in place to be self-reflective, to check our assumptions, to evaluate a bit more logically. All of these are skills we develop over time. And we've been talking a lot about critical thinking in general, because it is something that

 

59

00:19:08.510 --> 00:19:32.009

Emma Sarro: organizations are asking for more and more, but this is showing that AI might have kind of a negative effect on it. So with the more we're using AI and being asked to use AI in our workplace, and the fact that organizations are asking for more critical thinking, it seems we do need to build some skills to answer that call and still be able to use AI.

 

60

00:19:32.020 --> 00:19:42.980

Emma Sarro: Another recent study looked at what's going on in the brain. This particular study, and I'll drop this one in, too, I'm giving you all tons of reading material,

 

61

00:19:44.480 --> 00:20:08.759

Emma Sarro: don't feel obligated to open all these things, but just in case you're interested. This was very interesting to me because it looked at brain connectivity. Brain areas are always communicating with each other to help us navigate challenges, and the more they engage, the stronger those networks get, just like any muscle. When we're using our brain,

 

62

00:20:08.760 --> 00:20:30.089

Emma Sarro: we're literally strengthening those connections between brain areas. So the more you're using something, the stronger your ability to do that kind of task over and over again, in some ways like a habit, but also just how you think about certain things and are able to think about certain things. So in this case,

 

63

00:20:30.090 --> 00:20:44.560

Emma Sarro: this group asked individuals to write an essay on their own, brain only; that group was just brain only, without AI. And then another group was

 

64

00:20:44.560 --> 00:21:11.150

Emma Sarro: allowed to use ChatGPT to write the essay. Then, I think, they had to do some tasks on how well they understood what they wrote, and whether they could remember what they wrote or turned in. And obviously, if you're writing your own essay, so the brain-only group was able to remember everything they wrote. They also recorded the brain using EEG, which is surface-level recording, and they were able to see this

 

65

00:21:11.150 --> 00:21:32.600

Emma Sarro: dramatic difference in brain connectivity. The brain was just more active, obviously, when you're writing your own essay, more connected; different areas are talking to each other, much more engaged. And they attribute that engagement to how well the individuals were able to do the cognitive tasks later.

 

66

00:21:32.760 --> 00:21:44.700

Emma Sarro: Whereas if you're just asking AI to do the work, you're not going to be as engaged. You're not really intrinsically invested in it, either. So

 

67

00:21:44.700 --> 00:22:09.689

Emma Sarro: you're just not going to be using those muscles, right? But when they turned it around, what was really interesting was that they switched the order a bit. They had those brain-only people who wrote the essay then use AI to synthesize, and maybe edit, using AI after.

 

68

00:22:09.690 --> 00:22:13.470

Emma Sarro: So they

 

69

00:22:13.470 --> 00:22:38.460

Emma Sarro: allowed the group to collaborate with AI in a way where they were still forced to use their brain for certain aspects of it, or to use AI to derive some ideas and then write their essay based on that. This helped sustain the brain activity. So if we need our brains to be active and engaged to help us think better in other situations, and to use the muscle and build that muscle, because our brains

 

70

00:22:38.460 --> 00:23:03.450

Emma Sarro: are very similar to muscles, then finding a way to use AI as a collaborator, and not defaulting all of our cognitive resources onto it, sustains the brain activity. And you're still going to get those great benefits: the reward, the satisfaction of doing the work yourself, things like that.

 

71

00:23:03.450 --> 00:23:05.519

Emma Sarro: I think that's where this is going:

 

72

00:23:05.540 --> 00:23:18.119

Emma Sarro: now that we have more people using it, there are going to be a lot of labs looking at the impact of long-term use across different kinds of tasks.

 

73

00:23:18.120 --> 00:23:42.420

Emma Sarro: We're going to get a lot more information, I would say in the next year, from the academic literature on what's really going on and how it's impacting the brain. I know at our Summit this year, which is going to be in November, we are planning a few sessions basically on AI and the brain, talking about things like this, and also about how we develop trust and make decisions with AI,

 

74

00:23:42.420 --> 00:23:57.479

Emma Sarro: and even how our biases come into play, like when we're negotiating with a human, for instance; how do you negotiate with AI? We do have some ideas based on earlier studies on making decisions

 

75

00:23:57.480 --> 00:24:14.129

Emma Sarro: with a computer, knowing it's a computer as opposed to a human. We trust it differently; we make different decisions based on whether or not we know we're working with a human or a computer, let's say. All of those things will come back into play.

 

76

00:24:14.220 --> 00:24:44.089

Emma Sarro: And there are also studies that show how we tend to want to talk to a real human for certain kinds of things and collaborate with a computer for other types of things. So there's definitely a lot to think about when we consider the consequences of defaulting. I'll also say, with access to any kind of tool, and this is something I think about a lot, there is

 

77

00:24:44.090 --> 00:25:09.030

Emma Sarro: this natural default to do it the easier way if we have a tool available. We know we're under all sorts of pressure to get all of these deliverables in: we need to write all of these marketing blurbs for events, we need to write these papers, we need to write these articles. And we're under pressure, and we have this tool that's going to produce the thing in about ten seconds.

 

78

00:25:09.280 --> 00:25:26.899

Emma Sarro: We're going to use that. But what are we losing as a consequence, right? And is this the voice, and the outcome, that we want, especially if we know that humans can tell the difference between

 

79

00:25:26.900 --> 00:25:45.140

Emma Sarro: the wording that's coming out of ChatGPT versus a human? We can start to tell that difference. So what are we losing by doing that? But are we also pushing people towards that because we're all overwhelmed? That's just a question that bounces around in my head a lot.

 

80

00:25:45.140 --> 00:25:45.830

Emma Sarro: But

 

81

00:25:45.950 --> 00:25:53.539

Emma Sarro: we think about how we want to create one that still boosts better thinking. So what is

 

82

00:25:53.540 --> 00:26:18.519

Emma Sarro: better thinking? How do we get it? What does the brain need to get into its optimal state, let's say? One of the things that comes up, and this is going to speak to how we use AI better, is: are you working within what's called the zone of proximal development? The zone of proximal development is a concept that comes up a lot when we think about how

 

83

00:26:18.520 --> 00:26:44.149

Emma Sarro: everyone has a different background state of understanding, what they've learned over time, and what networks will pop up when we're talking. If you're a car enthusiast, for instance, you have a greater background understanding of the different styles of cars, and motors and things like that, the terminology, versus someone who doesn't.

 

84

00:26:44.150 --> 00:27:06.780

Emma Sarro: When you're jumping in and asking AI for information, let's say about cars, if you jump outside the zone where you still have some connections, and you're given an answer that's well outside your zone, you're not necessarily going to be learning anything from it, because there are no connections you can make to your earlier knowledge base.

 

85

00:27:06.780 --> 00:27:19.629

Emma Sarro: I think that comes up a lot with AI: we want the answer, but we're maybe not asking in a way that still taps into something we already know. And so,

 

86

00:27:19.690 --> 00:27:44.660

Emma Sarro: are you thinking better? Better thinking, I would say, is thinking that still taps into what you already know. Is it the right level of thinking at the right time? Is it relevant to the context that I already know and to the audience? Is the audience knowledgeable about X, Y, and Z? Are you asking about, and working with, something that still

 

87

00:27:44.660 --> 00:28:08.749

Emma Sarro: taps into that, or are you well outside that range, your zone of proximal development? So that's one of the pieces, I would say, when we think about the best skills for working with AI: understanding where your own understanding sits, and asking within that zone so that you can learn over time. The underlying science and

 

88

00:28:08.750 --> 00:28:33.750

Emma Sarro: theory of the zone of proximal development is that you can learn better if you're always working within your zone and improving over time, as opposed to jumping outside it. That's one of the problems with working with AI: getting to an answer. Sometimes people will ask things of AI that are well outside their zone, and so they don't actually learn. You're getting the answers, and you're defaulting to those,

 

89

00:28:33.750 --> 00:28:52.969

Emma Sarro: and you don't even know enough about the information to ask whether it's right or wrong. You end up defaulting, not seeing where the biases come into play, not checking the assumptions, whether the answer is right or wrong, or the information is right or not.
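One concrete way to stay inside that zone when prompting an AI, a toy sketch of my own (hypothetical, not an NLI tool), is to scaffold the question around what you already know:

```python
# A toy "zone of proximal development" prompt builder (hypothetical sketch).
# The idea: anchor the question to your existing knowledge so the answer
# lands where you can still form connections.

def zpd_prompt(topic: str, known: list[str], goal: str) -> str:
    """Build a prompt that asks the AI to teach from your current knowledge."""
    known_lines = "\n".join(f"- {item}" for item in known)
    return (
        f"I'm trying to understand {topic}. Here's what I already know:\n"
        f"{known_lines}\n"
        f"My goal: {goal}\n"
        "Explain the next step just beyond this, connect it to what I listed, "
        "and ask me one question to check my understanding."
    )

print(zpd_prompt(
    topic="how car engines work",
    known=["a piston moves up and down", "fuel burns in the cylinder"],
    goal="understand what the crankshaft does",
))
```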

 

90

00:28:53.040 --> 00:29:21.630

Emma Sarro: So being able to work within your zone of proximal development, and understanding where your knowledge base is, is probably one of the bigger behaviors that comes into play. So that's one thing. The other thing about great thinking, getting to better thinking, is: is it generating a toward state? Is it intriguing? Is it something that generates curiosity? Is it generating interest?

 

91

00:29:21.630 --> 00:29:44.900

Emma Sarro: One of the things that's inherently human is curiosity. We have a curious mind; we want to understand what's on the other side of that door, for instance. Are you doing something that's intriguing? Solving your own problems

 

92

00:29:45.060 --> 00:29:55.140

Emma Sarro: provides a bit of curiosity and intrigue, and having that in place and being able to ask the questions

 

93

00:29:55.140 --> 00:30:20.090

Emma Sarro: is something that's inherently human. So can you work with an AI in a way where you're asking questions and following your curiosity and your interest? That will provide a bit of that toward state. And there's a reason why we've evolved this reward response for curiosity: it's because that's what allows us to explore

 

94

00:30:20.090 --> 00:30:28.159

Emma Sarro: and to find new things and to innovate. And so we are rewarded by that state of being curious.

 

95

00:30:28.160 --> 00:30:53.140

Emma Sarro: And then linked to that is insight. Better thinking is thinking that has insights in it. So are you able to facilitate your own insights? I think one of the things that AI can rob us of is the insight. I mean, solving problems analytically is also rewarding;

 

96

00:30:53.140 --> 00:31:15.809

Emma Sarro: solving a problem yourself in any way is going to be rewarding. But insight, solving a problem in a novel way, "I've never thought about it this way," or "this is a totally different kind of solution," is incredibly rewarding, and the emotional tag to it is super positive. And you're going to remember that,

 

97

00:31:16.060 --> 00:31:39.669

Emma Sarro: as opposed to an answer provided by something you didn't have any part of, which is going to be satisfying to a degree, because you can check that box and move on, but won't necessarily be intrinsically rewarding. And will it activate or trigger action, which is so highly related to insight? A lot of the time, especially

 

98

00:31:39.750 --> 00:31:59.590

Emma Sarro: if we're using AI in a way to coach us: is it triggering any action in any way? Are we able to apply it and take action? So all of this leads to how we thought about

 

99

00:31:59.590 --> 00:32:24.579

Emma Sarro: what a neurointelligent AI coach has to have. How can it pull from those really great benefits of having a coach walk you through things, and even from just AI providing information? How can we incorporate all of that into one that we build? I mean, we talk to people all the time about neuroscience and applying neuroscience to ideas

 

100

00:32:24.580 --> 00:32:49.569

Emma Sarro: and to their workplace, driving actions. We love being able to apply neuroscience in a way that changes behavior. How do we change behavior? We tap into what humans need, to get themselves to try something, to build a new habit. What are those things that humans need, and how can we add them to an AI, right? One of the things is:

 

101

00:32:49.570 --> 00:33:02.759

Emma Sarro: how can we increase the number of insights and drive action? How can we make sure that potential biases are mitigated in the answers given? And

 

102

00:33:02.950 --> 00:33:26.059

Emma Sarro: so we trained NILES on all of our neuroscience research; it taps into that. First, the framing that all of our neuroscience research frames everything by, and that helps people solve people challenges, is that at their core, a lot of the people challenges we're all trying to solve

 

103

00:33:26.060 --> 00:33:35.289

Emma Sarro: all the time are based on things like capacity, motivation, and bias. This is our CMB model, and

 

104

00:33:35.966 --> 00:33:38.669

Emma Sarro: as leaders, as managers,

 

105

00:33:38.670 --> 00:34:02.770

Emma Sarro: all of our people challenges tend to stem from one of these three, or a combination of them. How are our individuals managing their cognitive capacity? We have a limited cognitive capacity, so overwhelm, the effects of overwhelm, and how much information we can hold at any time relate to our capacity.

 

106

00:34:02.770 --> 00:34:25.450

Emma Sarro: What is motivating us, what drives us, our SCARF model, all of that is really linked to those intrinsic drivers: our need for status, our need for recognition, let's say, our need for a bit of certainty, our need to make choices, and relationships, and

 

107

00:34:25.449 --> 00:34:42.040

Emma Sarro: fairness, are all what drive us. So most of our people challenges are largely based on those aspects and how we're understanding the world around us. And then, finally, our biases, which are

 

108

00:34:42.040 --> 00:35:07.040

Emma Sarro: all based on our experiences. What we perceive of the world, what we perceive of the individuals around us, is based on our experiences. So how people act in their day is going to be based on: do they have the capacity? What are they motivated by? What mental shortcuts are they taking? What

 

109

00:35:07.040 --> 00:35:14.650

Emma Sarro: biases are getting in the way? Those are the models and all of the neuroscience that NILES is trained on. And so

 

110

00:35:14.650 --> 00:35:22.770

Emma Sarro: the answers it gives are based on those three frames, and it helps individuals walk through challenges based on that.

 

111

00:35:22.770 --> 00:35:46.119

Emma Sarro: It also, because this is something that humans need, provides the why behind the answer. The reason people are having this challenge is because of X, Y, and Z: because cognitive capacity is limited, and you really need to prioritize three things at a time, or something like that. Providing the why gives

 

112

00:35:46.120 --> 00:35:57.040

Emma Sarro: individuals a reason to lean in and pay attention, because you get that connection: oh, this is a challenge because of this, and this is why this action works.
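As a toy illustration of that framing (my own sketch, not NILES's actual implementation), you can imagine routing a people challenge to one of the three frames and attaching the why:

```python
# Toy sketch of the capacity/motivation/bias (CMB) framing -- hypothetical,
# not how NILES is actually implemented.

CMB_WHY = {
    "capacity":   "Cognitive capacity is limited, so prioritize a few things at a time.",
    "motivation": "Intrinsic drivers (e.g., the SCARF domains) shape what people engage with.",
    "bias":       "Mental shortcuts built from past experience skew how we perceive others.",
}

def frame_challenge(challenge: str, frame: str) -> str:
    """Pair a people challenge with one CMB frame and the reason it applies."""
    if frame not in CMB_WHY:
        raise ValueError(f"unknown frame: {frame!r}")
    return f"Challenge: {challenge}\nFrame: {frame}\nWhy: {CMB_WHY[frame]}"

print(frame_challenge("My team feels overwhelmed by competing deadlines", "capacity"))
```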

 

113

00:35:57.080 --> 00:35:58.520

Emma Sarro: And

 

114

00:35:58.780 --> 00:36:23.529

Emma Sarro: then the other piece of this is Coach NILES. For anyone who was here last week, I think it was last week, or the week before, we demoed Coach NILES a bit for the group. We'll have it again next week, and if you haven't demoed it, I would highly suggest it.

 

115

00:36:23.530 --> 00:36:48.269

Emma Sarro: If you are interested, we can reach out to you: drop "NILES" and your company name in the chat, and someone will reach out. It's fascinating, and the feedback we've been getting is amazing; it definitely provides you with such an interesting conversation. It applies all of our models of coaching, 20 years of coaching experience, in the way that it works with others, and it specifically

 

116

00:36:48.340 --> 00:37:13.149

Emma Sarro: works to solve problems with the shortest distance between the problem itself and the insight you're having. It applies how you ask questions to get others to insight, as opposed to telling others what to do, and that's such a huge difference, in just a short period of time. And because this is the metric that we use to measure the success of NILES:

 

117

00:37:13.570 --> 00:37:30.669

Emma Sarro: are people having an insight, or a spark of insight, or a new idea, as a result of this conversation, and how likely are they to act on it? That's the metric we're using to measure. We ask people after every conversation: did you have

 

118

00:37:30.670 --> 00:37:55.630

Emma Sarro: a new idea? Did this conversation spark new information or a new idea, and how likely are you to act on it? 75% of the individuals who have gone through this so far have had new thinking or an insight, and they've all rated their intention to act at a 7 or more on a scale of 10. So people are motivated to act by it. And I'll say, from personal experience, the conversations have been

 

119

00:37:55.630 --> 00:38:10.630

Emma Sarro: amazing. I've shared all sorts of team-based, work-based issues and personal issues, and the conversations are fascinating, because it does encourage you to think.

 

120

00:38:10.830 --> 00:38:11.530


 

121

00:38:11.700 --> 00:38:35.326

Emma Sarro: can Niles Don? That's a great question. Can Niles train you? I think it can. I've never tried it, but because it's able to coach you to work with people. Better your team, or whatever. I'm sure it'll be able to kind of help you build the right kinds of questions to be a better coach as well, because it will actually show you through

 

122

00:38:35.690 --> 00:38:59.729

Emma Sarro: the conversation itself, how to drive someone to insight, too. But yeah, it's helped me with my personal challenges as well, and it's definitely forced me to think about things that I wouldn't have thought about. And what's interesting about working with NILES, and this is probably a big discussion for

 

123

00:38:59.790 --> 00:39:25.879

Emma Sarro: maybe a totally separate webinar or podcast, because there's all sorts of research on how we interact with AI knowing that it's not human. But one aspect is working with a coach that has zero stake in the game. It's not even like an executive coach that your company has hired; it is your own personal executive coach that

 

124

00:39:25.880 --> 00:39:33.960

Emma Sarro: can learn everything and know the issues of yours that no one else will know. So you can be completely honest

 

125

00:39:33.960 --> 00:40:02.419

Emma Sarro: with NILES, and it will have zero stake in your challenge, but will provide you with neuroscience-based advice, or get you to insight, and have none of its own stake in it. That's the fascinating thing about it. So I can imagine, and it's not something we've asked individuals, but I can imagine individuals might be a bit more honest with NILES than they would be otherwise.

 

126

00:40:02.670 --> 00:40:03.670

Emma Sarro: Yeah.

 

127

00:40:03.850 --> 00:40:33.329

Emma Sarro: Yeah, and I agree there. It would be a really interesting study to test the connection, because at the same time, when we're talking about collaborating with AI and needing to work with it, that might be a different kind of interaction than when you're trying to get to a solution with an AI coach.

 

128

00:40:34.200 --> 00:40:35.069

Emma Sarro: But

 

129

00:40:35.670 --> 00:40:55.159

Emma Sarro: well, can it hire a good employee fit? Maybe; it's possible. Don's asking some great questions. So anyway, I think, as a whole, what's interesting about working with NILES, and what NILES will be able to do for the executive coaching industry, is

 

130

00:40:55.160 --> 00:41:20.090

Emma Sarro: that we're seeing this as a democratizing of executive coaching. Now, instead of large organizations maybe just having coaching for the top 100 or the top 10%, you can have this kind of coaching for all people managers, right? So you can imagine anyone needing new leadership support or manager support, and

 

131

00:41:20.090 --> 00:41:24.509

Emma Sarro: anyone can have access to this kind of support.

 

132

00:41:24.680 --> 00:41:32.189

Emma Sarro: So the accessibility piece, I think, would be huge. And

 

133

00:41:32.690 --> 00:41:57.689

Emma Sarro: yeah, Andrew just mentioned SCARF. SCARF is absolutely a part of NILES's knowledge base, and it brings up SCARF all the time. One of our future goals for NILES is to incorporate and predict what you'll need to do going into meetings, based on your own SCARF profile versus the SCARF of

 

134

00:41:57.690 --> 00:42:13.670

Emma Sarro: others on your team. How can you approach these conversations knowing that maybe you need a bit more autonomy, and maybe your teammate needs a bit more certainty, for instance? All of those things are going to help predict and help you prepare for certain conversations.

 

135

00:42:13.770 --> 00:42:25.239

Emma Sarro: But just to close up this discussion, I know I've gone off the rails so much today. In thinking about working with

 

136

00:42:26.090 --> 00:42:53.079

Emma Sarro: AI in general, we think about the major skill bases we need to work with AI, just to be able to amplify our thinking and not allow it to dull our ability to think critically, creatively, strategically, all of those things.

 

137

00:42:53.090 --> 00:43:02.760

Emma Sarro: And this is going to get super meta, because we're talking a lot about thinking and thinking about thinking. One of the biggest

 

138

00:43:02.790 --> 00:43:26.470

Emma Sarro: things about working with AI is: how are you thinking? What metacognitive abilities, or skill sets, do you have? There's been a lot of work on metacognition linking it to intelligence, for instance, the ability to think about the type of thinking you need to solve a certain problem.

 

139

00:43:26.470 --> 00:43:51.189

Emma Sarro: What are the right questions to ask? What do I know now, and what don't I know? That huge meta conversation, I think, has to go into working with AI. Individuals have to go into using the tool, just like any other tool, asking: what am I using this tool for? What kinds of thinking do I need to get to? And what kinds of thinking do I need

 

140

00:43:51.190 --> 00:44:14.709

Emma Sarro: to access before even asking AI for this answer? How do I need to solve this problem? Having an understanding of the way I need to approach this problem metacognitively, getting as meta as possible before using it, forces your brain to understand the goal: where am I going with this? What

 

141

00:44:14.730 --> 00:44:40.749

Emma Sarro: kinds of answers do I need to get out of this? That's how you regain some control over the process. Now, besides the metacognitive piece, there's the flexibility piece, and I would say that's another huge part of working with AI: the need to amplify your cognitive flexibility. Really interesting, I just saw this

 

142

00:44:42.460 --> 00:44:59.490

Emma Sarro: come up. I get the neuroscience news on repeat on LinkedIn, so I see all these new studies come out, and this new one just popped up: this idea of flexibility and agility, and its strong link to intelligence, especially in

 

143

00:44:59.490 --> 00:45:16.130

Emma Sarro: a highly distracting world, I should say. It's the ability to be incredibly flexible in your thinking and agile in your thinking, and I think it also comes up when working with AI:

 

144

00:45:16.130 --> 00:45:41.119

Emma Sarro: needing to be aware of the diverse perspectives, and aware that there might be different options other than just the one AI is providing you, and making sure the cognitive flexibility is there. A really interesting article, or study. Remaining open-minded, remaining flexible, is another huge piece of working with AI,

 

145

00:45:41.120 --> 00:46:05.790

Emma Sarro: and then, obviously, also challenging it. Imagine that AI is just like any of your other collaborators: it is fallible, it is full of bias. So challenge the answer just like you would with anyone else. Test the assumptions, and make sure you have mitigation strategies in place to,

 

146

00:46:05.880 --> 00:46:27.579

Emma Sarro: well, test it. So that's my summary of the big things we need to learn to really work with AI in a way where we benefit from it, and our brains benefit from it, because we should think about our brains as something we need to continue

 

147

00:46:27.580 --> 00:46:38.690

Emma Sarro: using, like any other muscle. It's a fascinating thing, and it also makes sense that when you have a tool that's

 

148

00:46:38.690 --> 00:47:03.099

Emma Sarro: as able as AI is, we're going to default a lot of our resources to it, but we don't necessarily want to. Such an interesting debate and discussion. I know we have a poll that we want to drop in, because there are a lot of things we would love to help you with; NILES is just one of them. I know we talk about individual education;

 

149

00:47:03.100 --> 00:47:20.149

Emma Sarro: that's the CFN, and we have a coaching program. All of those are our individual education offerings. You can get access to all of the coaching models that we've also put into NILES in there as well, and then we can also

 

150

00:47:20.150 --> 00:47:45.150

Emma Sarro: come and talk to you about all of the neuroscience and research briefings, like flexibility and agility and critical thinking. We're doing a lot more meta stuff, a lot of thinking about thinking: the difference between thinking creatively and strategically, and agile thinking. So, yeah, I hope this was interesting. I know that next week David will be back, and he'll

 

151

00:47:45.150 --> 00:48:10.150

Emma Sarro: be here with Rachel, and they will be talking NILES again, but really NILES in an organizational setting. So I hope you all enjoyed this. And for anyone who is thinking about November, November is when we have our Summit, where we'll be talking about all of these things. NILES will be a major part of it, but we'll also have discussions on performance management, and discussions on

 

152

00:48:10.150 --> 00:48:30.769

Emma Sarro: leadership, the future of leadership, and cultural transformations, all of those things, all of our models. I hope you all can join us, so just leave that penciled into your calendar. I hope you all have a wonderful weekend, and I will hopefully see you all here next week at the same time.

 

153

00:48:31.010 --> 00:48:32.100

Emma Sarro: Thanks all.