AI agents are continuously evolving in ways that increase their potential to support our work. Not only can they synthesize long meetings and moments later provide you with key insights, ideas, and actions, they can also write your emails, plan an upcoming conversation, and make difficult decisions for you. However, does AI actually improve your ability to do these things, and is the output better than what a human could do alone? For example, even though you have the meeting notes, will you actually understand what was discussed and be able to make good decisions from the summary? Will it help you identify the right path forward and give you the motivation to take steps that might be challenging? Research shows that if we rely on AI too often or use it in the wrong ways, it could actually dull our thinking. Join Drs. David Rock and Emma Sarro as they discuss what “better” thinking means from the perspective of the brain and how AI can be used to get there. Learn the key behaviors that can turn AI from merely a tool into a collaborator.
WEBVTT
1
00:00:03.220 --> 00:00:05.180
Emma Sarro: Alright! Welcome.
2
00:00:05.930 --> 00:00:09.180
Emma Sarro: See everyone joining Happy Friday.
3
00:00:10.580 --> 00:00:17.840
Emma Sarro: Alright! Well, as everyone is joining, please drop in the chat where you're coming in from today. You know the deal.
4
00:00:19.050 --> 00:00:24.010
Emma Sarro: Nice, Dallas Joe, you're the 1st one. Very nice.
5
00:00:25.960 --> 00:00:46.890
Emma Sarro: All right, so welcome all to another week of Your Brain at Work Live. I'm your host today, Dr. Emma Sarro. I am the Senior Director of Research here at NLI. Been here for almost 4 years; I'll let you know when I get to my 4-year mark. We're happy to have our regulars back, and welcome to all newcomers. We're excited to have you here with us for the first time today.
6
00:00:46.890 --> 00:00:56.210
Emma Sarro: So today's episode will be diving into the world of thought, and how just the process of thinking and how we engage in our own thinking has
7
00:00:56.210 --> 00:01:00.439
Emma Sarro: been revolutionized by adding our new thought partner, which is AI.
8
00:01:01.000 --> 00:01:23.790
Emma Sarro: So we suggest, in order to really fully get everything out of this conversation: please move your phone away, take your Garmin or your Apple Watch off, and maybe close any windows that are in the way, so that you can really completely focus on the conversation. It'll be a good one. So, you know, myself, I'll be moderating the episode today. I got my PhD from NYU and head the research team here at NLI.
9
00:01:23.790 --> 00:01:32.319
Emma Sarro: I'm excited to welcome in our guest. You all know him very well, hopefully. He coined the term neuroleadership when he co-founded NLI over 2 decades ago.
10
00:01:32.320 --> 00:01:46.719
Emma Sarro: He's got a professional doctorate, 4 successful books under his name, one on the way, and a multitude of bylines ranging from the Harvard Business Review to the New York Times, and many more. Welcome in, David. How are you doing?
11
00:01:46.720 --> 00:01:50.000
David Rock: Thanks, Emma. Good to be here with you. Happy Friday. Hi! Everyone.
12
00:01:50.000 --> 00:01:50.793
Emma Sarro: Happy Friday.
13
00:01:51.230 --> 00:02:18.070
Emma Sarro: So I hope you're all ready for a very meta conversation today. We're talking and thinking about thinking, the quality of our thinking, and really, how does AI fit into this? And as you, I'm sure, are all very aware, the conversation goes both ways. So there's plenty of research showing how much AI can help our thinking, increasing its quality and creativity in different ways.
14
00:02:18.070 --> 00:02:39.019
Emma Sarro: At the same time, there's plenty of evidence suggesting that it can also dull our thinking in some ways. So maybe a bit of both. But maybe it's really, how do we use it? And really figuring out the best ways to use it. So really, the stuff of thought. That's the conversation and the title. And I think this came from an experience that you had, David. So why don't you tell us about it?
15
00:02:39.020 --> 00:03:04.830
David Rock: Yeah, the title came to me in this sort of big aha moment I had in January. I was at Davos, in Switzerland, at the World Economic Forum. And there was a really interesting group that formed. There's a lot of interesting groups; it's such a fascinating event. People think it's about presidents and economists. It's really about change agents and kind of impact investors and impact entrepreneurs and
16
00:03:04.830 --> 00:03:17.260
David Rock: people trying to do big things. And I was in this really interesting digital community I was invited into that was looking at AI, actually, and was looking at kind of how to think about AI, so that
17
00:03:17.310 --> 00:03:33.869
David Rock: we take into account kind of ethics and values, and you know, do it the right way. And the list was phenomenal of people on the call right? There was like really really critical central people in the AI universe. So I was, you know. I was excited to join the call. And
18
00:03:34.280 --> 00:04:00.680
David Rock: and I, you know, logged in, joined the call, and I saw there were, like, you know, 8 people there. I was like, awesome, I can't wait to have this conversation, but I mostly kind of want to listen and sort of see where things are at. And, you know, the call started. And basically, there were 2 people and 6 AI agents who were recording what was happening, right? And so basically, everyone had sent their, you know, summary tool into the meeting.
19
00:04:00.780 --> 00:04:30.259
David Rock: So summary tools are kind of early agents, really early AI agents. They're doing a task for us. There's lots of kinds of AI agents, but lots of us are using those; even Zoom has one, and all this. So essentially, in this meeting we're supposed to have this really amazing conversation, and there was just no one to have the conversation with, and everyone hoping AI could summarize what happened. And we ended up having a bit of a conversation, but it was really clear that it wasn't all that useful. And the most interesting thing, though,
20
00:04:30.300 --> 00:04:37.100
David Rock: that came out of it, even though it's pretty interesting to see that the most interesting thing was actually reviewing everyone's summaries
21
00:04:37.450 --> 00:04:58.849
David Rock: and in reviewing everyone's summaries I was like, this was a completely different meeting, like, depending on which AI did the summary. They literally structured the summary completely differently, with different sort of main outcomes, different themes, different kind of architecture of the ideas. It was like they were in completely different meetings. And, I mean, that's the way humans process meetings. But you don't see it
22
00:04:58.850 --> 00:05:11.530
David Rock: as clearly. I've always said, you know, if you're in a meeting with an accountant, a lawyer, an engineer, a marketing person, and a salesperson you're in like you're actually in completely different universes, even though you're in the same meeting. You're actually hearing different information.
23
00:05:11.620 --> 00:05:15.699
David Rock: But it turns out that's even the case with AI. So I was sort of left
24
00:05:15.810 --> 00:05:18.129
David Rock: thinking, like, you know.
25
00:05:19.120 --> 00:05:26.060
David Rock: that for a conversation between people to be useful, there's a certain, like, minimum viable product. And you can say
26
00:05:26.290 --> 00:05:51.360
David Rock: it's like a number of people have to be there. That's kind of obvious, right? There's certain kind of quorum. But it's more than that. There's a certain amount of attention that needs to be paid for it actually to be useful. And it just got me thinking what is like the stuff of thought, like what what is good thinking in a meeting context that actually makes it useful, being there. That was sort of where the idea came from. And then we've been sort of iterating on that since.
27
00:05:51.700 --> 00:06:19.409
Emma Sarro: Oh, it's so interesting! I mean, you're bringing up so much about just social interaction. And, you know, when we talk about this in other contexts: who do you place on a team? How do you make sure you get the right perspectives together? And if you're pulling in perspectives, do you also make sure you pull in the right agents? And in the agent summary, you know, what is that pulling from as well? Which is really speaking to how AI is synthesizing information.
28
00:06:19.610 --> 00:06:40.260
David Rock: Well, let's focus on the meeting itself first, and then we can talk about kind of what this means about what AI really is, right? But certainly AI agents like that. But the first thing that we started to wonder about is, like, what happens when we are all there, and we do have a good conversation? Like, what is the stuff of thought? What is that useful stuff?
29
00:06:40.370 --> 00:07:07.810
David Rock: And it became clear that a critical variable is, firstly, attention, like, the people are paying close attention. You think about how much attention you pay, you know, right now in this session versus if you're listening to a recording, right? Anyone on the line, like, you know, you're focused a lot more. You might be doing some multitasking, shame on you, but you'll still be paying more attention than if this was, like, a recording later. And this is a function of a number of things. There's the positive social pressure.
30
00:07:08.150 --> 00:07:23.110
David Rock: If there was an opportunity that you thought you might be called on at any moment to even comment in the chat, you'd pay even more attention. But there's a synchrony that happens. There's a neural synchrony that happens when you're focusing on a similar idea with more people
31
00:07:23.280 --> 00:07:49.019
David Rock: that actually enriches attention. And so what does all that do. You're focusing attention. You're activating circuits in the brain. You're activating circuits really across the whole brain. But it's prefrontal for working memory. It's visual, the occipital lobe language centers lots of things, and the stronger that network is the more you get what's called spreading activation, right? So spreading activation is essentially the stuff of thought
32
00:07:49.020 --> 00:08:09.200
David Rock: Spreading activation is, you go: I'm supposed to travel at the moment, where am I going again? Oh, that's right, Africa. Oh, that's right, I need to get my shots. Right? It's like implications, implications, right? You start with one thing. Now, the stronger that one thing is, right, the easier it is to connect to the next thing and connect to the next thing.
33
00:08:09.210 --> 00:08:30.099
David Rock: So spreading activation is literally, like, you know, making connections to either past memories or current information or just possibilities. Right? And it requires attention. It's helped by a quiet brain, and it's helped by synchrony of lots of people thinking the same things right? So you don't really get that listening to a recording.
34
00:08:30.360 --> 00:08:39.829
David Rock: And you don't really get that unless you're really, you know, paying close attention. So the value of a human to human interaction
35
00:08:40.030 --> 00:08:43.079
David Rock: is actually this spreading activation of
36
00:08:43.110 --> 00:09:11.569
David Rock: seeing things. Now, sometimes those result in insights. That's great. That's a big thing, you see, right? Very energizing. But sometimes it's just implications, applications, you know, errors in thinking. But essentially, you need to extrapolate ideas, pull them apart, see them from other angles, and it doesn't happen listening to a recording. And it doesn't happen reviewing an AI summary. And so all of this said to me, like, okay, just even in the world of AI summaries, we need some kind of
37
00:09:11.660 --> 00:09:25.109
David Rock: 1st principles to kind of, you know, work within starting with. You know how many people have to be at the meeting through to how much attention you have to be paying this kind of thing. So that was, that was sort of the 1st kind of insight that came from that.
38
00:09:25.480 --> 00:09:50.480
Emma Sarro: Yeah, it's so interesting. And I think that makes a lot of sense. And it really highlights how we've evolved to be social, right? So the social brain engages, and the more of it that engages, the easier we are able to learn about what's happening, to activate those other networks that might be stimulated by some different idea. And someone asked in the chat about virtual versus in person.
39
00:09:50.480 --> 00:10:01.959
Emma Sarro: And you still have that same kind of activation. Because you are so used to interacting with another human, you can pick up their social cues. You're paying attention to them. So again, it goes back to that attention.
40
00:10:01.960 --> 00:10:02.310
David Rock: Yeah.
41
00:10:02.310 --> 00:10:03.399
Emma Sarro: Are you focused on them?
42
00:10:03.400 --> 00:10:29.419
David Rock: Yeah, no, it's interesting. I mean, if you're on camera, right, you can see each other clearly, and you're not being distracted by something else like no one's being distracted by something else. It's just as good, maybe better, because you can see everyone's face really clearly. Right. Turn off self view, hide self view, you'll have even more attention. It's even better. In fact, I'm going to hide mine right now. That's so much better already, so I don't see myself now. But virtual is fantastic. As long as everyone's on camera, no one's multitasking
43
00:10:29.420 --> 00:10:36.249
David Rock: people are really present. Actually, in a room together, people multitask more. They just, like, quietly look down at their phone and other stuff.
44
00:10:36.330 --> 00:10:52.719
David Rock: But you're sort of more focused in some ways on platforms. So that's an interesting point. But you want that intensity of attention, and it comes from feeling like you're observing things with lots of other people. That's part of it. It's a neural synchrony
45
00:10:52.720 --> 00:11:17.709
David Rock: that actually enriches your attention circuits because you feel like you're in an experience with others. You get a similar thing from like dancing with people together from, you know, being in a meeting together, watching a concert together like any of these different things, you get this neural synchrony of being with other humans doing similar brain things that actually helps you focus attention. So that's part of it. If there's positive social pressure of feeling like there's a test later, or you could be called on.
46
00:11:17.710 --> 00:11:31.520
David Rock: That'll also help. But all those things enrich attention and attention enriches spreading activation. So that's basically kind of the foundation of the stuff of thought. But the second thing that came out of this that I want to dig into just a little bit was
47
00:11:31.580 --> 00:11:48.830
David Rock: this sort of other insight about looking at all these different AI summaries. And it was quite terrifying. I really encourage you to do it: when you're in a meeting, get multiple different kinds of AIs to summarize the meeting, and you'll be like, wow, we've got to be really careful, like, believing that AI is, like,
48
00:11:48.890 --> 00:12:02.869
David Rock: the, you know, summarizing accurately. You know, it's summarizing one perspective. And it sort of came to me that the current AI tools, current gen AI tools, right? AI is a lot of things. Current gen AI tools, whether it's ChatGPT or Claude or
49
00:12:02.910 --> 00:12:20.130
David Rock: Grok, whatever. These kinds of tools, they're basically like a very, very, very smart savant friend, a good friend, who'll do kind of whatever you want, but with every fallibility that humans have. In fact, they have all of the fallibilities that humans have, all put together,
50
00:12:20.250 --> 00:12:25.889
David Rock: because, in fact, what these tools have done is literally digest all of human communication.
51
00:12:26.040 --> 00:12:46.429
David Rock: and so they are as fallible as humans. They are as biased. They are, you know, as mistaken. They are just as human on many levels as humans. They just happen to be able to go much faster, processing a lot of information. So it's like a very savant friend, very good at kind of particular kinds of processing.
52
00:12:46.440 --> 00:13:08.620
David Rock: But you've got to remember that they have just as many biases as everyone else and everything else, in fact, probably more. So that's a really important insight when you're looking at these different tools. So something that really matters: don't ask one AI. Ask 5 AIs and see what really comes out, to start to get a sense of what might be the sort of truth there. It's kind of an interesting insight in itself.
53
00:13:08.620 --> 00:13:34.859
Emma Sarro: Yeah, I love that analogy because it really helps you to reframe. When we pull information from AI, we tend to just kind of default to it. And we've seen the evidence that when people use AI too often, they tend to offload those cognitive resources. So just like with anyone else that you're working with, where you challenge their thinking, we can challenge AI's results as well,
54
00:13:34.860 --> 00:13:40.249
Emma Sarro: challenging it by asking multiple AIs for multiple perspectives. I think that's important.
55
00:13:40.250 --> 00:13:46.539
David Rock: It's a really important thing to remember that AI has the fallibility of all humans put together,
56
00:13:46.670 --> 00:14:00.219
David Rock: and humans are unbelievably fallible. Like, some of my favorite books, some of the most important books you'll ever read, are, you know, books about how wrong we are. I've forgotten all the titles; they'll come to me in a minute. But, you know,
57
00:14:00.330 --> 00:14:21.889
David Rock: there's a whole bunch of books about just how wrong we are. Mistakes Were Made (But Not by Me) is one of my favorites. Mistakes Were Made (But Not by Me), a great book. And there's, like, a dozen books like this on kind of the way we make mistakes. There's one called How We Decide, by Bob Burton. Fascinating book, showing how random... oh, no, sorry, not How We Decide. It's called On Being Certain.
58
00:14:22.060 --> 00:14:45.310
David Rock: It's called On Being Certain. And it explains the physiology and the neuroscience and the psychology of just what certainty is, what a feeling of being right is, and how it has nothing to do with whether we're actually right. It's just a feeling. And all of this, right? So really, really important to remember just how fallible humans are, and that AI is just as fallible. Let's take some of the questions; there are some really good questions coming in,
59
00:14:45.310 --> 00:15:02.610
David Rock: and some comments that we might answer. So I love Maura's comment that even knowing something will be recorded will make you pay less attention, even though you're not going to watch or listen to the recording. Right? So interesting. It's like an out, somehow, that decreases attention. Taking notes
60
00:15:03.130 --> 00:15:06.029
David Rock: can be fantastic if it doesn't mentally take you out
61
00:15:06.250 --> 00:15:16.589
David Rock: On paper is much better than digital, because you're actually having to work harder to make that note, and you're increasing the attention density of the circuit. So, like,
62
00:15:16.590 --> 00:15:36.579
David Rock: spreading activation is stronger if you write something than if you think something spreading activation is stronger if you speak to people about something than if you write something right? So it's like thinking, writing, speaking right? Speaking about something makes the spreading activation happen much more even than writing. But writing is better than just listening.
63
00:15:36.670 --> 00:15:38.630
David Rock: So that's an interesting thought.
64
00:15:38.630 --> 00:15:49.620
Emma Sarro: Along the same lines of teaching. So if you're teaching something, you have to form that thought, and you have to form the thought in a different way. So you're looking at all those sides. So that's why, when we teach something, we actually learn it better.
65
00:15:49.620 --> 00:16:00.069
David Rock: Yeah, well, you're imagining other people processing it, which makes you see it from multiple perspectives, which is really interesting. So, excellent. Any other question you want to cover there, or should we keep going?
66
00:16:00.070 --> 00:16:17.059
Emma Sarro: Can we talk a bit about the value of participating in a meeting versus not? So, like, you know, asking a question in that meeting live versus not. I guess it's the interaction piece that pulls your attention in.
67
00:16:17.060 --> 00:16:33.440
David Rock: Yeah, I mean, look, what happens is the brain's default network, literally the network that's kind of like the hum, like the neutral in a car, the thing in the brain that's always on, is the social network of the brain. And another way to think about that is that
68
00:16:33.440 --> 00:16:59.169
David Rock: the brain's natural state of thinking is people interacting with each other. Right? So basically, that's the easiest way to like slide something into your brain in a way right? If you want to slide something into your brain, put it in the form of people interacting with each other. So in other words, when you're in a meeting, live. And you're seeing people interact in stories your brain encodes. Who was there? What the dominance was, how people were treating each other like all this social information encodes with the meeting.
69
00:16:59.340 --> 00:17:23.639
David Rock: Another way of saying more spreading activation. Right? So social interactions activate a lot more of the brain than interactions that just feel like data and stories like after the fact. So it's another way to sort of say this, say the same thing. But let's go on to the second question, I think that I just want to sort of share, like where this idea came from, that we need to actually be respecting the sort of minimum viable product of interacting and thinking together
70
00:17:23.640 --> 00:17:31.059
David Rock: and not just assuming, because, you know, a few people were there and everyone else heard the recording that it's doing anything like it might be doing nothing at all.
71
00:17:31.450 --> 00:17:42.920
Emma Sarro: Yeah, absolutely. Well. So another thing you've mentioned in the past is just that AI is going to be this interface breakthrough. But can you explain what you mean by that, as it's evolving.
72
00:17:42.920 --> 00:17:53.579
David Rock: It's an important point, and I think this will give you some foresight if you're interested in just understanding where it's going and how to sort of see the future a little bit.
73
00:17:53.690 --> 00:18:22.949
David Rock: And this insight came to me a few years ago, and it's sort of coming true now, and it came from just understanding the limits of human working memory. Right? So human attention and working memory are quite limited. We can only focus our attention on one thing at a time. We can hold in mind, like, 3 items in any set without a huge amount of effort; 4 or 5 takes a lot of attention; 6 or 7 is pretty impossible, right? So those are the challenges of human working memory and human attention.
74
00:18:23.150 --> 00:18:43.670
David Rock: Very similar: attention requires working memory, working memory requires attention. The challenges of that, we think of them as capacity issues, and human capacity issues are the reason we invented AI, right? We kind of invented AI because we can't hold much information. And if you look at the history of computing,
75
00:18:43.920 --> 00:19:13.110
David Rock: there have been sort of 3 breakthroughs to track. If you could graph this, it would be really interesting. The first breakthrough is, like, basically how much less capacity you need to get from wanting to do something to doing it using technology. Right? So think about the very, very first computers: punch cards, literally holes in cards. Right? The attention you need to produce that card and then do that process, it was like a week of work.
76
00:19:13.110 --> 00:19:23.929
David Rock: It's like a week of attention. So so a lot of attention. Not many uses for that Punch card process, like maybe dozens or hundreds of uses of that right and not many people doing it.
77
00:19:24.000 --> 00:19:51.709
David Rock: So you've got very high attention, very low uses, and very low usage. Right? Now, what happened is we went from punch cards to, like, machine coding. You went from a week to, like, days of processing time. Right? You had a lot more uses for technology and a lot more usage by people, right? And then from machine coding you went to WYSIWYG, bit of a jump. But you went to basic language first, which was better than machine coding, writing in words.
78
00:19:51.750 --> 00:20:03.259
David Rock: Then you went to WYSIWYG, which was the first Macs and Windows and, you know, all this. So: what you see is what you get. That required way less cognitive load. Suddenly you went from, like, a week
79
00:20:03.520 --> 00:20:11.910
David Rock: to, you know, days, to moments of, like, wanting to open a file and, you know, edit a document and send it,
80
00:20:12.220 --> 00:20:24.769
David Rock: right? It was like minutes now; it used to be literally, you know, a week. So the attention was a lot less. Now the uses expanded: a breakthrough in the number of uses and a breakthrough in the amount of usage.
81
00:20:25.110 --> 00:20:51.419
David Rock: But from WYSIWYG to touchscreen is actually a big jump. So the touchscreen, the iPhone kind of started that off. From, like, WYSIWYG with a mouse and keys to touchscreen anywhere was a big jump and a big decrease in the amount of capacity you needed. Right? So now you could, like, literally, you know, organize a car, like an Uber, with, like, 2 clicks, right? Or order something on Amazon with 2 or 3 clicks. It's like seconds,
82
00:20:51.600 --> 00:21:10.179
David Rock: right? It went from minutes to seconds. So how much attention you needed plummeted, right? And suddenly, of course, the number of uses, you know, went crazy, and the number of people went crazy. Now it's like 7 billion or something, right, that have a touchscreen. So this is interesting. And so you're like, well, what's next? Well,
83
00:21:10.460 --> 00:21:20.579
David Rock: if you remember that every one of these breakthroughs has been about requiring less and less effort from our limited prefrontal cortex, right? And the reason for that is prefrontal effort
84
00:21:20.670 --> 00:21:39.889
David Rock: is tagged in the brain like a threat. So, like, having to do something, having to pay attention, in the brain is a threat response. Right? It's like, oh, that's effort. Right? As Daniel Kahneman said, you know, thinking is aversive. Right? So basically, you make a huge amount of money when you help people not have to think.
85
00:21:39.910 --> 00:22:01.350
David Rock: Last century it was all labor-saving devices; all these labor-saving devices, you know, built the world's economy. This century, it's been mental labor-saving devices, right, building the world's economy. Every new app, every big idea, is basically shaving mental effort. We're going from 10 seconds to 2 seconds of mental effort to do something, and suddenly it's a huge breakthrough. Right? So
86
00:22:01.650 --> 00:22:20.779
David Rock: so you think about this, and you're like, okay, well, what's the next frontier? We can go from, like, needing something to having it in a couple of presses. We must be at the limit. No, the next thing is these agents actually thinking ahead for you. Right? So this next step is, like, hey, every other day at 5 o'clock you seem to order an Uber; I'm going to order the Uber for you
87
00:22:20.780 --> 00:22:34.730
David Rock: and have it there ready, because I've noticed the pattern right. And I'm going to also make sure that foods at home that you seem to order. And I'm gonna see the pattern I'm going to vary the food for you. And I'm gonna you know, make sure that that's there. So it's actually noticing patterns
88
00:22:34.730 --> 00:22:42.759
David Rock: and thinking ahead for you. That's where we're going with this. And the reason that that will be successful is that because everyone wants to pay to have to think less.
89
00:22:43.070 --> 00:23:11.219
David Rock: And it's not a bad thing to do, because when you save that thinking on routine tasks, like, if you don't use your brain to schedule meetings for 3 hours, you've got a lot more brain space to think deeply about stuff 5 years from now, right? But if you spend your brain on all these hundred decisions about scheduling, if you don't have an assistant, it's very hard to think about 5 years out, right? So anyway,
90
00:23:11.350 --> 00:23:16.670
David Rock: it's not a bad thing to cut out kind of pointless thinking. So where we're going with AI is
91
00:23:16.970 --> 00:23:19.929
David Rock: predicting and then taking actions for us?
92
00:23:20.280 --> 00:23:26.040
David Rock: Right? So you imagine it won't be long before there are robots in every home. That's like a couple of years away in many, many homes.
93
00:23:26.180 --> 00:23:54.069
David Rock: It's going to be noticing how you like your coffee. It's going to be noticing how you like your house. It's going to be noticing how you like your laundry. It's going to be predicting that stuff. That stuff's going to be delicious, right? Oh my God, I don't have to pair socks for the rest of my life. Amazing, right? Like, all that cognitive load. I have a belt I'm hanging onto by a thread. It's got to be thrown away, but it's this one belt with this particular buckle I can't find anywhere in the world, and it just literally tightens and loosens with a little clip. It's like,
94
00:23:54.070 --> 00:24:14.270
David Rock: it's like 5 seconds less attention every time I use it than any other belt, and yet I can't get rid of it, because that little bit of attention I notice it right? So we're going to get very addicted to all these tools that help us not have to think. And there's going to be huge industries built out of that. But it's going to essentially be about
95
00:24:14.300 --> 00:24:29.339
David Rock: the limits of attention and the limits of working memory. And that's what the next AIs are really solving for. I think that's where agentic AI is going. So if you understand the brain, you can kind of understand why I see this coming.
96
00:24:29.670 --> 00:24:58.880
Emma Sarro: Yeah, and we'll easily offload these to AI. That's what we naturally do if we can find something seamless and not think, just like, when you hear us talking about our biases, that is the result of this kind of evolution, as we default to things. If we can default to AI, that's great. But then some of this has come up in the chat a bit. Yeah, Kath, thank you. Where are the perils, right? It's almost like you're thinking with us, Kath.
97
00:24:58.950 --> 00:25:05.480
Emma Sarro: What are the downsides of offloading? Because we will offload onto AI.
98
00:25:05.820 --> 00:25:07.240
Emma Sarro: Where are the downsides of this.
99
00:25:07.240 --> 00:25:09.710
David Rock: Yeah, yeah, I think there was an ad
100
00:25:09.760 --> 00:25:21.239
David Rock: somewhere. I can't remember where it is. I think maybe somewhere in the Nordics there's an ad about a guy who gets locked out of his house. He, you know, maybe buys glasses or something, and, like, you know,
101
00:25:21.260 --> 00:25:43.090
David Rock: the AI won't recognize him and won't let him into the house, and he's out in the rain, and his whole life's over. So there are going to be weird risks like that, that are, you know, maybe just kind of funny, and will happen now and then. But that's not, I think, the major risk. I think there'll be not many risks from
102
00:25:43.430 --> 00:26:03.949
David Rock: you know, from fundamentally, like, your car knowing it's you and the door opening for you, except that you want to have a manual override in an emergency, right? So we've got to remember those manual overrides, really, really important. I can't get into my car at the moment. It's got an electric key, neither fob will work, and there's literally no way to get into it. I'm like,
103
00:26:03.980 --> 00:26:18.209
David Rock: whose idea was that? So we need the manual overrides. But otherwise we won't miss having to get a key out of our pocket and put it in there. The challenges are more
104
00:26:18.590 --> 00:26:39.629
David Rock: when we start to get to slightly higher-order thinking, and AI is essentially writing out messages for us, solving our problems for us. I think those are some of the challenges, and we've already seen, I'm sure most people on the line have probably already seen, that research starting to come out
105
00:26:39.840 --> 00:26:59.619
David Rock: that there is definitely a downside to offloading. I've been really surprised how quickly we've seen the evidence of this, and how many friends and colleagues I've had who've said, Yeah, I'm definitely getting cognitively lazier. Do you want to speak to that, Emma? What have you seen in the research, and kind of what do you think that's about?
106
00:26:59.620 --> 00:27:24.589
Emma Sarro: Yeah, yeah, there's definitely a pretty big study, and there have been other people talking about this too, that has really shown that one of the downsides of using AI too much, by cognitively offloading some of our critical thinking activities onto it, is that we've gotten worse at critical thinking. And there are some experts in the critical thinking field, and this is specifically around this kind of thinking, who
107
00:27:24.590 --> 00:27:49.590
Emma Sarro: have really focused on this as a very human skill to have. And if we offload it onto other devices like AI, then we're losing the ability to do something very human. So there has to be a way for us to separate and use AI in the ways that it can help us, but not let it take away some of the things that are inherently human. So, creativity. We've talked about creativity too.
108
00:27:49.590 --> 00:28:17.669
Emma Sarro: At our last summit, we had a big discussion around creativity, and that is also something that is inherently human. And while AI can be used to help some of our creative thinking, it doesn't necessarily make us more innovative or more creative. We have to use it in the right way and within the zone of where we currently sit. We can't necessarily expect it to jump us up to a higher creativity zone
109
00:28:17.760 --> 00:28:21.280
Emma Sarro: than we already start at, so there are best ways to use it.
110
00:28:21.340 --> 00:28:25.509
David Rock: I think what's really important is to start
111
00:28:25.670 --> 00:28:35.989
David Rock: with a foundational kind of principle. I'm just having an insight, you know, real time here: you've got to start with a foundational principle that the AIs are very fallible,
112
00:28:36.160 --> 00:28:47.699
David Rock: and just one AI is just not a great answer, Meg. So AIs have all the fallibility of humans, and, secondly, humans are very, very fallible.
113
00:28:47.880 --> 00:29:12.579
David Rock: We're very fallible because of limited prefrontal resources, right? Limited self-regulation, limited social cognition. So we can't think about much, we can't manage our emotions very well, and we can't understand other people very well. Those are the 3 sort of banes of our life. So if you come from the place that the AIs are fallible and humans are really fallible, then you start to think about how to use AI a little bit differently,
114
00:29:12.610 --> 00:29:29.590
David Rock: right? And you want to think about the right way to use AI. There was a piece in Fast Company just recently about this. You want to use AI to, obviously, help you lift up to more strategic thinking, more higher-level thinking. That's one thing.
115
00:29:29.670 --> 00:29:36.789
David Rock: But I think it's even more important that you use AI to actually make you smarter, like to actually improve your thinking.
116
00:29:36.940 --> 00:29:54.109
David Rock: And to do that, you're going to have to have a lot of trust in the AI itself, because you're basically going to have to let AI give you what we call nudges to make you smarter. And I think AI can actually make you smarter,
117
00:29:55.420 --> 00:30:05.609
David Rock: and I think what's really important is that we need to respect that smart comes in part from you having insights,
118
00:30:05.760 --> 00:30:34.520
David Rock: right? Not AI having insights. Someone just put that in the chat. You know, yes, AI can have insights. AI won't be energized by those insights, though. It won't be motivated by them, although we could teach it to pretend to be motivated by them, and probably will. But you need the motivation of insights, and insights literally make you smarter, because the insight gets generalized. It sticks in the brain, like when you have an insight like, Oh, wow, I really do need to,
119
00:30:34.550 --> 00:30:46.050
David Rock: you know, I really do need to care about my team's motivations, I understand them. It makes you smarter, right? So AI can actually give us more insights. But also, it's not just about insight.
120
00:30:46.450 --> 00:31:03.600
David Rock: It can generally make you smarter by noticing patterns. And we're actually playing with this with our AI, called Niles. We've started deeply conceptualizing 2 additional modules. It's already live now. Niles is coaching, and it's now doing real-time voice coaching. It's crazy powerful,
121
00:31:03.600 --> 00:31:25.960
David Rock: and it's blowing my mind. So Niles is doing incredible real-time coaching now, and we're actually testing it against other coaches and seeing the difference in speed to insight, and not just any insight, but insight where people will say, Oh, I'm definitely going to do something differently that I wouldn't have done before. So we're checking the percentage of conversations that result in an insight in 5 minutes and 10 minutes
122
00:31:25.960 --> 00:31:40.789
David Rock: of Niles. 5 minutes is probably the more important one, and we believe Niles is much, much faster to that insight moment, that useful insight moment, in 5 minutes. But we're going to have some data on that soon. So Niles is working now. But what we're
123
00:31:40.910 --> 00:31:48.940
David Rock: imagining now, and what we're working on next, is Niles actually giving you real-time nudges about how to be smarter,
124
00:31:49.060 --> 00:32:15.379
David Rock: like, Hey, David, I've been watching your emails this week, and I think you could probably be more effective if you said everything in 20% less words, right? And maybe use the word fewer rather than less, it's more accurate, right? And would you like to work on that? And it's got to give you the autonomy. Oh, yes, I would like to work on that. Great. Would you like to practice this way or this way? And it'll put you through some practice, right?
125
00:32:15.430 --> 00:32:31.380
David Rock: Hey, David, I've been listening to your conversations this week. You gave me permission. I've been noticing you're a little more stressed than normal this week. What stress number do you think you are? Would you like to work on that a little bit? Do you want me to make some suggestions? Niles can literally listen to you
126
00:32:31.410 --> 00:32:51.990
David Rock: and look at your writing, look at your communications, all of this, and actually notice patterns. It can notice patterns in the quality of your thinking, in the emotions you're experiencing, in the threats you're experiencing, the threats other people might be experiencing as a result, in the biases you might have, whether you're thinking at the right level, whether you're thinking creatively enough and opening up.
127
00:32:51.990 --> 00:33:06.089
David Rock: People will need a lot of autonomy, to feel like they're in the driver's seat, to interact with it this way. But it's coming. So we're working on that. In fact, it's starting to already do it at the end of coaching conversations, saying, Hey, by the way, in case you're interested,
128
00:33:06.170 --> 00:33:21.460
David Rock: I've noticed this little thing about how you process. Would you be interested? So that's really interesting, and I think that's a way to use AI really intelligently, to make you smarter. And then the second side of this that we're also working on
129
00:33:21.700 --> 00:33:38.150
David Rock: is Niles working ahead of time. So those are the real-time nudges, we call them neurointelligent nudges. The ahead-of-time nudges will be Niles looking at your calendar, noticing what meetings you have coming up, calls or meetings, whatever, and giving you recommendations based on everything it's seen you do before
130
00:33:38.170 --> 00:34:01.599
David Rock: and based on your SCARF profile. So are you focused more on autonomy or status or certainty, and what's the SCARF profile of someone else? We're working on that now. We're not sure exactly when that'll be ready, somewhere between 6 and 12 months, maybe. It's hard to say. But we see a world where Niles will actually give you real-time feedback or ahead-of-time nudges to actually make you smarter.
131
00:34:01.950 --> 00:34:04.319
David Rock: And I think that's a really good use.
132
00:34:04.580 --> 00:34:12.729
David Rock: Remember, humans are fallible. They don't know what they don't know. Humans don't know that they're not thinking at the level they should for their job. Right?
133
00:34:12.940 --> 00:34:36.799
David Rock: Lots and lots of leaders just fail to think far enough ahead. They don't allocate enough time to think the time ahead they should be thinking. If you're a CEO, it should be like 10 years, maybe 20. If you're a middle manager, it should be like 2 to 5 years. You should be thinking ahead and building structures and plans for that timeline. So as an example, Niles will notice, hey, your locus of focus is this week and this month, but you're in this kind of job.
134
00:34:36.800 --> 00:34:59.810
David Rock: You should be thinking at that level. So I think we can use AI to create better managers and leaders, as well as to reduce conflict and improve those interactions. But it's going to take a really interesting, trusting relationship with Niles. So that's something that we're thinking about, and we've had some breakthroughs with Niles, so we're starting to accelerate getting this out. And if folks are interested in learning more,
135
00:34:59.810 --> 00:35:15.179
David Rock: just put the word Niles in the chat with your company name, and someone will reach out to you to set up a demo, to have a real-time voice conversation with Niles and see what it's doing. It's really doing some crazy things. So I think the purpose,
136
00:35:15.999 --> 00:35:25.649
David Rock: one purpose of AI, is literally not to offload your thinking to it, but to actually help you get smarter,
137
00:35:25.890 --> 00:35:45.040
David Rock: and that's going to take some fancy footwork in terms of getting people's buy-in and getting people willing to play with it. But we think it's there. Someone said to me this week, Oh, my boss spends $3,000 a month to give me a less effective coach that I can only talk to for an hour a week. This is crazy.
138
00:35:45.110 --> 00:35:55.959
David Rock: And so that's where we're going with Niles. So yeah, Niles and your company name, and someone will reach out pretty quickly and get you a demo to play with it. So that's something we're excited about. And I think
139
00:35:56.100 --> 00:36:02.120
David Rock: one big thing, particular to our coaching approach in Niles, is that you have the insights, not Niles solving it.
140
00:36:02.140 --> 00:36:27.130
David Rock: Super, super important, because if AI is solving the problems for you, you have no motivation, you have no energy and commitment. You want to be having the insights, right? But there's an interesting dance. Niles has to ask you if it's okay to ask you questions, and tell you why it's asking you questions, so it doesn't just feel like it's throwing the problem back to you. If you just say, Hey, what do you think,
141
00:36:27.130 --> 00:36:49.649
David Rock: people will be like, Well, I asked you for help. So it's a whole interesting dance. I wrote a whole book on that, called Quiet Leadership, about this interesting dance of how you bring folks to insights, and we've essentially trained Niles on it. Anyway, enough about Niles. But why don't we get the poll up before we go further, just to get that out of the way? We keep forgetting that. Thanks, Laurie. Let's get the poll up, and then we can dig in further to the science.
142
00:36:49.840 --> 00:37:00.380
Emma Sarro: Yeah. What's coming up for me, I mean, we just had a demo just yesterday on the current state of Niles. It's super powerful. And what's coming up as you're describing it is how,
143
00:37:00.380 --> 00:37:25.340
Emma Sarro: you know, using Niles as a collaborator, or as a collaborator coach, right? You're in the driver's seat. You're asking what you want. Niles is providing you with some ideas to follow, to kind of follow that energy. It also can give you an idea of, you know, I can sense that you're in this mindset right now, and this is the kind of thinking I'll help you get to. But it's not giving you the answer. You're having the
144
00:37:25.340 --> 00:37:49.650
Emma Sarro: insight. And as we always talk about, insights are the most powerful thing we have, and so we don't want to give insights up to something else. So we'll become smarter, we'll be driven. And having Niles just always there, in the flow of work, which is also something that we talk about all the time. You're not going to use it unless it's easy to use, just having it open all the time, especially for anyone who's working remote, just having that
145
00:37:49.760 --> 00:37:56.010
Emma Sarro: collaborator there, just to get some thinking ideas, not to tell you what to do, but to give you some ideas.
146
00:37:56.010 --> 00:38:01.859
David Rock: It's interesting that the most common usage for AI at the moment is people using it as a personal helper in some way.
147
00:38:02.410 --> 00:38:08.030
David Rock: which is really, really interesting. Yeah, a couple of comments coming up, some fun comments there. I think
148
00:38:08.180 --> 00:38:25.229
David Rock: a couple of people said this about walking. So walking and thinking go very much together. There is good research that you actually think better while you're walking. And we've been saying for a long time, do walking meetings. Like once a day, do a 25- or 50-minute walking meeting
149
00:38:25.230 --> 00:38:39.219
David Rock: where you're actually walking while you're having the meeting. You'll think better, you're getting the cardio benefits, you reduce the cortisol, increase the positive chemistry. So a walking meeting is a wonderful thing, and there is good research showing that you actually think better on that walk.
150
00:38:39.230 --> 00:39:01.579
David Rock: Also, any kind of distractor task that just takes your attention a little bit will quiet your brain, and the quieter brain actually has more insights and can focus more, and all of that. That's why fidget spinners are a multi-million, maybe billion-dollar business, of just things that take your focus there. So yeah, a couple of interesting comments coming up there.
151
00:39:01.890 --> 00:39:29.899
David Rock: Yeah, Niles is a neurointelligent leadership-enhancing system. And we're imagining a universe of, you know, a Niles for law firms and a Niles for utilities and a Niles for financial services firms, a Niles for different industries, and all this, as we're starting to move really, really fast into this. All right, let's dig in. Oh, one quick thing about the poll: executive briefings. We're actually just this week ready to do executive briefings on this topic we're talking about today, but
152
00:39:29.900 --> 00:39:43.880
David Rock: a little more targeted, on basically how you use AI today to amplify thinking. So one executive briefing that we have for the C-suite is amplifying your actual thinking. But we're also doing executive briefings on
153
00:39:44.130 --> 00:39:44.620
Emma Sarro: Everything.
154
00:39:44.620 --> 00:39:49.009
David Rock: leadership principles and all sorts of stuff. So all right, Emma, let's keep going. Let's dig into the science more.
155
00:39:49.010 --> 00:40:09.650
Emma Sarro: Yeah, well, so, as we kind of talked about, what are the things we can't let go? What is the thinking quality? What kinds of things go into better thinking? You know, insight activates action. What other things do we not want to let go? What does it mean in practice?
156
00:40:09.930 --> 00:40:12.529
David Rock: Yeah, that's a good question. So literally,
157
00:40:12.670 --> 00:40:31.659
David Rock: Niles, or any AI, has to actually know what great thinking feels like. Right? Feels is a funny word. You actually have to train AI to say, look, this is great thinking, this is not great thinking, and train it on a huge amount of data. So it begs this really interesting question: what do we mean
158
00:40:31.660 --> 00:40:45.800
David Rock: by great thinking itself? And what are the qualities of that? And how do you objectively measure it? Such an interesting question, isn't it? So it's a bit like, you know, that other thing: I don't know what it is, but I know it when I see it, right?
159
00:40:45.810 --> 00:40:50.989
David Rock: You know it when you see it, but what really is it? So, firstly,
160
00:40:51.140 --> 00:41:20.060
David Rock: if you're in the writing world, lots of folks have talked about this. Great thinking is saying something in as few words as possible, with no unnecessary additional words, like you often get in academia. It's straight to the point. And in writing, if you read some of the great thinkers about writing itself, they'll talk about how you say something so it's picturing, you know, you're creating a picture in the mind's eye.
161
00:41:20.060 --> 00:41:27.360
David Rock: People can see what you're saying in as few words as possible. So you're minimizing cognitive load
162
00:41:27.620 --> 00:41:28.929
David Rock: for the idea.
163
00:41:29.030 --> 00:41:34.540
David Rock: right? So there's a central idea you're trying to get across, and you're getting it across as efficiently as you can,
164
00:41:34.600 --> 00:41:58.649
David Rock: right? And so you've got to teach AI to think about central ideas, and valuable central ideas versus less valuable central ideas, all sorts of interesting things. There are some really interesting books. Steven Pinker wrote a really good book on writing and thinking itself. I think it might be called The Stuff of Thought, but it's a really interesting way to look at it.
165
00:41:58.650 --> 00:42:14.670
David Rock: So an efficiency of communication is a piece of it, and one outcome of that is, literally, one person talking to another, this person communicates in such a way that this person can see what they're saying accurately,
166
00:42:14.670 --> 00:42:39.259
David Rock: right? So they actually picture something in the visual cortex accurately, compared to what this person is trying to convey. So there's a success in that. So there are elements of visual. There are definitely elements of insight, like something clicking, something making sense. There are elements of being at the right level of construal. Construal is level of abstraction. Right? So if you're talking about goals a year from now,
167
00:42:39.690 --> 00:43:02.850
David Rock: the context of that should be pretty abstract goals. You shouldn't be talking about lots and lots of details. A lot of that's innately human, but we need to teach AI to recognize when you're being too big-picture or too detailed. So there's a whole issue of level of thinking. So there's clarity of thinking, there's level of thinking,
168
00:43:02.850 --> 00:43:17.850
David Rock: and there's impact of thinking as well. Right? You've got clarity, level, impact. So the impact of thinking: are you having the emotional response you want? Are you having an accidental emotional response, like you're accidentally creating a fairness reaction, right,
169
00:43:17.850 --> 00:43:31.219
David Rock: Trained in scarf. So what's the level of thinking that really? So what's the impact of the thinking. So those are sort of 3 examples. And then, of course, there's like, does the thinking have some bias unintentionally in it?
170
00:43:31.740 --> 00:43:56.719
David Rock: And does it have like, you know, is it like, has it explored all perspectives? It's just one perspective. There's a lot of different things to kind of bring in to really make an AI intelligent. And I think, even with incredible training, incredible training. You're still going to want, you know, really intelligent humans looking at this going, hey? That's not quite right, because you've also got this fact that if everyone's using AI, you've got a race to the middle.
171
00:43:56.780 --> 00:44:19.929
David Rock: if everyone's using the same AI to build different communication tools, different products without the human ingenuity, the human creativity. You're going to see a lot of similar ideas, a lot of similar concepts. So ironically, the more we have these power tools like AI, the more kind of original thought is important, the more that the concept you're trying to get across has to be unique.
172
00:44:20.050 --> 00:44:33.419
David Rock: Right? The idea you're getting across has to be good. How clearly you express it, right? What impact it has, what bias it has what level you think. Those things are somewhat secondary. It actually is becoming more important that the idea that you have is a really really good idea.
173
00:44:34.090 --> 00:44:59.010
Emma Sarro: And you know what's interesting, there's research, we wrote about this a little bit, showing that the creative outputs that stem from AI are going to be more similar to each other. And so the innovation is still going to have to come from humans. I mean, the AI is still spitting out information that's communicating with the human brain. So what it does to the human brain, just like how we communicate, has to be at the same level of construal,
174
00:44:59.010 --> 00:45:16.879
Emma Sarro: to be kind of within our capacity limits. It has to drive action, it has to motivate us. It has to facilitate insight and and be accountable as well. So just like the the best AI will be working within those limits as well to drive what it wants to out of us.
175
00:45:17.010 --> 00:45:27.100
David Rock: Yeah, I want to make a comment related to Tommy's point here, that there will, unfortunately, and I'm a little disappointed in this, be sort of
176
00:45:27.160 --> 00:45:48.919
David Rock: big big business opportunities for certain of the AI companies to provide very premium services. Right? I think about how successful Bloomberg is right and no offense to Bloomberg. Great Company. I hear he's a good person, but like basically, it's a huge amount of money per month per terminal to have their terminal like in your office, incredible business model. But they basically have the data.
177
00:45:49.210 --> 00:45:55.960
David Rock: right? So and I'm hearing noises like the big AI companies are basically going to charge, you know, huge amount per person per month
178
00:45:56.450 --> 00:46:22.389
David Rock: to have access to incredibly powerful services that average people won't be able to pay for right. And you sort of think, Oh, yeah, of course, that's what's going to happen. That's where it'll go, so there'll be sort of a base level of AI. Everyone will be able to access, and then it will cost more money. It will cost more money for sure, and maybe we'll get around that. Maybe the Chinese will keep innovating and putting stuff out free, and we'll all be able to have incredible AI. But I don't know we'll see. But it can.
179
00:46:22.390 --> 00:46:39.779
David Rock: You know, if there's a big financial difference in the quality of AI you can get, there's going to be a bigger gap in the haves and have nots. What I'll say right now is being really good at using. AI makes you incredibly more effective than someone not using it at all like there's a big performance gap now
180
00:46:39.810 --> 00:46:53.170
David Rock: depends on the role, right? But certainly, like as a coder. If you're a coder, an engineer, huge performance gap and being really really good at using it, you still actually need to review things and manage things and drive it. But you're going way faster
181
00:46:53.200 --> 00:47:13.390
David Rock: than without as a researcher as well. You're going way faster as a scientist right way, faster. So a lot of fields where you are already going way way faster because of these tools. And so I think we're starting to see this stuff is going to get taught in schools. I think it's really important that it gets taught in schools, because essentially this is an adjunct to our prefrontal.
182
00:47:13.480 --> 00:47:16.119
David Rock: This is an adjunct to our limited prefrontal.
183
00:47:16.420 --> 00:47:30.029
David Rock: and provided we remember that the adjunct is, is as fallible as the entire human race put together, and that we are deeply fallible, and that we need, you know, some improving, provided we remember that it can be a very, very good tool in time.
184
00:47:30.660 --> 00:47:54.010
Emma Sarro: Yeah, yeah. And some interesting questions. I know we're almost done, but there are interesting questions around how AI coaches can coach soft skills like empathy. And I think Niles has been built to be able to do that, right? Like providing those basic habits and skills that we know are how you create empathy and show empathy and display empathy to others.
185
00:47:54.340 --> 00:48:18.989
David Rock: Yeah, it's going to be woven in. There's a fun comment in the chat from someone saying, you know, ChatGPT just keeps giving compliments, even to average ideas, and that's problematic. This is what I mean about the AI actually needing to understand quality thinking. I think what we're going to see is just an enormous number of AIs and agents for particular purposes. I think that's where we're going to go. I think, you know, like,
186
00:48:19.329 --> 00:48:38.209
David Rock: you like software, ate your office like software like apps apps. Ate your office right? You don't have a wall clock. You don't have an alarm clock. You don't have a rolodex. You don't have a, you know, phone like there were all these things that you had. You don't even have a physical desk right? So literally apps. Ate your
187
00:48:38.230 --> 00:49:07.260
David Rock: office, and I think AI agents in particular are going to eat your apps. I think you're going to have just all these AI agents that do the things that your apps currently do. And I think it's going to be very, very disruptive in some ways, and these will think for you. But you're going to want, you know, if you're a writer, you're going to want an AI that literally makes your writing better as fast as you can, and there's going to be an arms race to build that. Now, if you're a science writer, you're going to want that kind of AI writer. Right? If you're now, if you're a technical science writer.
188
00:49:07.260 --> 00:49:31.069
David Rock: right in physics, you're going to want that particular like agent right? As opposed to a technical science writer in cosmology right? Similar. But maybe you know, slightly different. Right now, maybe you're a mainstream science writer in biology, right in relation to genes. You're going to want exactly the AI. So there'll be some general ones. But I think we're going to see a proliferation of individual kind of agent tools
189
00:49:31.080 --> 00:49:40.889
David Rock: to help you get smarter and think better. I think that's that's sort of somewhere that we're going to get to, and that's including with kind of coaches like, Hey, I want to coach
190
00:49:41.040 --> 00:50:08.829
David Rock: to help me with my family, or I happen to be Christian. I happen to live in the Midwest. I happen to have these kinds of values? I'm interested in this. Can you give me a you know what's the best AI coach for me in this domain as opposed to you know I'm an underground raver in Berlin. I want an AI coach to help me, you know. Maximize my longevity right? So it's going to be interesting to see how that all unfolds. But ultimately all of it will be about
191
00:50:08.970 --> 00:50:20.360
David Rock: limited, prefrontal, and kind of offloading unnecessary things, so that we can get to the more useful stuff ourselves and manage and monitor these things. So I think I'm starting to see this with big companies.
192
00:50:20.360 --> 00:50:44.200
David Rock: this like, how do you train and manage agents is a coming big domain in organizational learning. How do you train and manage agents? And that's something that we're looking at. What are the skills for training and managing AI agents? Particularly because you're going to have fewer people. But those people need to be really good at training and managing agents themselves. So I think that's another prediction of what's coming.
193
00:50:44.510 --> 00:51:06.300
Emma Sarro: Yeah. And if we're using the analogy that they're just as fallible as humans, then maybe work with those agents just like you would with a teammate or a direct report. I mean, how would you train them to do the things you need them to do? How would you challenge what they're putting out in their work? How would you improve their work? So maybe, you know, go along those same lines.
194
00:51:06.740 --> 00:51:11.789
David Rock: Yeah, yeah, really important. And how do you? How do you really?
195
00:51:12.490 --> 00:51:15.029
David Rock: you know, make your organization smarter
196
00:51:15.230 --> 00:51:32.800
David Rock: by interfacing people with AI, and I think recognizing the fallibility of both humans and AI, and kind of having this dynamic where they're improving each other. I think that's really important, and taking into account context. And I think, you know, companies are going to be pretty quickly
197
00:51:32.940 --> 00:51:40.529
David Rock: seeing some of the challenges of these technologies, and then the real thing is, how do we amplify our people?
198
00:51:40.660 --> 00:52:09.960
David Rock: How do we amplify our people? But also, how do we build new, you know, fascinating, amazing, useful digital tools that deliver the services and products we have? And I think, yeah, understanding AI as an interface breakthrough, an interface breakthrough for thinking, using it the right way and improving your thinking, I think that's the key to this. What a fun conversation! We probably should wrap up. I think you heard some announcements, Emma. We should talk Summit. It's coming up.
199
00:52:09.960 --> 00:52:16.603
Emma Sarro: I know, and this is gonna be huge, like front and center. I'm hoping that Niles will join us at Summit. I know we have a
200
00:52:16.850 --> 00:52:41.820
Emma Sarro: I might be a keynote speaker. I know we have a session one of these Fridays on June 6.th We'll be doing a Friday session. We'll bring Niles along, so you'll be able to see just where we are in our evolution of Niles and the conversations we can have there. It'll blow you away, so that'll be on June 6.th I know that summit is is Prep. Has just begun, and it's going to be November
201
00:52:41.820 --> 00:53:05.349
Emma Sarro: 12th and 13.th Our theme is thrive through complexity, and it's going to be both virtual and global. So we'll have sessions in all regions. I know that you can. You can register now, if you want, and so that'll just continue to evolve and we'll have. We'll start announcing our speakers. And so that's that's the exciting news on the front of Summit and the Podcast is coming up shortly. But.
202
00:53:05.350 --> 00:53:34.460
David Rock: Try and hold the date for Summit, November 12th and 13th, even if you can't imagine that it's going to happen. It's virtual, but it's like all day. In fact, it's going to go around the clock, because we're doing APAC and EMEA versions as well. So try to hold that time. Imagine you're going somewhere. Maybe you can get together with some colleagues, so you can get that social pressure of watching together. Do a watch party. And if you're from an organization, you can get organization tickets and pricing and kind of get a group of folks together as well. And a big hack is, get your company to buy membership,
203
00:53:34.460 --> 00:53:41.900
David Rock: get your company to buy corporate membership. So you and a bunch of colleagues can get passes to the summit and access to everything as well.
204
00:53:42.230 --> 00:54:02.930
Emma Sarro: Yeah, awesome. An AI rave, maybe. David Weber's comment, yeah. So that's about all we have. I know coming up this week we'll be diving into some special sessions on GPA, which is our 3 pillars of leadership. I know I'll be having the first session this week on growth mindset, so we'll be talking that through. And the 3.
205
00:54:02.930 --> 00:54:13.170
David Rock: Put the link in. Maybe my team could put the link in. So growth mindset, psych safety, accountability: the 3 pillars of leadership. That's coming up. We're off next Friday, right, for the long weekend?
206
00:54:13.440 --> 00:54:16.539
Emma Sarro: Off next Friday. Take the long weekend. Yep.
207
00:54:16.610 --> 00:54:46.560
David Rock: And then the weekend after, I think we're back with leadership, like really looking at leadership, the following Friday. And then Niles after that. So we've got some fun sessions coming up. But great, great session! Thanks, Emma, thanks everyone behind the scenes pulling this together. I hope you've enjoyed thinking more about the stuff of thought and considering the right way to use AI. It's an incredibly powerful tool, but we ironically need to understand the human brain to really maximize it. So thanks very much, everyone. Take care of yourselves. Have a great holiday in the States as well. Bye-bye.
208
00:54:47.420 --> 00:55:12.310
Emma Sarro: Yeah. And so I know that we'll drop those links in for the GPA session coming up, and for Summit, and you can find all that on our website as well. So if you enjoyed today, and I hope you did, you can find all of our other podcasts there on demand. You can look for Your Brain at Work wherever you enjoy listening to podcasts. And I hope that you all have a wonderful weekend, and I hope I see you here in 2 weeks.
209
00:55:12.900 --> 00:55:13.870
Emma Sarro: Take care!