Your Brain at Work

Minisode: DE(A)I Part I: Mitigating Bias in Technology Adoption

Episode Notes

Seemingly overnight, Artificial Intelligence has taken workplaces around the world by storm. But while there's tremendous excitement about the potential this rapidly evolving tool might unlock, there's also trepidation — and perhaps there should be. Even as we begin to integrate AI into our everyday routines, questions remain about the underlying biases that may influence technology and its adoption.

In this special episode of Your Brain at Work, published to coincide with a presentation — delivered by Janet M. Stovall, our Global Head of DEI, and Matt Summers, our Global Head of Culture and Leadership — at the Society for Human Resource Management's Talent Conference and Expo... they examine the emergence of AI through the lens of Diversity, Equity and Inclusion — this time focusing on breaking bias.

***

Learn more about our groundbreaking new leadership development program — LEAD: The Neuroscience of Effective Management — here: https://neuroleadership.com/LEAD

Keep up-to-date on the latest NLI content at our blog: https://neuroleadership.com/your-brain-at-work/

Join us for upcoming events: https://neuroleadership.com/our-events/

Host a NeuroLeadership Event in your city: https://hub.neuroleadership.com/events/na-host-a-nli-event-2024

Explore our solutions: https://neuroleadership.com/solutions-for-organizations/

Episode Transcription

Janet (00:02.242)

Greetings all. I'm Janet Stovall, Global Head of DEI at the NeuroLeadership Institute.

 

Matt Summers (00:02.704)

Thank you.

 

Matt Summers (00:09.303)

And hi there, I'm Matt Summers, the Global Head of the Culture and Leadership practice here at NLI.

 

Janet (00:14.722)

So Matt and I are gonna have a conversation today, and we're gonna talk about AI, artificial intelligence. Now, it's supposed to be making everything better, right? And true, it can do some amazing things, but it can also do some not-so-amazing things when bias creeps in. And so that's what we're gonna talk about today: bias in AI. Now, Matt, to be honest, when we first started talking about this at NLI,

 

Matt Summers (00:15.494)

Wait, wait!

 

Janet (00:41.378)

I did not know the difference between generative AI and predictive AI. I didn't know. And so you're the person I asked to define this. Why don't you do that again? Why don't you define this for all of us so we can get started?

 

Matt Summers (00:53.147)

Yeah, it's a great place to start. Thanks, Janet. So let's start with the first term and definition here: generative AI. What's most common, what you'll see in the media and what we're hearing in a lot of publications now, is generative AI referring to artificial intelligence systems that can create new and original content. That can be things like, and Janet, we've seen this, right? In text,

 

creating images, even now music and even code. You don't even have to learn how to code anymore, because this AI technology can do that for you based on the data that it's being trained on. And this is in real stark contrast to the other definition you mentioned earlier, Janet, and that's predictive AI. And that's really all about forecasting future outcomes based on, I would say, historical data in the system. So a good way to think about this is:

 

Generative AI is like an artist inspired by their experiences, while predictive AI is like an analyst looking for patterns in past events. Right? And so what that does is it gives us a real comparison of how these different types of technology interact with humans. And as you said, there are both upsides and downsides. Back to you, Janet.

 

Janet (02:13.991)

Well, and what are some of those upsides? Let's talk about that for a minute. What are some of the good things about AI that you can think of?

 

Matt Summers (02:20.543)

Yeah, I mean, look, let's talk about generative models of artificial intelligence, like ChatGPT for text or DALL-E for images, right? What they're doing here, Janet, it's fascinating if you lift up the hood and look behind the scenes at what's going on. What they're doing is they're devouring vast amounts of data, right? Learning patterns, styles, structures of data. And what this does is it allows that AI to develop an understanding of the world around it.

 

And once it's trained, that model can actually generate new creations by starting with, you know, I kind of think of it as a seed, if you will, right? The human interaction upfront is that prompt: we give it a prompt to generate or create something, right? And so when this technology expands on that prompt, it's leaning on everything that it's learned to produce something completely new, right? It's kind of like, you know, Janet, there's this metaphor I was thinking about ahead of our conversation today.

 

It's a little bit like teaching a chef to cook by exposing them to thousands of recipes, right? But if you just give them a few ingredients, or prompts in this example, then what's that chef going to do? They're going to whip up a dish, or content from the GPT bot in this example, that's both familiar and potentially surprisingly novel, right?

 

Janet (03:26.163)

Mm-hmm.

 

Matt Summers (03:43.663)

And so it's not just about creating art or maybe writing stories. The upside here, to answer your question, Janet, is that it's about really leveraging human creativity in that human-machine partnership. I think it helps humans solve complex problems in unique ways. And I think it's about generating, you know, probably the word is innovative, ideas in the human brain that can lead to new inventions and solutions in that partnership, right?

 

Janet (04:13.815)

When we talk about it, though, and the way you're describing it here, we always think of AI as something that's kind of out there, something that maybe programmers are doing. But AI, in its broadest sense, is really something that we're all using. Am I correct?

 

Matt Summers (04:30.919)

Absolutely. I mean, look, it's something that we're using every single day. You and I talked about this just last week, Janet. We're using it, and we don't even realize that we're using AI, and we've been doing it for decades, right? I mean, what are some of the examples that we talked about, Janet? How are we seeing that show up in our day-to-day use, where we don't even think of it as AI?

 

Janet (04:43.748)

Exactly.

 

Janet (04:49.678)

Well, I mean, like, I have a Google Home and I'm always yelling at it. And my daughter keeps telling me I can't be mean to the Google AI, because one day it's gonna be my overlord. But I do use it every day and I talk to it every day. And the Alexa, the Dot, whatever that stuff is, that's AI, right? Yeah, okay, so it is.

 

Matt Summers (05:08.607)

Yeah, yeah, yeah. And it is, and you're right, Janet, it's everywhere. And I think what we're seeing now is this evolution of that AI that we've been using for more than a decade or two. The way that we're looking at AI, and the way that it's been positioned in the market and in the partnership with humans, you and I kind of discussed this at length, is actually different now, right? I think it's opening the door to a whole new world where AI is not just a tool anymore, but it's more of a collaborator.

 

It's helping us enhance some of our capabilities and kind of pushing some of those boundaries in ways that we haven't really seen before in our interaction with technology.

 

Janet (05:48.354)

And the interesting thing is, yes, that's the upside; the flip side is, if AI is something that we humans create and use and contribute to, then we're gonna contribute the things about us that are human, good and bad. And so that's what the issue is about: bias in AI. And, you know, I did a little experiment one day where I took ChatGPT and said, write me a post

 

as chief diversity officer about the backlash against DEI right now. And it gave me a little post. It was a good post. It was nice. It was above board. It was good. Then I said, OK, now write it as if I were an African-American woman CDO. It came back and, oh, goodness, we had aunties. We had y'alls. We had to call on the ancestors.

 

And I wanted to say immediately that it was racist, but I couldn't, because what I know about AI is that it itself is not inherently anything. It is about what goes into it. It's biased because we are biased. It's biased because our world is biased. And at NLI, we talk about bias all the time. We talk very much about cognitive bias, those evolutionary shortcuts that the brain takes to

 

save energy, make fast decisions, and simplify a complex world. But what you find a lot in AI is implicit bias. It's what happens when you take that cognitive bias and you introduce it to experiences and cultural influences, and you expose it to societal messages. Well, the AI is being trained on that data, on those same biases. You know, when we were teaching this at NLI, I used the term BIBO:

 

Bias in, bias out. And for me, that helps explain that AI learns from the world around us, and so it's inevitably going to inherit the biases that exist in our society and in our historical data. And so, Matt, let me go back to what you were talking about a minute ago, about predictive and generative. Like you said, it's been around for a while, and I'm gonna throw a little history in here. Back in the day, like the 50s and 60s, that's when AI really kind of started happening.

 

Janet (08:14.842)

Those were the baby steps. Scientists were trying to figure out the basics, but bias wasn't a big issue yet, because we didn't have computers at the level that we do now. The 70s to the 2000s, that's when predictive AI started acting up, because it started making decisions like who gets a loan, what a criminal looks like, and boom, there's the bias. So, for example, in predictive AI, if an area was heavily policed, data from that time

 

would train the AI to think the area might be a high-crime zone, even if it was unfairly policed and even if that's changed. So it started making decisions using what it learned from the biased data. Here we are now: generative AI. So now AI, like you said, is making art, it's writing stories, it's helping companies decide who to interview, promote, pay, more or less. All of that is with the data it learned from those old biased models built using data from predictive systems.

 

So you have this chain reaction: bias in the world, bias in predictive AI, and now it's even worse in generative AI. So what does that mean? What does it mean at this point? And since we're at the beginning of this, right where we are now, what is it that we need to do to make sure this doesn't continue?
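
To make the "bias in, bias out" chain Janet describes concrete, here is a minimal Python sketch. The data, numbers, and variable names are entirely synthetic and hypothetical, invented only to illustrate the mechanism; this is not NLI tooling or any real dataset.

```python
# Minimal "bias in, bias out" sketch: a predictive model trained on
# biased historical decisions reproduces that bias. All data synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)      # hypothetical protected attribute (0 or 1)
skill = rng.normal(0.0, 1.0, n)    # true qualification, identical across groups

# Historical approvals were biased: group 1 faced a higher bar.
approved = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train a "predictive AI" on those historical outcomes.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, approved)

# Two equally skilled applicants (skill = 0), one from each group:
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
# Group 1 gets a markedly lower approval probability: the model learned
# the historical bias, not qualification. Bias in, bias out.
```

Feed those predictions back in as training data for the next system, and the distortion compounds, which is the chain reaction into generative models Janet describes.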

 

Matt Summers (09:33.595)

Yeah, Janet, I think this is a question that every organization, and probably most users of this technology, are wrestling with right now in real time. Realizing that the bias doesn't just come from the human, as you mentioned; it's from the data set, and the technology, and the humans that created the algorithms, right? They built in that unintended, unconscious bias, but also the historical bias, right?

 

We talk about that historical bias in human decision making here at NLI. And that's no different to how I think this technology is providing responses to humans who put in prompts. And so when I think about how we're using this technology in multitudes of ways, like if you go back, you know, just even in the last five to 10 years, what we know is that this technology across different industries has been...

 

really helpful. An interesting McKinsey report recently said that companies employing AI technologies, using generative models like the ones we're talking about here, have actually reported up to 20% improvement in business outcomes compared to companies that are not embracing it, right? However, what they're also reporting is that the accuracy of the information being provided needs to be vetted by the human when processing and taking in

 

the information or the data that the AI technology is spewing out to the human, or the user, right? And so, Janet, I'm smiling because it's a conundrum, right? I mean, there are two sides to this coin. Last year, you know, we argued with our clients that this technology was very dangerous, because the pattern, the habit of humans was: wow, this exciting new technology, like ChatGPT or Gemini, or Bard back in the day, was a copy-and-paste machine.

 

Janet (11:10.148)

Right.

 

Matt Summers (11:27.283)

I would ask it a question, try and solve for something, and what would happen is that actually my creativity would go down, because all I'm asking for is a response. I would copy and paste that into the work, and I would look and feel smart. The problem with that is that it's mired in bias. We haven't even had a mitigation filter for that data we've now copied and pasted. So this is really interesting from a brain perspective too, Janet, in that we are depending now on

 

Janet (11:33.92)

Mm.

 

Matt Summers (11:54.947)

our automatic processes, right, which is where our bias sits, right, our automation of thinking and decision making, because we live in such a fast-paced world. And ChatGPT is accelerating that even more. Would you say that's fair to say?

 

Janet (12:09.122)

Oh, absolutely. I mean, like I said, the example I used, where I talked about how that happened for me. And, for those of you who can see me, I have a cat in my background. Yes, he's very welcome, but he needs to go away. But anyway, absolutely, we use it every day. I think where it frightens me a little bit is, if we assume that the output we get is correct because we haven't mitigated bias

 

Matt Summers (12:18.906)

And the cat's very welcome. We're happy to have the cat.

 

Anyway.

 

Janet (12:38.134)

the way you just mentioned, we go forward and we just exacerbate the situation. You know, this plays out in lives in ways you don't think about. It can play out in how medical decisions are made. It can play out, certainly, as I mentioned earlier, in the HR space. Here's an example, and it goes back to what we were saying about predictive to generative. Think about performance reviews. Let's say,

 

before we started doing DEI in the workplace, when things were not as diverse, past performance reviews might contain some unfair assessments of women and people of color. That's just the way it was. Predictive AI would then take that data to identify high-potential employees, and it learned that bias. And then that bias got carried into generative AI. So now what happens, from a visual standpoint, is

 

the headshots and the images you might see on the website might be biased, because the generative AI pulled from the predictive AI. You might have job descriptions that use language that tells other people, we don't really want you here. And the result is you end up with qualified people that you never even get to because of bias, not because of anything you as an individual or an individual hiring manager might wanna do. It's the bias in the system. Those people never even get in there. And

 

depending on what it generates for your organization, it shapes how the world views the organization itself. That's what happens if we don't mitigate that bias upfront. And when we say that, though, a lot of people look at me and say: mitigate the bias? I'm not writing code, I'm not choosing systems, I don't really have a role in this. And I would argue that you do. I mean, we see it happening within NLI, in what we as individuals do. I'm not writing programs either, but there are things

 

that I do to help mitigate the bias. And there are things that you definitely do. So maybe we can talk a little bit and give people an idea about what it is that you as an individual can do to mitigate bias. And I'll start, because I can say what I did. We were building our tools at NLI, and I got put on that team. Normally you wouldn't have the diversity person, the CDO, as part of a team like that, but it made sense, because then, when we were

 

Janet (15:03.118)

taking prompts and trying things, I was actually looking at it and saying, well, that's not quite right, let's change this nuance. And that was for the tool we were building. But even if you're not building the tools, I think everybody has a role in this. And I think it's what you were talking about before, putting the human back into it. What do you think about that?

 

Matt Summers (15:20.979)

Yeah, Janet, I think that's absolutely right. And certainly our practice now, it's a new practice that we're experimenting with and have implemented here at NLI in our own organization, right? I think if we take that science-based lens, where we place the human at the center and we wrap the AI around the human, rather than the other way around, that's how we solve this conundrum, right? And so, you know, if I can go to the neurobiology for a moment, if you don't mind.

 

Janet (15:48.622)

Please do, please do.

 

Matt Summers (15:50.363)

Janet, just as a visual for our listeners: there are two parts of the human brain doing the processing, depending on how we interact with the AI. There's a very divergent process that we should always start with when we interact with the AI, which can help us mitigate the bias upfront. That divergent process is getting skilled at writing quality prompts

 

that create an inclusive response, that create an equitable response from the technology. Our own language in the prompting actually frames the response from the technology. And so what I'll often do is I'll say: act as an expert in X, right? And I want you to generate a thousand-word, three-paragraph response with four options, as I try to solve for something.

 

And I want you to create an equitable response, right? Considering all parties involved. When we use that language, it really challenges the technology to think outside of just its standard responses and that bias. So we can start there. And when we ask it for multiple options, that's a real expansive thinking partnership with the technology, right? We can look at different alternatives, so we're using a different part of the brain when we're doing that. That inspires creativity; it,

 

look, allows our brain to think about alternative options rather than the one we may be biased toward ourselves, right? Then what we do is we step away from the technology, and we look at it just as humans, and we can interact with others and bring in groups, right, to help mitigate and look at the options that the technology has provided. Janet, I've done this on probably three or four different projects just in the last four weeks where, you know, maybe I've been stuck in my creativity or my problem solving.
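
As a rough illustration of the prompt pattern Matt describes, here is a short Python sketch using the openai client library's chat completions interface. The model name, expert role, and prompt wording are hypothetical assumptions chosen for illustration, not NLI's actual prompts or tooling.

```python
# Sketch of the prompt pattern above: name an expert role, constrain the
# format, ask for several options, and explicitly request equity.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Act as an expert in talent development. "              # expert framing
    "In about 1,000 words and three paragraphs, give me "   # format constraint
    "four options for rewording our job descriptions. "     # multiple options
    "Create an equitable response that considers all "      # equity request
    "parties involved, and flag any assumptions or "
    "potential biases in each option."
)

response = client.chat.completions.create(
    model="gpt-4o",  # hypothetical model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

Asking for several options and then reviewing them with others, as Matt describes next, is the divergent-then-convergent pattern: the prompt widens the option space, and the human review step is where the bias mitigation actually happens.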

 

Janet (17:13.623)

Mm-hmm.

 

Janet (17:28.834)

Wow.

 

Matt Summers (17:35.979)

By having the technology provide five or six options, I can then take it back to my project team and say: here's where my thinking is at, and I've used ChatGPT, so I declare it openly, right? Have we considered these things, and are they the right things to consider? And so we have a real, I would say, objective conversation about it, and we also ask ourselves: what is the potential bias influencing our thinking here, our decision making?

 

Janet (18:01.122)

No, that's brilliant. That's brilliant, because as a communicator in this space, one of the things I always say is: when in doubt, ask somebody. Even when you're not in doubt, ask somebody. We always talk at NLI about the value of diverse teams, how they're smarter and they get more done. So the example you just gave, the way you're doing it, that sounds to me like the easiest thing anybody can do:

 

you know, take it to somebody else, because people are going to see things that you don't see. And I gotta tell you, your prompt game is on. I need to learn how to do prompts the way you do them, so you can give us all a lesson in that. Mine aren't quite that deep, but I'm going to learn. I'm going to get better at them now, thanks to you. So, I think people are interested in this. They want to know more. Where can people find out more? Where might they learn more about how to do this, including how to write really great

 

prompts?

 

Matt Summers (18:59.047)

This is a good question. There's a lot out there, Janet, you know, to be honest, and I think it's just that folks don't know where to look, right? Here at the NeuroLeadership Institute, I know we've been doing a lot of research, and a lot of translation of that research and data into best practices, best skills, maybe best habit adoption, you know, for listeners and for learners and clients. So at NLI we have, of course, our blog. We've written a number of recent blogs, just in the last two months,

 

in the space of NLI research as it pertains to science-based best practices when interacting with artificial intelligence. You know, we have guides too, right? Those that are corporate members can access some of our materials, one-page guides on how to really mitigate the bias as they interact with AI. And of course, you and I, I'm really excited, as I know we both are: next week we'll be in Vegas at

 

the SHRM Talent event, talking about DE(A)I and the neuroscience behind it, and some of the best practices there. And we'll be sharing some of that with our audience, right, Janet?

 

Janet (20:04.714)

Exactly. And, you know, like I said: when in doubt, ask somebody. Ask us. We're paying attention to this right now, and I'm happy to answer those questions. But as we wrap up now, first of all, thanks for this conversation. It's a conversation that we're going to continue, and you'll hear more from us talking about this together. Like Matt mentioned, we'll be at the Society for Human Resource Management, SHRM, and we're going to talk about it there. But we're going to do some more on this podcast too, and we'll be doing webinars. So just, you know,

 

talk to us, we're willing to talk about it. And the biggest thing I'd like to leave with is to say: stay engaged. We are still trying to figure this out. We're trying to figure out how to use it fairly. We're trying to figure out what the implications of AI are. And that means we have to be attentive, we have to pay attention. The good news is we're still at the beginning of this, so this is the time we are best able to figure out how to do it right, so that we don't let AI do

 

more damage, but instead build something that can do a whole lot of good. Looking forward to talking to everybody about this soon.