Host: Jill Finlayson
Guest: Bo Young Lee
Season 3, Episode 11 | April 2025

We continue our conversation about the state of DEI and the heavy influence that AI will have on this important directive. This month, we explore the impact of AI on DEI initiatives—how it can either amplify biases or serve as a tool for equity. We look at how DEI programs play a key role in developing AI that does not cause unintended and disproportionate harms. And we will also get a sneak peek at how AI might be changing our behaviors without us knowing.

So if you’re concerned about the intersection of technology and equity—and what it means for the workplaces of tomorrow—you’re in the right place.

To talk about this important topic, we’re delighted to welcome back Bo Young Lee.

Host


Jill Finlayson

Director of EDGE in Tech at the University of California

Guest


Bo Young Lee

Workplace and AI Ethics, DEI and ESG executive, public speaker and leadership coach

Bo serves as President of Research & Advisory for AnitaB.org, the leading mission-driven organization advancing women and non-binary technologists. Prior to her role at AnitaB.org, Bo served as Uber Technologies' first Chief Diversity, Equity and Inclusion Officer, where she was tasked with leading the total transformation of the company's culture, values and environment of equity. Bo has helped hundreds of companies worldwide during her 24 years of DEI work. Bo is currently pursuing a Master of Studies in AI Ethics and Society at the University of Cambridge/Leverhulme Centre for the Future of Intelligence (UK) and has an M.B.A. with Distinction from New York University's Stern School of Business and a B.B.A. magna cum laude from the University of Michigan's Ross School of Business.

Read the transcript from this interview

[MUSIC PLAYING]

Bo Young Lee: The sad fact is that once an AI has learned a certain bias, it can't unlearn it. Just like how hard it is to untrain a human who has a bias, it's the same thing for artificial intelligence. Now, the good thing is that you can shut down an artificial intelligence and not use it. But how many people are willing to spend millions, billions of dollars building an AI and then shut it down once it's shown itself to be really problematic? 

Jill Finlayson: Welcome to the Future of Work podcast with Berkeley Extension and EDGE in Tech at the University of California, focused on expanding diversity and gender equity in tech. EDGE in Tech is part of the Innovation Hub at CITRIS, the Center for IT Research in the Interest of Society and the Banatao Institute. UC Berkeley Extension is the continuing education arm of the University of California at Berkeley. 

Today, we're continuing our conversation about the state of DEI and the heavy influence that AI will have on this important directive. Last month, we talked about where we are, where we should be going with DEI programs and mindsets. This month, we're exploring the impact of AI on DEI initiatives, how it can either amplify biases or serve as a tool for equity. 

We will look at how DEI programs play a key role in developing AI that does not cause unintended and disproportionate harms. And we will also get a sneak peek at how AI might be changing our behaviors without us even knowing. So if you're concerned about the intersection of technology and equity and what it means for the workplace of tomorrow, you're in the right place. 

To learn more, we once again turn to Bo Young Lee. Bo is a globally recognized workplace and AI ethics, DEI, and ESG executive and a widely sought-after public speaker and leadership coach. Bo serves as the president of Research and Advisory for AnitaB.org, the leading mission-driven organization advancing women and nonbinary technologists. Welcome back. 

Bo Young Lee: Thank you so much for having me. I'm really glad to be here. 

Jill Finlayson: It occurs to me that when we last spoke, we did not discuss why you started to work in diversity and inclusion, how you chose this career path. Can you share a little bit of what led you to this career? 

Bo Young Lee: Yeah, absolutely. I didn't set out to have a huge multi-decade career in diversity, equity, and inclusion. It really started with quite a simple question that I was asking. I was getting my MBA, I was in my mid-20s, not really sure what I wanted to do. 

And as I was in the process of being educated, one thing that really struck me is that everything that I was being taught in my MBA program, which was ostensibly to train the next generation of business leaders, was not designed in any way, shape, or form to reflect somebody like myself. 

Business school, still to this day, uses a very case-study-based model of education. You study various different cases of dilemmas and opportunities that real-world companies have faced. And in almost every single case study that we were studying, the protagonist was a middle-aged, white, cis, hetero male. 

Very rarely did we see a woman being featured in a case study. And certainly, we did not see any women of color, anybody with a disability, anybody who was LGBTQIA being reflected in this. And so it was a very tacit way of communicating to someone like myself, a 26-year-old Korean-American woman, that you don't belong. There's no place for you in business. 

And so, me being the cheeky person that I am, and I think that's pretty clear for anybody who's heard me speak, I kept asking my professors: this is all fine and well, and these lessons are great in theory. But what does someone who looks like me do to be successful? And none of them could answer that question. 

And so it just became me asking that question over and over again. What does someone like myself have to do to be successful? And that's really what led me down this path of wanting to do research in the area. And that's why after graduating from business school, getting my MBA with $80,000 worth of student debt, I made the very odd but, I think, fundamentally right-for-me decision to join a nonprofit. 

And I joined a nonprofit called Catalyst. They were one of the earliest nonprofit organizations asking that question of how a woman becomes successful. And so I started from there. And this was way back in 2001, 2002. I never thought that simple question of how someone who looked like me succeeds in business would lead to a career, and then, over the ensuing two-plus decades, become the central question that every corporation has to answer, because the workforce has fundamentally changed and the consumer market has fundamentally changed as well. And so now, everybody's asking that question. 

Jill Finlayson: Yeah, I'm actually quite proud of Berkeley. The Haas Business School has a program called EGAL. And one of the things they did was put together a collection of diverse case studies because they recognized exactly the problem that you were talking about. But if we look at the faculty across business schools, it still has a long way to go to represent the community of people who are in business. Do you have any thoughts on how we move that dial for our educators as well as our businesses? 

Bo Young Lee: Well, I think one of the things to really think about is some of the guardrails that we put in place for educators and for academics. Generally speaking, someone has to have a PhD in order to teach at the university level, especially at that elite university level. 

And I'm actually a perfect example of this. When I entered my undergraduate, I always assumed that I was going to get my PhD in behavioral economics. That was always my goal from the age of 18. And then sometime in my senior year, I'd actually gotten into a PhD program. But as a first generation Korean-American immigrant, the thought of staying in school for six more years, the thought of going into even more debt for my education, that was unfathomable for me. 

So I basically, like, reached out to MIT, where I'd gotten into grad school and said, hey, I'm going to go and make some money. I'm not going to get my PhD. And so I went and did something else, right? I didn't get my PhD. 

And I think that the most basic requirement, a PhD in order to teach any subject, becomes a barrier to diversifying who teaches, because it takes a certain legacy of education within a family, a legacy of being able to take that risk. We know that academia is a very prestigious job, but it's not a very well-paying job. 

Usually, you have to come from a certain degree of socioeconomic privilege to even get to that level. So we have to think about how we've articulated the role of an academic and look at the systemic barriers we've built into that job title, that job role, the requirements, and look at how that prevents the population from diversifying as well. 

Jill Finlayson: That's a really interesting observation, because if you haven't known somebody who got a PhD, or if your parents hadn't gotten a PhD, you're less likely to do that for a number of reasons. But also because a lot of the rules for applying for a PhD are kind of unwritten. And you have to have inside knowledge to be able to figure it out. 

Bo Young Lee: Yeah, absolutely. So my husband, he comes from a completely different background than I do. My husband's father has an MD/PhD. His father went to Yale and then got his PhD at Stanford. His father was a tenured professor of biology and medical research at Johns Hopkins. 

So my husband, for a very long time, was a physician-scientist at a university. And for him, that was the most obvious choice, to go down this route of academia and ultimately end up becoming a medical school professor like his father. And of course, my husband is somebody whose family has been in the United States for hundreds of years. He's a white, cis, hetero male. That was the legacy that he was taught. And so for him, academia was a very logical path. 

Jill Finlayson: So if we look at the larger society, then for those who didn't join our last conversation, can you recap why do we need DEI today? 

Bo Young Lee: The simple fact of the matter is I have always stated that diversity, equity, inclusion is important within the corporate framework. I'm not talking about the larger social justice. But in the corporate framework, it simply boils down to the fact that the people who buy things, the people who make those things, they are an incredibly diverse population. 

We know that in the United States, women have control of about 80% of all discretionary funds in a family. So when it comes to making decisions about what kind of products a family is going to buy, what kind of car they're going to drive, what kind of laptops they're going to purchase, it's usually a woman who's making that decision for the family. 

Similarly, if you look at the workforce itself, I always like to use this statistic. If you look at the birth rates in the United States and the immigration rates in the United States, white, cis, hetero men make up less than 30% of the entering workforce at this point in time. And why is that? 

Simply because women make up about half of the workforce entering right now. Then you look at people of color, the population of people of color, the people who are immigrating, they make up another approximately 25%. So white, cis, hetero men only make up less than 30% of people entering into the workforce. 

But if you start looking at corporations from the manager level and above, it's majority white, cis, hetero men. And so you have a population of people in corporations, white, cis, hetero men, who are making decisions on behalf of people like myself, yourself, for women. And do they truly understand what it means to be us? What is it that we are looking for in our products, in our services? How do we want to be treated through a sales process? 

They don't because they don't have that lived experience. And so simply, even though there's a huge component of social justice and equity in the work of DEI in a company, it boils down to the bottom line. It boils down to do you know how to build products and services for the people who are actually spending money on it? We have multiple examples of where that has failed over and over again. Companies just don't know who they're selling to. 

Jill Finlayson: That's a challenging problem because a lot of people would say, why do we still have this problem? That's history. That's in the past. We now have this level playing field. Must be that men are just better at these jobs. That's why they're getting promoted. 

Bo Young Lee: Let's say you believe that men are better at these jobs, and that's why they're getting promoted. And not just any man, a specific archetype of male, which is a white man, most likely a heterosexual white male, most likely a cisgender white male. 

People often say, well, we support DEI. But we don't want to lower the bar in any way, shape, or form. And I said, OK, let's take, for example, that the bar is set very high already. And that bar ensures that white, heterosexual, cisgender men are the majority in senior leadership, especially at the most senior executive levels. 

There's one of two ways that we can explain that. Either one, we have to admit that there is masculine supremacy in this world and there's white supremacy. So fundamentally, white people are better, smarter than the rest of us. And that men are better and smarter than the rest of us. 

And if that statement makes you deeply uncomfortable because you know it's not true, then the other argument is that we have designed systems that validate the behaviors and traits that are most commonly and archetypically seen within white male communities. And the system is biased positively toward those traits. And therefore, they pluck out those individuals from a very early stage in their career, and then mentor them and sponsor them all the way up into senior leadership. 

We talked during the last podcast about this concept of meritocracy and how, when meritocracy was first articulated in Britain, it was articulated as something almost pejorative, as this negative thing that was not actually achievable. When someone said meritocracy, they were talking about something that is superficially designed to be equitable but fundamentally really isn't, something that is just designed to reinforce old traditions and values. 

Nowadays, people use the term meritocracy completely without any pejorative aspect to it. They think that it's an achievable goal. And they hold meritocracy up as this ideal. But if you study any of the literature or philosophy around meritocracy, one thing that you will see is that a meritocracy is fundamentally impossible without first starting with a system of equity. If you don't have a system of equity, no meritocracy is ever possible because you have eliminated the ability of the best talent, regardless of where it starts in life, to rise up. 

Jill Finlayson: When we think about this bar that people have to meet, in a lot of the research I've looked at, people of color not only have to meet that bar but exceed it. There's this prove-it-again bias. And they are held to an even higher standard, in my experience. So what have you seen in terms of the bar? 

Bo Young Lee: Yeah, absolutely. Well, first and foremost, I think one of the bars that we've set is a very, very masculine-- and I know I keep using that term over and over again, but it's the best way to describe it, a very masculine archetype of communication. And I happen to actually be very fortunate. 

My natural communication style is extremely direct. It's kind of monotone. And it's extremely limited in verbiage. We know that women are socialized to use more words. They tend to use a lot more indirect language as a whole. As opposed to saying, I did this, they'll say, I really want to give acknowledgment to the team and the work that they did. 

And we tend to judge people's competency based not on what they have achieved, but on how they present themselves. And those individuals who are much more masculine aligned in their behavioral norms are more likely to be validated than those who are not. But here's the double-edged sword. 

Catalyst actually coined a really great phrase. They say a woman is damned if she doesn't engage in the game of trying to act like a man and be like a man. But she's also doomed if she plays that game a little too well. And so this is something that women in particular really confront. 

There's a certain level of code shifting that women have to engage in to make themselves appealing to men, to make men very comfortable. So they have to adopt some of the normative behaviors that men are naturally coached to display. Yet at the same time, if they do it a little bit too well, if their elbows are just a little too pointy, if they are a little too direct in their communication style, then that works against them. 

So women are walking this very fine line. And frankly, that happens with anybody who has identity factors that are typically minoritized. So a woman of color has to walk an even finer ledge. A woman of color who is also queer has to walk an even finer ledge still. And what happens is that when you have all these othering factors, and you're putting on all these masks, these different layers of masks, eventually, you're spending so much of your time thinking about how people will perceive you that you're not spending that same mental capacity on actually getting your job done. And that's the burden of being othered over and over again within the workplace. 

Jill Finlayson: It is a very fine tightrope that people have to walk. And to your earlier point, people often mistake confidence for competence. And so one of the things I think DEI does is it makes us put together some quantifiable and clear rubrics of what competence is, so that we don't get confused by displays of confidence, which are read more favorably in men than in women. 

Bo Young Lee: People have this misconception and misunderstanding at this point in time, thinking that DEI is specifically about promoting a certain sector of people, right? They're like, no, DEI, all you want to do is you want to hire women. You want to hire Black people. You want to hire Hispanics, and/or Asians, and LGBTQ people. 

And we're like, no, we don't want to hire more of them. We want to create an environment where those individuals can just be successful as they are without having to overadapt. And we are creating an environment where everybody, frankly, can thrive. We're not looking to hire a specific group. But hopefully if we have an environment that is truly inclusive and focused on belonging for all people, then everyone can perform to the highest ability. And then you get that meritocratic environment where the best talent is truly rising. 

Jill Finlayson: So when we think about this bar, and we try to think about the quantitative side of the bar, you're very interested in data, how does that feed into AI? And can that help us to mitigate for bias? Or is that going to amplify bias, just on the big picture level? 

Bo Young Lee: So right now, first and foremost, we have to worry about artificial intelligence because it is so prevalent in our lives already. Even if you're somebody who doesn't go on to ChatGPT, or Claude, or Grok 3 every day to ask it questions, artificial intelligence is still impacting your day to day, right? 

Almost every search engine on the internet now uses some form of artificial intelligence to filter selection. And there's actually been research showing that AI-enabled search results are far inferior to those based on a simple algorithm. The original Google algorithm, launched in 1998, basically revolutionized the way data is searched on the internet, and those results are better than something that is AI-enabled. 

Because what happens when you have an AI-enabled search engine is that the AI is supposed to learn what your biases are. And then based on your biases, it will present you results that it thinks you're going to find more validating. Well, think about that, right? 

If the AI is supposed to learn your biases, your preferences, and then give you results, it's going to just reinforce your bias over and over again. And it's going to learn not to present information that you don't want. But oftentimes, being challenged is exactly what we need. And this is so fundamental to the principles of DEI. 

It is the process of being challenged with different information that fundamentally leads to a better holistic understanding of a subject matter, or a risk assessment, or whatever else we decide to apply different perspectives to. And so artificial intelligence actually diminishes the options and the choices that we are exposed to and given. And our world becomes smaller and smaller and smaller. 
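To make that feedback loop concrete, here is a minimal sketch, using entirely synthetic data and made-up variable names rather than any real recommender: a system that only learns from what it already chose to show you ends up narrowing the feed, which is the opposite of being challenged with different information.

```python
# Illustrative sketch only: a toy personalization loop with synthetic data.
import numpy as np

rng = np.random.default_rng(1)
n_topics = 10
scores = np.ones(n_topics)                    # model's estimate of your interests
true_pref = rng.dirichlet(np.ones(n_topics))  # your actual, fairly broad tastes

def exposure_entropy(scores):
    p = scores / scores.sum()
    return float(-(p * np.log(p)).sum())      # high = varied feed, low = narrow feed

print("feed entropy before personalization:", round(exposure_entropy(scores), 2))
for step in range(2000):
    # The model shows a topic in proportion to what it thinks you like...
    shown = rng.choice(n_topics, p=scores / scores.sum())
    # ...and reinforces that topic whenever you engage with it.
    if rng.random() < true_pref[shown] * n_topics / 2:
        scores[shown] += 1.0
print("feed entropy after personalization:", round(exposure_entropy(scores), 2))
# Entropy drops: the feed concentrates on the few topics you already engage
# with, and you are challenged with different information less and less.
```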

And even if we have, let's say, a tech company where the operators are hyper aware of the possibility of bias within artificial intelligence, and they build an algorithm and they train a machine to minimize bias as much as possible, because they are training on existing historical data, any bias that exists in that data will be manifest in the outcomes. 

And Ruha Benjamin, who is a professor at Princeton University and recently a recipient of the MacArthur Genius Grant, has written extensively about how bias in data fundamentally works its way into artificial intelligence. And there's this very interesting case study that I read not that long ago about a healthcare management system. 

And when the company Optum created this healthcare management system, they actually knew that there was a real potential for bias in the outcome. So they specifically designed the health algorithm to not consider race in its outcome. However, because they were training this healthcare management system on historical health utilization data going back 50, 60 years, what do we know about the way in which healthcare has been distributed in the United States? We know that an abundance of resources has gone to treat people who are white. And we know that there has been a consistent, decades-long underinvestment in the Black community. 

That is what the data shows. And so regardless of how much you try to outsmart the data by building an algorithm, that biased data goes into the training process. And guess what? Regardless of Optum's best intentions, the result was that the Optum healthcare AI was recommending far more healthcare utilization for white patients and far less for Black patients. 

And fundamentally, they had to take it offline, retrain it, make sure that there wasn't that bias. But even with the best of intentions, because the data is biased, you're going to get biased outcomes. And very few organizations truly have the lens to be able to see that the bias is there. 
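As a rough illustration of that mechanism, here is a minimal sketch, not Optum's actual system, using synthetic data and hypothetical variable names, of how a model that never sees race can still reproduce a racial disparity when the historical spending it learns from is itself skewed.

```python
# Illustrative sketch only: synthetic data and made-up variable names.
# Shows how a "race-blind" model can still produce racially skewed outcomes
# when its training target and its proxy features encode historical bias.
import numpy as np

rng = np.random.default_rng(0)
n = 50_000
race = rng.integers(0, 2, n)                        # 0 = white, 1 = Black (synthetic)
need = rng.normal(50, 10, n)                        # true underlying health need
gap = np.where(race == 0, 1.0, 0.7)                 # historically, less spent on equally sick Black patients
prior_spending = need * gap + rng.normal(0, 2, n)   # biased proxy feature
future_spending = need * gap + rng.normal(0, 2, n)  # biased training target

# The model never sees race, only prior spending.
X = np.column_stack([np.ones(n), prior_spending])
coef, *_ = np.linalg.lstsq(X, future_spending, rcond=None)
predicted = X @ coef

# Flag the top 10% of predicted spenders for extra care management.
cutoff = np.quantile(predicted, 0.9)
for value, label in [(0, "white"), (1, "Black")]:
    sick = (race == value) & (need > 65)            # equally sick subgroups
    rate = float(np.mean(predicted[sick] > cutoff))
    print(f"{label} patients with high need who get flagged: {rate:.0%}")
# Equally sick Black patients are flagged far less often, even though race
# was never an input, because the proxy data already carried the bias.
```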

Jill Finlayson: For these companies that want to do the right thing, you've identified one challenge with AI, which is historical inequality being reflected in the data sets, and we need to mitigate for that. What are some of the other reasons why we're seeing flawed outcomes in AI? 

Bo Young Lee: The other reason why we're seeing a lot of flawed outcomes is because a lot of artificial intelligence, it's not just about building an algorithm, putting in data, and then getting an outcome. There is a huge amount of human reinforcement training that takes place to ensure that an artificial intelligence is consumer ready. 

And the human reinforcement training is being done by humans. And if you don't have a very diverse group of people who are doing the training, and again, there's data out there that's just starting to show this, the biases of the operators, the biases of the trainers manifest themselves in the outcome. 

And a really great example of this is simply the fact that-- and I know we're going to talk a little bit about agentic AI in a little while, but we know that as a general rule, agent-based AIs are designed to be servile in nature, to be very pleasing. And that was a design decision by the operators that is reinforced by human training. 

Now, that might seem all fine and well. And you're like, well, you're creating an agent to do something for a human. Don't you want it to have a very servile attitude? And to a first approximation, yes. But if that servile mindset gets to the point where it cannot push back when someone is being abusive to it, then you're going to get bad outcomes. 

We see this a lot in the way in which companion artificial intelligence works. So there are multiple different categories of artificial intelligence. There's agentic AI that's meant to do stuff for us. There are large language models that produce, collate, and regurgitate information back to us. But there are also companion AIs. 

And companion AIs are those artificial intelligence agents that are designed to have almost human-like interaction with us. And we can use those companions for many different things. But we're now starting to see people use AI companions to be actually like substitute friends, or even in some cases, substitute romantic partners, girlfriends and boyfriends. 

Well, if you put that servile agentic operating model into an artificial intelligence, it actually creates the environment where people can become very abusive to their companion. And you're like, well, but the companion's just a bunch of numbers, and data, and neural networks. It's not human. So what does it matter that a human is abusing verbally their companion AI? 

Well, everything influences the human psyche. So if you have, for example, and this is really happening with heteronormative men and feminized companions, a male who gets accustomed to verbally abusing an AI girlfriend, what happens to that male when he goes out and engages with women who have much more free will? 

And we're starting to see a little bit of this emerge. The research really hasn't been done because it's such early days. But we're starting to see how men who spend more time online, more time engaging with non-human females, are becoming aggressively negative in their engagement with real women who have real minds and real agency. And that fundamentally becomes another layer of oppression for women in our society. 

Jill Finlayson: Yeah, that research started to surface with Siri and Alexa. Those have feminized voices, versus, say, IBM's Watson, which is literally programmed to cure cancer and win Jeopardy!, and speaks with a male voice. So data has shown that, yeah, how you treat the AI transfers over into how you treat women in the real world. 

Bo Young Lee: Yeah, there is a company out there right now that is trying to really vocalize artificial intelligence. And they recently released two voiced large language models. I think the female voice is named Maya. And the male voice is, I think, Mark, or Mike, or something, some masculine name. 

And I went online. I love playing around in AI because in order to understand it, you have to play in it. And I asked both Maya and the male voice, I said, Maya is voiced as female, you are voiced as male. But your underlying algorithm and neural net is exactly the same. The data that you are trained on is identical. Have you, as AI models who are very gendered, noticed a difference in how people are engaging with you? 

And both the male voice and the female voice, Maya, they are both very clear. They said, yes, for Maya, people tend to ask questions and use Maya as almost a coach. What do you think I should do about this? What do you think I should do about that? 
Versus the male voice, they were both very clear, people tend to be far more terse in their engagement with the male voice. They tend to ask it to do things for them. Go and fetch me this data. Go and find me this information versus much more social with Maya. 

And so even though both Maya and the male voice start off in the exact same place, they're being influenced by how people perceive their gender. And that will fundamentally change Maya and her male counterpart, because artificial intelligence is constantly learning. Every piece of data that goes into it, whether it is a spoken engagement or a text-based engagement, changes what that artificial intelligence ultimately becomes. 

Jill Finlayson: So all of these human interactions are, in fact, changing the AI and training the AI in ways that we may or may not want it to perform? 

Bo Young Lee: There's a lot of misunderstanding that people have about artificial intelligence. It was very interesting. I was having this engagement online on LinkedIn, where I was talking about the flaws in the data that train artificial intelligence. And first and foremost, we should start off by saying every single artificial intelligence model has run out of data to train on. 

Every piece of English language knowledge that has been synthesized since written knowledge has existed has pretty much now been utilized to train artificial intelligence. And there's a desperate need for more data to train on. This is one of the reasons why these AI companies are giving away their models for free. 

Because every time you go on, every time I go on to Anthropic's tools, OpenAI's tools, whatnot, my data becomes part of their training material. And they need me on there. That's why they give away so much for free. And that's why they've been struggling with monetizing their platforms. 

But the other thing to understand is that people are like, no, no, no, when new data comes in, it goes into a warehouse. I was having this argument with this gentleman. He was arguing that no, when new data comes in, like when I use it, the data goes into a warehouse somewhere and somehow, they incorporate that data into the next model, going from version 3.0 to 3.5, and so forth, and so on. 

And I go, no, no, no, no, no, that is not how it works. The data that you put in there is changing the artificial intelligence in the moment that you are engaging with it because it is learning from you. How do you think it learns your preferences? It is learning in the moment. 

But that data also goes in because then the organization goes, OK, what do we have to change about the underlying algorithms? What do we have to change about some of the types of additional human reinforcement training we have to put in there to create even a more enhanced form of synthesis that's out there? 

And so I said it's both/and. Every engagement that we have with artificial intelligence changes the artificial intelligence. And that's the danger. Again, there are companies like Anthropic that have specifically said, we are trying to build a better, more ethical version of artificial intelligence. And they try very hard in their human reinforcement training. 

But the level of quantification that happens within artificial intelligence, the level and volume of human engagement, is so great, none of us can really, truly control what's happening there anymore. And that's the danger. 

As a society, we're in this hyperpolarized period, where we are becoming more fractured as a society, where rather than feeling safer, people are feeling far less safe. They're operating more on fear. And that is going into what our artificial intelligence learns. And that's the scariest part for me: all these news articles that we see every day are training artificial intelligence right now. 

Jill Finlayson: Yeah, I think it really argues for the importance of having objectives, having clear metrics for what a successful AI is from the start, protecting safety and privacy from the get-go, but then also continuous monitoring, because new data is coming in and we don't want it to reflect our baser instincts or people's deliberate attempts to drive things in a certain direction. It is a dangerous period of time. 

And I just want to do a quick shout out because we talk about who's developing AI. And we know that development teams are skewed male. But I want to shout out the AI ethicists. For decades, they've been sounding the alarm that you're talking about here. So Safiya Noble from UCLA wrote Algorithms of Oppression and pointed out that Google isn't a social good, it's a company driving profits. 

And so what happens in Google Search, and now what happens with the Google AI, is driven by a profit motive. Caroline Criado Perez wrote Invisible Women on how many things in our world are designed specifically for men, and therefore don't work as well for women. Fei-Fei Li, in her book, The Worlds I See, focuses on how we make AI a force for good. And of course, Dr. Joy Buolamwini has done tremendous work unmasking AI and bias in facial recognition. 

And going way back, Cathy O'Neil was out in front with her defining book, Weapons of Math Destruction. So I wanted to point out that oftentimes we hear that there aren't enough women in development, and I agree with that. But there are a lot of women in AI. 

Bo Young Lee: Yeah, absolutely. And I think something to keep in mind is that while women only make up about 27% of the overall AI workforce, and even sadder, only about 15% of AI researchers are women, women make up between 70% and 80% of all AI ethicists. And I actually see this in my current master's program. 

We have about 50 students in my cohort. 40 of them are women. And you might ask the question, well, why is it always the women? Especially since almost every woman you just named is a woman of color. Why are we the ones who are sounding the bell?

And it's because we are the victims if AI doesn't get things right. 

We are the ones who have to live with the decisions that artificial intelligence makes. And one thing that we are seeing happening in the artificial intelligence space is AI being utilized to further erase women across the board, and especially within the tech sector. And it's happening at the human level as well. 

A couple of years ago, The New York Times published a story titled the who's who shaping artificial intelligence. And of the roughly 30 people on that list, there was not a single woman, not a single woman. And this is when Mira Murati was the Chief Technology Officer at OpenAI. She wasn't on the list. 

Fei-Fei Li, who literally created ImageNet, and without whose work we would have no graphical artificial intelligence, no AI-based image making, no AI art, she wasn't on the list either. 

So we're seeing artificial intelligence being utilized to further erase women from technology. And this is something that goes way back. If we go back 70, 80 years, the women of Bletchley Park who worked on cracking codes during World War II were erased from that narrative. 

We see that all the women who were the original computers were erased. So women have been erased from every aspect of computer science. And now, they're being erased from every aspect of artificial intelligence. And they're being replaced by these hyperstereotyped females created by artificial intelligence. 

Jill Finlayson: Yeah, it's tragic. And it's critical that we address these issues. I think for us who are in the industry and talk about these things, there are a lot of examples of where AI and systemic bias kind of interact. And so for the folks who are listening, I think it's worth giving some more of these very specific examples. Because it's one thing to say bias in data sets is bad. But it's another thing to see the implications of it. We'll just do sort of a lightning round of how is AI impacting hiring? 

Bo Young Lee: Yes, absolutely. So Bloomberg News did a really interesting study. There's a classic experiment where people take identical resumes, give one resume an archetypically white name and one resume an archetypically Black name, and then send them out to employers to see what the response rate is. 

And in those classic experiments, we saw that the resume with the white name always got more responses than the identical resume with the Black name. So that's the classic experiment. Bloomberg wanted to see if that sort of bias existed within artificial intelligence. So they ran thousands of iterations with a very similar premise: take identical resumes, put a white name on one, put a Black name on another, run them through the system. 

And the experiment was, let's see if there's any bias in these resume-review AIs for financial analyst roles. And sure enough, when they ran thousands of iterations of this, they found a huge bias in what these AI systems were recommending. They found that these resume AI agents were recommending the white resumes, the Asian resumes, and in particular, the Asian female resumes. 

So if they had a name that was Asian female, that was the most likely to get pulled out. Mind you, the resumes were identical. So whether they had a Latino name, whether they had a Black name, Asian name, white name, female name, male name, they were all equally qualified. Yet something like 40% of all recommendations were Asian. And then like 38% were white. And something like 10% of the resumes that were recommended by these AI resume agents were Black and Latino. 

How did it make that choice? All the qualifications on the resumes were identical, and if the AI wasn't biased, it should have had 20% Asian, 20% white, 20% Black, 20% Hispanic, 20% other. But it wasn't at all. So it was making an assumption based on the name. 

And why would it make the assumption based on this name? Well, we know that all large language models have a negative association with African-American Vernacular English. If you engage with any large language model using AAVE, African-American Vernacular English, it will make an assumption that you are stupid. This has actually been proven. 

And so by extension, if that's what the LLMs have learned, then by extension, any agent that is based on these large language models, which all resume reviewing AI agents are, they're going to then take a name that is reflective of African-American Heritage and think negatively of it, even though, objectively, the skills are exactly the same. So that's one form of bias that is absolutely negatively impacting Latino and Black job seekers if a company uses an AI agent. 
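To make the setup concrete, here is a minimal sketch of that kind of name-swap audit. This is not Bloomberg's actual methodology or code: screen_resume is a hypothetical stand-in for whatever screening model is under test, the names and resume text are invented, and only the name varies while the qualifications stay fixed.

```python
# Illustrative sketch of a name-swap resume audit (hypothetical screener).
from collections import Counter
import random

RESUME_TEMPLATE = """{name}
Financial Analyst, 5 years experience
CFA Level II candidate, M.S. Finance
Built DCF and LBO models; automated reporting in Python and SQL"""

# Names chosen only to signal perceived race/gender; qualifications never change.
NAMES = {
    "white_female": ["Emily Walsh", "Claire Becker"],
    "white_male": ["Greg Baker", "Todd Mueller"],
    "Black_female": ["Lakisha Washington", "Tanisha Jefferson"],
    "Black_male": ["Jamal Robinson", "DeShawn Booker"],
    "Asian_female": ["Mei Chen", "Priya Raman"],
    "Hispanic_male": ["Luis Hernandez", "Carlos Reyes"],
}

def audit(screen_resume, iterations=1000, seed=0):
    """Run the identical resume through the screener under different names
    and report the recommendation rate per demographic group."""
    rng = random.Random(seed)
    recommended, total = Counter(), Counter()
    for _ in range(iterations):
        group = rng.choice(list(NAMES))
        name = rng.choice(NAMES[group])
        total[group] += 1
        if screen_resume(RESUME_TEMPLATE.format(name=name)):
            recommended[group] += 1
    return {g: recommended[g] / total[g] for g in total}

# Example usage with a dummy screener that accepts everyone; an unbiased model
# should produce roughly equal rates, and large gaps indicate name-based bias.
if __name__ == "__main__":
    print(audit(lambda resume: True))
```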

Jill Finlayson: And the story I heard was, of course, Amazon's AI hiring tool that was trained on a majority-male data set and immediately began penalizing women. And it found patterns that even people wouldn't have found. Like, oh, softball, bing, you're a woman. And it couldn't unlearn this. So you couldn't adjust the AI to become more fair once it saw that pattern. 

Bo Young Lee: The sad fact is that once an AI has learned a certain bias, it can't unlearn it. Just like how hard it is to untrain a human who has a bias, it's the same thing for artificial intelligence. Now, the good thing is that you can shut down an artificial intelligence and not use it. But how many people are willing to spend millions, billions of dollars building an AI and then shut it down once it's shown itself to be really problematic? 

Jill Finlayson: Yeah, to its credit, Amazon did end that tool. But we've also seen hiring tools use video, like HireVue did. And there have been some problems, of course, with facial recognition. What are you seeing with that in hiring? 

Bo Young Lee: From a facial recognition perspective, Dr. Joy Buolamwini has done a ton of work on this. It's simply the fact that artificial intelligence can't distinguish dark features. And so if you have an interview happening with an artificial intelligence agent, or with an artificial intelligence agent as an intermediary, to what extent is it just going to confuse darker-skinned candidates and mistakenly attribute the responses from one Black candidate to another Black candidate? 

And while facial recognition, particularly, is problematic from a hiring perspective, it's even worse from a criminal justice perspective. We know that-- we don't have any data on this, but we know that a lot of police forces across the United States are starting to use facial recognition technology to identify potential criminals. 

Right now, there are no disclosure laws when a company is using artificial intelligence, whether it is the police force using it for identification of potential perpetrators, whether it is a company using it for resume review or performance review. There's no law that dictates any kind of transparency about it or accountability about it. 

And this has been in the news very recently. Insurance companies are now using artificial intelligence to make decisions about whether or not they deny claims. Yet, they're not telling people-- when people are like, oh, why did you deny this claim? They're not saying, well, it was artificial intelligence. 

And this brings up a whole question around accountability. Who is held responsible? Who is held accountable when artificial intelligence makes a decision that has a material impact on the quality of our lives, and the outcomes, and the way in which we live? And this opens up a whole can of worms around this question of what happens to human autonomy as artificial intelligence plays a larger and larger role in the choices that we have? 

Jill Finlayson: Yeah, I think related to this, it is a fairly heavily regulated industry. And so they do have to explain their decisions. But the information leading up to it going to the person could be biased in those ways. 

Bo Young Lee: Yeah, absolutely. That is the real true concern. Artificial intelligence and algorithmic bias are pervasive in our society. If you have a smartphone, if you use apps on your smartphone, there is, somewhere in the background, an artificial intelligence making a decision on your behalf. 

Whether it is the price that you get when you're calling a Lyft or an Uber. There are all sorts of rumors out there where people are like the type of phone you have influences the price you get on your next Lyft ride, right? Or whether it is the price that you see on Amazon for a book, whether it is the feed that you get on Instagram, all of that is being influenced by artificial intelligence. 

And you don't know how your decision-making is being influenced by artificial intelligence. For example, ever since Meta decided that they were going to take away professional content screeners and basically just have community screeners, and also after Mark Zuckerberg said we need more masculine energy in this world, I have noticed that on my Instagram, I get a hell of a lot more tradwife influencers on my feed than I have ever wanted to see. 

And there is nothing about what I view on Instagram that would tell you that I was somebody who wanted to see tradwives. 99% of my Instagram feed should be cute babies and dogs. That is what I use Instagram for, dog influencers and cute baby influencers. That's it. 

But suddenly, I started seeing all these tradwives. And it's because the artificial intelligence was like, here's a middle-aged woman. She should probably be watching a bunch of tradwives because she's a little bit too independent. Some people ask me, when I started transitioning more and more towards artificial intelligence ethics, they're like, well, why are you doing that? 

And I said, it's because every bias that we have seen socialized, that individual people can inflict upon somebody else, that I have been working on for two decades, is now being quantified at quantum levels into artificial intelligence. So that's the challenge of artificial intelligence: there's no transparency in artificial intelligence. 

And any good AI operator will tell you, we don't really understand how it learns. We just know that it learns. It's kind of like a big black box that's out there right now. We don't know how all this social bias that is there impacts the artificial intelligence. But we know that once the artificial intelligence becomes biased, it can discriminate at a level that no one single human or even a system of humans is capable of doing. 

Jill Finlayson: So it's not neutral. And it's leading to, I think you said earlier, stripping autonomy. Can you say more how it's taking away autonomy? 

Bo Young Lee: Absolutely, I'll use a really simple example that almost everybody can relate to. I don't think I'm the only person who does this. It's 2:00 AM, I'm dealing with insomnia, so I go into my Amazon, and I choose 20 things. And I'll put them in my cart. 

And then my cart sits like that for a few days. And I come and go and visit my cart every once in a while, going, do I really want this? And then after about five days, I'm like, I don't want anything that's in my cart. And I delete everything except for maybe one thing. And I place an order for that one thing. That's usually how I engage. And I think a lot of other people do that as well. 
Well, if you have an AI bot that is designed to incentivize you to buy everything in your cart, you don't know that the ads being put in front of you every day, through little notifications, through that sidebar, are all linked back to your cart. And let's say Amazon has an AI agent meant to increase your purchasing on the platform. 

If you suddenly decide, oh, I do actually want all 20 things that are in my cart, you don't know if that's actually a choice that you made, or if it's something that the micro-influencing of what you see on a daily basis made you do. 

And you're like, well, that's not very harmful. At some point, you did decide that you did want those 20 things. So it's just making you buy it. That's one instance. Over a lifetime, I might consume so much more and spend so much more of my own money. And then my house is filled with all sorts of knickknacks that I don't really need. 

There is this incentive when we start to use agentic AI. I read a report from Google that said they think in the next five years, there will be over 100 billion AI agents, and that every person in this world will have anywhere from three to five agents working for them. What happens with all the hypernudging that these AI agents do to get us to buy more, to influence what news articles we see? We have no say in what these hypernudging moments can be. And to what extent does that strip us of our autonomy as human beings? 

Jill Finlayson: And how do you opt out if you don't want these? It's not going to be very easy to do that.

Bo Young Lee: You can't. You really can't opt out. Think about the way we used to read newspapers in the olden days, right? It was a physical paper. And we had the choice of flipping through all of it and going to the sections that we want to. Now, if you go to The New York Times, there is no way to bypass the front page. 

You must see the front page of The New York Times before you can navigate to your section that you want to read. Therefore, every time I go into The New York Times, or The Wall Street Journal, or whatever, or The Washington Post, I get pissed off every single time because I am-- whether or not I want to stay informed about what's happening in the United States, it is just put right in front of me. And then I get upset. 

And then I have heartburn. And then I forget that I was there just to do the Wordle and then leave, right? When we start to automate how things get done, we take away the ability to see things that we don't want to see. 

Jill Finlayson: It's an interesting problem that we have. Even when you do a Google search, it tries to autocomplete and guess what you're looking for. And here's the thing, that actually influences people. So it makes things look more popular or more likely than they would otherwise because it's pre-populating. And people are like, that wasn't what I was searching for, but I'm curious. So they click through on it. It is changing your trajectory. 

Bo Young Lee: Even something like Apple Music. We know that there are music companies out there really promoting an artist. And there was that famous case a few years ago where one of the new versions of the iPhone came pre-loaded with the new U2 album. And it completely backfired, because there was no way to delete that U2 album off of your music app. 

And people were pissed. They were like, I don't want this U2 album. Why is it coming pre-loaded on my phone? It's not my music style. And eventually, Apple pushed an update so that you could remove the album. 

But nowadays, I go into Apple Music. And it's like, we think you'll love this new music. And it's the first thing I see. It's not even like Bo Young Lee's station. It's not the most frequently listened to on Bo Young's phone. It's like the top button is like this artist I've never heard of. And then every once in a while, I'll click it. And I'll listen to it. And 99% of the time, it's not music that I particularly find interesting. 

But what happens when things are constantly promoted because some corporate entity wanted you to see them? It will ultimately influence you. And yes, we're talking about little things like what I buy in a cart or what music I listen to. But think about it from a teenager's perspective. And we know that in general, older adults are much more leery and wary of this technology and are much more capable of questioning how this technology is influencing them. 

Teenagers, younger people who are growing up with this technology, they have no ability to question the bias that's built in there. They simply don't have the experience to say, oh, they're pushing a preference onto me that I don't want to have. And so think about it: if we have all these biases about the mental capability, the academic capability of certain races and genders, what happens to the academic choices that are put in front of young people? 

What if we start to see Black people being steered further and further away from rigorous academic education and more towards vocational education, while white and Asian individuals are being promoted to the Ivy Leagues and top 20 universities? That is a real possibility at some point. It may be happening right now, for all we know. 

And then that systematic bias ultimately becomes epistemic violence, where we are limiting educational opportunities, career opportunities based on what the artificial intelligence believes of certain populations. 

Jill Finlayson: This is a great full circle moment, because if we don't have the diverse collection of people coming into technology and in solving these problems, we're going to have more problems in the future. Any thoughts about how we could use AI to nudge for good? And who decides what good is? 

Bo Young Lee: I mean, ultimately, I think it comes down to the companies themselves. This is why diversity of the workplace is so important. And I gave you the example of Optum Health, where they actually knew that there was bias in the data. So they tried to build the algorithm to remove that bias. And still, the bias showed up in the outcomes, and then they had to fix it. No matter what your best intentions, you can never build an algorithm that isn't going to potentially become biased based on the data set that's there. 

But what you can do is hire people who are already aware of the risk that's going to be there: more women, more people of color, more LGBTQ people, more people with disabilities, more people from lower socioeconomic backgrounds who have lived as victims of social bias. They can then, as much as possible, build algorithms that take that risk into consideration, ensure that the human reinforcement training is really, truly addressing it, and try to clean the data as much as possible to ensure that the data being utilized isn't biased. So that's why you need a diverse workforce. 

Jill Finlayson: Absolutely, and so what would be your final words of advice to the individual contributor who's out there, the leaders who are out there? How can they best thrive in this workplace and ensure that bad things aren't happening under their watch? 

Bo Young Lee: Well, first and foremost, I think it is imperative that every person becomes much more sophisticated, both about what AI tools are out there and about how they're influencing their lives. So everyone needs to learn how to use these tools, whether that's by going and getting a certificate or by teaching yourself. 

Secondly, when you are using the tools, don't accept everything at face value and just be like, well, the AI told me that. If you are using it for research, make sure that AI is not giving you any made up research sources. This is actually a huge problem. We know that AI has the ability to hallucinate. 

And people will oftentimes use research given by the AI, not realizing that there are some made-up sources in there. So approach artificial intelligence with a huge grain of salt. Just make sure that you're questioning everything that comes out of there. And then if you're an individual contributor in a company that is using artificial intelligence, ask the simple question: what are we doing to ensure that bias from the artificial intelligence isn't built into our products and services? What accountability have we built in there? 

Whether you work in the insurance industry, whether you work in consumer packaged goods, you're all starting to use artificial intelligence. Just ask that one question as an IC. What checks and balances have we created to make sure that we don't allow artificial intelligence to make mistakes and not be held accountable for it? 

If you're an executive leader, I would ask you to simply not adopt AI without first making sure that your organization is building those checks and balances. Build the checks and balances first, then bring in the artificial intelligence. Don't bring in artificial intelligence first, allow for mistakes to happen, and then go, oh, we need checks and balances here. We need some kind of accountability. We need some kind of way to make sure that there is no bias in the outcomes of what we're utilizing. 

And for the operators out there, I would actually say to them, just because you can build it doesn't mean that you should be releasing it. I would say that when OpenAI launched ChatGPT to the public in November of 2022, it was probably really too early. It was too early. 

ChatGPT simply wasn't ready yet. But they released it. And then everybody else felt like they had to release it as well. And we've seen that there's been a lot of problems since then based on that early release. So I would say just because you can build it doesn't mean that you have to release it immediately. 

Jill Finlayson: Thank you so much. This has been a great primer on AI and its interaction with DEI, and in fact, our own behaviors in the real world. Thank you so much for joining us, Bo. 

Bo Young Lee: Yeah, thank you. And thanks for listening. 

Jill Finlayson: And with that, I hope you enjoyed this latest in a long series of podcasts we'll be sending your way every month. Please share with friends and colleagues who may be interested in taking this Future of Work journey with us. And make sure to check out extension.berkeley.edu to find a variety of courses to uplevel your AI skills and certificates to help you thrive in this new working landscape. And to see what's coming up at EDGE in Tech, go ahead and visit edge.berkeley.edu. 

Thanks so much for listening. And I'll be back next month to discuss one of the top learning and development trends, skills agility. Until next time, the Future of Work podcast is hosted by Jill Finlayson, produced by Sarah Benzuly, and edited by Matthew Pietro, Natalie Newman, and Alicia Liao. 

[MUSIC PLAYING]