The Future of Work Transcript: The Democracy of Data

Read the full transcript from this podcast episode

[MUSIC PLAYING]

Mia Dand: How do we distinguish between misinformation, disinformation, and the credible data out there? Because there's so much information being created-- every person is generating more data and creating more content-- how do you make sense of it all? So that's a huge issue, a bigger one than it first appeared to be, and it has real consequences for democracy.

Jill Finlayson: Welcome to the Future of Work podcast with Berkeley Extension and the EDGE in Tech Initiative at the University of California, focused on expanding diversity and gender equity in tech. EDGE in Tech is part of CITRIS, the Center for Information Technology Research in the Interest of Society, and the Banatao Institute. In this episode, we look at data.

There is no ignoring it: data is driving business forward. Data informs good decision-making. Data can promote greater sustainability.

And as a result, companies are empowering their employees-- and even expecting more of them. They want them to explore, understand, and communicate with data. As we look at the future of work, every role at every level of an organization will need data literacy skills. And with those skills comes a strong need to ensure that we mitigate bias in the data sets we use.

Data is the very foundation of technologies like artificial intelligence. How far and wide will the gates to data be thrown open, and why does it matter to you? To explore the role of data and ethics for you in the future of work, today we talk with Mia Shah-Dand.

Mia is the CEO of LightHouse 3, an emerging technology advisory firm, and founder of Women in AI Ethics, a global nonprofit initiative to increase representation, recognition, and empowerment of women working on ethical and responsible AI. She created the first 100 Brilliant Women in AI Ethics list in 2018, which is now published annually, and an online directory to help hiring managers and conference organizers recruit more multidisciplinary talent.

In 2022, with support from the Ford Foundation and Omidyar Network, her nonprofit launched the I Am the Future of AI campaign to highlight nontraditional career paths into AI and encourage more folks from nontechnical backgrounds to join the space. Mia is also on the advisory board for the Carnegie Council's Artificial Intelligence and Equality Initiative, and has hosted hundreds of programs to share groundbreaking work by diverse experts, to lower barriers to tech for underrepresented groups, and to democratize access to AI literacy and education.

Welcome, Mia, I'm so excited to chat with you today. I would love for you to tell us a little bit about what brought you to your own career path. What did you want to be when you grew up?

Mia Dand: Thank you so much, Jill, for having me today-- very excited to be here. My status is that of an outsider in the tech industry. For a lot of women, when we talk about sexism, the first time they encounter it is outside the home-- in the workplace, or maybe at school.

That's the first instance where they're encountering it. But I grew up in a very traditional family. I grew up in India.

And in the environment I was brought up in, the first instance of sexism you face is within your own household. So I grew up in an environment where men were cherished, where men's voices were heard. Women were always playing a more supportive role-- we are supporting them, supporting others, taking care of people. We are the caregivers.

So as I was growing up, I didn't have a lot of options. I didn't have any career ambitions. It was when I immigrated to the United States-- I was married at a young age-- that things changed. I moved here, and I started seeing so many parallels with where I grew up: the lack of opportunities for women, the underrepresentation.

I saw a lot of that being mimicked in this developed country. And I saw a lot of it exacerbated in the tech industry, which tends to be so homogeneous, even today. There are so many amazing women in this space, but you never hear their voices. So I'm taking those learnings from a very young age, what I have personally experienced, looking at what's happening in Silicon Valley and the tech industry today, and using those experiences to make a change.

So the work I'm doing today is in the space of responsible AI. My company LightHouse 3 focuses on helping large companies adopt and deploy technologies in a more meaningful, more responsible way, but also on making sure that we are always centering the experiences of women and people of color who are outside of these spaces. Because we are in a super bubble right now.

It's the tech-academia super bubble. You either have to be very privileged, have these advanced degrees, or be well connected. It's typically a white male who's being heard. So my goal, my mission, is to change that: have more diversity, not just skin deep but diversity in the true sense, where people from multidisciplinary backgrounds are being heard, where women and people of color from socioeconomically disadvantaged groups are being heard, so that it doesn't become just an echo chamber of people with access.

Jill Finlayson: So as an outsider, how do you pierce that super bubble? How do you get in and how do you have credibility when you start to have these conversations?

Mia Dand: Great question. So when I landed in Silicon Valley, my first job was at eBay. The landscape was so different. This was a time when Jeff Bezos was still building his warehouses, and no one was paying attention to Mark Zuckerberg yet.

No one was asking: what are the implications of all of these developments? So with my status of being somewhat of an insider, but also personally as a woman of color, an immigrant, and a non-engineer in this space, I got to see firsthand the sexism, the racism. A lot of the men in high-ranking positions at that time-- there were very few women-- all had stay-at-home spouses.

They were all in these roles, and their wives were doing all this unpaid labor, which is fascinating. And I'm talking about 15 years back-- you fast forward to the pandemic, and it's all been blown open. These are conversations we're having every day now: unpaid labor, women not being heard, women having to do the bulk of the caregiving.

How do we move this forward? The work I'm doing with Women in AI Ethics-- an initiative I started to give women a seat at these conversations and to elevate their voices-- is, I believe, also the way we can get more representation in these spaces where, traditionally, we just had a wall. We're just staring at this blank wall.

And it's high, right? It's not just a wall, it's a high wall. And then we make it harder, and harder, and harder for women and people of color to climb it.

Jill Finlayson: Can you tell me, in your own words, why data literacy skills are becoming more important?

Mia Dand: It all started-- again, we're talking over a decade back-- when social media took a lot of folks by surprise. And we're talking millennials, and Gen Z, who have grown up with these technologies. They literally had a powerful AI tool at their fingertips-- their smartphones, right?

But when we got started with social media, it was a tsunami of information. It's so ubiquitous today that everybody's sharing and posting selfies. But back then, we're talking about a world where none of this existed.

It would have seemed so silly to take a selfie in, say, 2003, for example. You fast forward, and who isn't taking a selfie and posting it out there? That pretty much opened up the floodgates.

Organizations were already struggling with the influx of data they were gathering from their customers, but social media took it to a whole other level. There is so much information just coming and flowing through different channels.

We have social media, we have digital channels. How many platforms do we check every day? You get up in the morning, you're looking at your emails. My email inbox is such a mess right now, I'm scared to go in there.

And you don't have just one email, right? You have your personal email, you have a work email, you have multiple emails that you are tracking. On top of that, you have your social media channels. So you have information coming in through Twitter, LinkedIn, Facebook, Instagram, TikTok, if you're on TikTok.

So you have this information tsunami-- and it's exponentially greater for organizations, which are facing this flood of information and trying to figure out: how do we manage this?

Jill Finlayson: I, too, worked at eBay in the early days and saw the same thing, right, the very beginning of data-informed companies. eBay was one of those: it was tracking, by the minute, how many listings there were, which listings were growing, and which were growing faster. That's an example of a company being informed by data and being able to prioritize and shift roles and goals.

But you're absolutely right, on the private side, as well. I agree email's broken. And I get all the other channels, like Slack, in addition to email. So I think we're ripe for innovation in that space.

But to your point, individuals need to have data literacy not only when they're interfacing with all of these social media channels, but when they're evaluating data, right-- when they're evaluating the stories that they see: is that from a reputable source or not?-- because of so much disinformation that's out there. So I agree, data has just become part of both your personal and your professional life. And now, it's going into something else. It's going into artificial intelligence.

So data is the fuel, or the foundation, for a lot of algorithms that are going to be making decisions for us in the future. Can you tell people briefly what machine learning is, what artificial intelligence is, and what their connection to data is? Help us understand and connect the dots there.

Mia Dand: There was a crisis not just in how this data is managed-- how we wrangle all this data, personally and organizationally-- but also in how we distinguish between misinformation, disinformation, and the credible data out there. Because there's so much information being created-- every person is generating more data and creating more content-- how do you make sense of it all? So that's a huge issue, a bigger one than it first appeared to be, and it has real consequences for democracy.

Our inability, if you will, to parse that data is why data literacy-- information literacy, really-- is becoming more critical. So, speaking of artificial intelligence-- artificial intelligence has been around for a while in different forms. The Turing test, for example, is about mimicking human beings, and then the field goes through these winters when there's no funding, and so on.

So again, we are in an AI hype cycle right now. Artificial intelligence, as a discipline, as an industry-- the "AI" itself is mostly hype. Peel back the layers and look at what is underneath, and it's just machine learning models. A machine learning model is, literally, you programming a computer to do a task.

You're teaching it to behave more like a human, which is the "artificial" part of the intelligence. We all know that it's not as intelligent as it sounds, but that's more the marketing side. The actual engine of machine learning runs on data.

When a machine learning model is created, it is, literally, somebody making decisions about how to create that model. The basis of it is a statistical model-- there's a statistical underpinning to the algorithm you're creating. It's a rule-based system.

Machine learning is a system where you're putting in, on one hand, your decisions, your intelligence about the problem you're trying to solve algorithmically: what kind of model do I build that is going to help me solve the problem? Am I doing prediction? Am I trying to classify something? There are so many different uses of machine learning where you're programming your computer to do a task.

But it's really up to the person who's designing this to decide: what are the parameters? What is going to go into this model? What am I not going to put into this model? And then the model is trained on data sets, and this is where we're hearing a lot about bias in data sets.
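
To make those design decisions concrete, here is a minimal sketch in Python (using scikit-learn; the features, labels, and data are hypothetical, chosen only for illustration) of where a designer's choices enter: what goes into the model, what kind of model it is, and what data it learns from.

```python
# A minimal sketch of the design decisions behind a machine learning model.
# The features, labels, and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=0)

# Decision 1: what goes into the model? The designer picks two features
# (say, income and years of history) and leaves everything else out.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # the label the designer chose to predict

# Decision 2: what kind of model? Classification here, not predicting a number.
model = LogisticRegression()

# Decision 3: what data is it trained on? Any skew in this sample
# is learned and reproduced by the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```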

Now, where are these data sets coming from? Data sets come from a whole variety of sources. And I just finished reading a paper by Alex Hanna at the DAIR Institute-- which was founded by Dr. Timnit Gebru-- and Emily Denton, about how data sets are concentrated in the hands of just a handful of companies and institutions-- Stanford is one of them, Google is one of them-- and they control what goes into this data.

So we have the data: you're training your models on data, you're testing them, and then you're trying to use them against your real-world problems, your internal data. There are so many ethical issues that come into this space, right? Starting from the moment you're deciding: what problems do I choose to solve?

Am I doing something completely frivolous, which is just going to meet the needs of some rich billionaire-- the donors-- or am I doing something that's actually going to change the world? Are you using [INAUDIBLE]? And then, as the team is putting together these models: what is their rationalization? What are their biases? What are they thinking about when they're putting together these models, and what thereby seeps into them?

The data sets themselves are problematic, because the people putting together those data sets are also biased. So you see that, every step of the way, there are so many ways that biases are just seeping into your models, which is why we're seeing so many experts in the space-- who tend to be Black women, women of color, people of color-- raising the alarm, saying: these models are flawed. They're amplifying and repeating biases that exist in society.

Racist, sexist, socioeconomic biases from our society are now being amplified through these systems, which is why-- as there should be-- there's a lot more awareness, a lot more alarm bells going off in a lot of spaces, about why this is dangerous for our society, and also for democracy and humanity overall.

Jill Finlayson: I think it's really interesting to understand the importance of raising the alarm, right, the importance of explaining these problems. So recently I've been looking into this topic-- and I'm not an engineer, either-- and this is where it's important to realize you don't have to be an engineer to understand what is occurring and to play a part in improving the outcomes. I was looking at the health sector, right?

And what we have to understand, to your point, is that it's not intelligence, it's not sentient, right? It is pattern recognition. And so what we're looking at is this opportunity to ask: what is the objective? And that doesn't necessarily require technical expertise; it requires thoughtfulness and being able to identify what we are trying to measure.

And in the health sector, the example I heard was looking for who needs health care, right? Who do we need to serve? And they looked at cost as a proxy: who are we spending a lot of money on?

But it turns out a lot of people don't get the medical care they need and, therefore, don't show up in the cost-- yet they still have health issues that need to be addressed. And similar to what you were saying about where the data sets come from, it turns out that in medical science, 60% to 70% of FDA-approved products are tested on communities from Palo Alto and Cambridge, around the clinics-- basically, wealthy and privileged people.
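
As a toy illustration of how that proxy goes wrong-- a hypothetical simulation with made-up numbers, not the actual study's data-- if one group's access to care is suppressed, an algorithm that targets the highest spenders will systematically miss that group's real health needs.

```python
# Hypothetical simulation of the cost-as-proxy problem (illustrative numbers only).
import numpy as np

rng = np.random.default_rng(seed=1)
n = 10_000

need = rng.gamma(shape=2.0, scale=1.0, size=n)       # true health need, identical across groups
group_b = rng.random(n) < 0.5                        # half the population is group B

# Group B faces access barriers, so only part of its need becomes spending.
access = np.where(group_b, 0.5, 1.0)
cost = need * access + rng.normal(0.0, 0.1, size=n)  # observed cost, the proxy

# An algorithm that offers extra care to the top 10% by cost:
flagged = cost >= np.quantile(cost, 0.9)

print("share of group A flagged:", flagged[~group_b].mean())
print("share of group B flagged:", flagged[group_b].mean())
# Group B is flagged far less often despite identical underlying need.
```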

So the algorithms are being tested on people of privilege and not necessarily a diverse group of people. So when we think about who can raise the alarm, who can do that, and how do they get into these types of roles where they can influence the data and the use of data?

Mia Dand: I do want to talk a little bit more about what you just mentioned about health care. Because there are studies starting to show how harmful these systems are, because they're based on historical data. If your society is already prejudiced, and you already have a history of discrimination against people of color, poor people, people who are economically disadvantaged-- all of that is going to influence your models if you look at cost as an indicator of whether or not a person needs health care.

It completely ignores the fact that, historically, Black people and poor people-- socioeconomically disadvantaged groups-- have not had access. The empathy that's missing: when the people building these systems come from privileged backgrounds and have not faced these economic challenges, like you said, these issues are going to be overlooked. And the reason the data is coming from these very specific neighborhoods is that the schools and tech companies gathering it are based in those very high-profile neighborhoods.

So to that point-- how do you raise the alarm? How do you raise awareness? We've had a lot of brave folks do that. There have been whistleblowers.

Dr. Timnit Gebru is such a great example. She saw that these large machine learning models Google was building have an environmental impact-- these models are resource intensive, they have a large global carbon footprint. How do companies respond when these issues are raised?

Not very well, unfortunately, as we have seen with her firing, and with Margaret Mitchell's firing, from Google. There is an active effort to squelch that. And therein lies the challenge today: the power dynamic, the imbalance of power, between the tech companies with their deep pockets and the researchers they are funding.

They have a vested interest-- why kill the golden goose, right? They want to keep that going. So they're not going to raise awareness.

It's the folks who are more ethically aware, who are more ethically responsible, who are going to raise awareness. But those protections just don't exist for folks like Dr. Timnit Gebru and others. So there has been an active effort-- whether it's in the form of unionization, or legislation getting rid of the NDAs-- to protect these women.

Also, there is a tech worker handbook on how to protect yourself. Ifeoma Ozoma, who has been a leading voice in this space, has been very instrumental here: she created the free Tech Worker Handbook, which tells you how to resist, how to blow the whistle, how to protect yourself.

So there are women leading the charge who are at the forefront. And the very least we can do is invite them, invite them to events, invite them to podcasts like this one and give them a platform, give them a voice. And that's what we try to do.

A lot of the work we do with Women in AI Ethics is raising awareness, elevating voices, and giving a platform to the voices that are not generally being heard and that are, also, actively being retaliated against by the powers that be.

Jill Finlayson: So what caused you to start a nonprofit called Women in AI Ethics?

Mia Dand: Another great question. So in 2018, if you remember, the Gender Shades paper came out, from Dr. Timnit Gebru and Dr. Joy Buolamwini, who is at MIT. They recognized that facial recognition systems were being tested on light-skinned folks, so they do not work as well for folks with darker skin tones.

And I was doing some research in this space, and I realized there are so many women working in the space, and yet AI, as a tech industry, has become a space where the stereotypical AI expert, or tech expert, is always this engineer-- he tends to be white and male. That's what the media is telling us, that's what we're made to believe, that's how the system is set up. So I set about to change that.

So I published the first 100 Brilliant Women in AI Ethics list that year, which included their voices, included their names, and gave them recognition. And it raised awareness about the other 98 women working in the space. And so many people were surprised-- genuinely surprised-- like, who are these women? How have we not heard of them?

They've always been there, they didn't just show up. I did not just discover them sitting on some-- a mountaintop. They were here in the industry. They've always been there.

It's just that no one took the time to recognize them, to elevate them. Because if they were working on something that's going to make people billions, that's one thing-- everybody's going to be talking about it: oh, look, this person raised billions, or millions, right? Because that's what Silicon Valley values. That's what the media values.

But when the women say, wait a second, this is going to be harmful to people of color, this is going to be harmful to a huge segment of society, or this is not good for humanity, there are not as many media folks or people in the industry willing to hear that. So that, I feel, is what started this movement. We are seeing more of these women being invited to talk, and they've also gone on to do amazing things. Like I said, Dr. Timnit Gebru has her own institute now, and the Algorithmic Justice League is growing by leaps and bounds.

So all these women have come a long way, and we are just trying to encourage and inform the world about the next generation. There are more women. Every year we publish this list-- 100 new voices-- and we never repeat names, because there are many more women we want to recognize in this space.

Jill Finlayson: That was my favorite thing about the list. So I remember when you sent out the call. You said, hey, I'm looking for a list. I saw this panel, it was all men. I know the women are out there, send me their names.

And so I sent in the half dozen or so people I knew off the top of my head, and I'm like, here are some great people. But I believe, and correct me if I'm wrong, that you crowdsourced over 100 people in the first 24 hours. It was very fast.

And so this idea that you can't find women and diverse people to speak on these topics, I think, was kind of blown out of the water just by the rapid speed with which it came together. How many people are on the list today?

Mia Dand: We have an online directory, which has kept growing-- we have at least 700 names, and every day we are adding more. We just put out a call because we will be publishing the next list in December, like we always do. So we do have a call out right now.

So it keeps growing, and that's exactly to your point. They are out there; all they need is for people to recognize them and let their voices be heard. That's all they're asking for.

They're doing all the hard work. Our job is easy. We have to just listen and invite them, right? But easier said than done.

Jill Finlayson: So how do we help people, let's say, get on that list? Let's talk about nontraditional paths into data jobs. What are some of the ways-- if you are a social science or humanities major, or maybe you haven't gotten your college degree, or you have an AA from a community college-- that you can move into this space?

Mia Dand: I love this question because it's something that, as an outsider, I have been working on. I feel like my entire life has been leading up to this moment where I can say: yes, you, too, have a stake. You belong here, your voice matters. Technology, the tech industry, and AI are for you, as well.

I've been doing a series of interviews over the years, and it's just been fascinating. That's my favorite part of the work I do-- talking to women who are working in this space, looking at where they came from, what their backgrounds are.

A fascinating interview I had recently was this conversation with Beth Rudden. She worked at IBM in an AI role-- I think now she's with another company-- but her background is as an archaeologist and anthropologist.

And her engineering colleagues would come to her all the time for-- they call it the squishiness-- because she adds that human dimension to what they're doing. Traditionally, you hire engineers and computer scientists, and it's all about getting the job done. But you need that human lens, the human-factors lens, which asks: how does this impact society?

How does it impact humanity? What about these people who don't look like you, who don't come from the same background as you? Because we have to address the fact that people's biases come from, also, their lived experiences. And when you don't include these women in these spaces, it's very hard to avoid some of these blind spots. So that was one example.

And then I talked to Michelle Carney-- I do want to introduce her to you-- who works at Google. When we interviewed her, she mentioned how she kept getting rejected for jobs because her background is as a neuroscientist, and she works on machine learning and user experience design.

Every time, the hiring managers were like: obviously, you can't do both very well if you're doing both. And she didn't have a computer science degree, but she does amazing work designing responsible machine learning systems. She also teaches at the Stanford d.school. So there are so many different pathways in.

And I find there's a lot of value when people leverage their existing backgrounds, their expertise, their multidisciplinary experience. Because there seems to be a notion that if you want to get into this space, you have to forget everything else and just focus on data, focus on computer science. You have to be a programmer.

But there are so many folks with statistical backgrounds who have been working in data analysis-- the data analyst role has been around for a while, because data is not new. It comes in different forms, we have better tools, and we have more powerful machines. But it has been around.

Folks who work in governance, compliance, risk, and legal in regulated industries-- especially super-critical ones like financial institutions-- have been working on making sure that the products and services they offer are not discriminatory. And we are seeing that machine learning algorithms in those regulated spaces, in the financial industry, are showing a lot of the same biases and discriminatory traits.

I was also looking at folks from product management backgrounds, because developing these systems-- building these models and scaling them-- requires those skills. And last but not least, business backgrounds, right?

You need someone who can translate a lot of this jargon into real, customer-centric language. Because there are times I sit in some of these meetings, listening to these folks talk, the experts talk, and I honestly have to refer to a dictionary. I'm like, what did you just say?

And I consider myself fairly well-read. Yet if I'm struggling, imagine somebody else-- the client trying to figure out: OK, is this the right system for us? Should we adopt it? What should we know about it?

And they have no idea, because this is all jargon. So we need someone who can translate the jargon, the industry speak, into customer speak-- someone who can relay these concepts easily and highlight some of these challenges. So I do feel there is also an opportunity for folks who come from a more business background.

Jill Finlayson: Amazing. So: data analyst, governance, compliance, legal, business, product management, and-- especially on the politics side-- regulation, informed policy, and legislation. Those are different types of jobs. Who are the employers? Who's hiring these people? And is there an entry-level job that people should be thinking about?

Mia Dand: I've been reviewing a lot of job descriptions. I also have a newsletter that goes out every week, and wherever I can find great opportunities in ethics, I highlight those. I feel like policy is a huge one-- public interest, for people who want to work in the public interest.

But I would say the baseline is understanding the concepts-- you still have to come in with a basic understanding. You don't have to be a programmer, you don't necessarily have to be a statistician, but you need to be able to understand how these systems work, how these models work. And yes, there are entry-level jobs for those, but I do feel we still have a lot of work to do in education in this space.

Because I've seen so many job descriptions requiring PhDs and so many years of experience for a field like this. I feel like it's unfair to raise the bar so high-- you need a PhD, or a Master's, or some advanced degree to work in this space-- when for a lot of data scientists, the job is literally wrangling data, cleaning data. Because clean data does not exist.

And I think we need to acknowledge that if you're bringing in someone with advanced skills, the assumption is they will get to work on advanced problems. That is not the case, and people are not admitting it. The last thing you want is someone super qualified, super talented, coming in, sitting there, and saying: I don't have clean data.

And you're spending all your time trying to figure out how to make the data from different silos work together, how to clean it and organize it. So I feel like there needs to be some level setting on the part of employers, as well, so that when they say, oh, we can't find qualified people, they understand that maybe their expectations are not realistic given the work they want people to do-- and given the reality of what it actually takes to get that advanced degree, because your pool automatically shrinks.

So it's not a real shortage. I feel like it's a manufactured problem.
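
For a sense of what that day-to-day wrangling actually looks like, here is a minimal sketch in Python with pandas-- the file name and column names are hypothetical-- covering the unglamorous basics: deduplicating merged records, normalizing inconsistent labels, and handling missing values.

```python
# A minimal sketch of everyday data cleaning with pandas.
# The file name and column names here are hypothetical.
import pandas as pd

df = pd.read_csv("customer_records.csv")

# Drop exact duplicate rows, which pile up when silos are merged.
df = df.drop_duplicates()

# Normalize inconsistent labels, e.g. "ca ", "CALIFORNIA" -> "CA".
df["state"] = df["state"].str.strip().str.upper().replace({"CALIFORNIA": "CA"})

# Handle missing values: fill numeric gaps with the median, flag the rest.
df["age"] = df["age"].fillna(df["age"].median())
df["email_missing"] = df["email"].isna()

# Standardize dates so downstream models see one consistent format.
df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")

df.info()
```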

Jill Finlayson: So say more about how we can democratize the space, or lower the barriers. Are there bootcamps? Or, to your point, do companies need to create different roles that onboard people into these data cleaning jobs before expecting the sun and the moon?

Mia Dand: Absolutely-- all of the above, right? So one is realistically looking at what a person in a certain role actually does. We are doing some of that. We're looking at day-to-day activities.

When you say that's a data scientist role, or a researcher role-- what do you actually do? What does the work entail? So that's one.

More transparency around the actual roles and what people do. And the entry-level jobs, like you said-- companies need to be open to opening up those roles, whether it's internships, externships, or just a project you're sponsoring, not just with universities but also with nonprofit institutions. Give people that opportunity. And thinking outside the box, because these are not things companies think of. They just assume there's this whole pool of candidates: we have the power, we're just going to put the job out there, and we can pick and choose who we want.

And I don't think that's the right approach. Democratization really comes from lowering the barriers from an education perspective. AI literacy and education, I feel, is where Women in AI Ethics is focused, and my company LightHouse 3 is also focused on that.

It's: how do we simplify a lot of this education around AI for people who want to leverage their existing expertise and translate it over? What are those skill sets that you can easily transfer? And we talked about some of them.

Like, how can an anthropologist find a role here? What are some of those skill sets? And then also looking at how you can translate jargon from scientific and technical backgrounds, because sometimes even that's done intentionally, right?

If you want to keep people out, how do you do it? You just make it more complicated. How about we simplify it so that people actually understand, so they can ask the right questions-- and then they're more useful to the company, because they can sense these issues right up front. Because I feel like nontechnical disciplines just do not have access to the information, and we need to break down the silos and make it more accessible to them, using terminology that they are comfortable with.

Like I said, some of the bridges are already being built between compliance and auditing, AI auditing, right? You wouldn't have thought that maybe five years back, but it's a real thing today. AI algorithmic auditing is a thing.

And people who have been doing regular audits are very familiar with how that's done. So why not leverage some of that? So I feel like there is an opportunity to build those bridges, not create more walls, for people who want to enter this space.
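
As a concrete, if simplified, illustration of what an algorithmic audit can involve, here is a minimal sketch in Python of one standard compliance-style check-- the "four-fifths" (80%) rule for disparate impact-- applied to hypothetical model decisions and group labels.

```python
# A minimal sketch of one algorithmic-audit check: the four-fifths (80%) rule
# for disparate impact. The decisions and group labels here are hypothetical.
import numpy as np

rng = np.random.default_rng(seed=2)
n = 5_000

group = rng.choice(["A", "B"], size=n)             # protected attribute
# Hypothetical model decisions, skewed against group B for illustration.
approve_prob = np.where(group == "A", 0.30, 0.18)
approved = rng.random(n) < approve_prob

rate_a = approved[group == "A"].mean()
rate_b = approved[group == "B"].mean()
ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

print(f"approval rate A: {rate_a:.2%}, B: {rate_b:.2%}")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:
    print("Fails the four-fifths rule: flag for review.")
```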

Jill Finlayson: I really want to echo and amplify one thing that you said, and it comes from this idea of the anthropologist. I actually thought of Genevieve Bell from Intel, who was one of the first anthropologists hired by a tech company. The core expertise she brought to that company, and the lens she brought, was so valuable.

And I think this idea of cross-disciplinary teams-- bringing in people's deep knowledge and expertise, their understanding of problems in different sectors-- will make the products so much better, right? So I just wanted to echo that point that people bring super valuable skills. And if we can now layer in knowledge around data science and artificial intelligence, they can be powerful allies in developing really inclusive products.

People should realize it's not that they lack value-- they're not coming from a deficit mindset. They're coming from an asset mindset: I have this core competence, you need access to this core competence, and if I have this small amount of training, I will be able to bring that value to you.

So I think that's something that's really exciting. As we look at this going forward, and at this opportunity, how can people think about how data can help them do their job? How can it help them architect a career path?

Mia Dand: There is so much opportunity out there. It's just a question of: what is your passion, and what is your core competency? What are you really, really good at, and what do you really want to do with that skill set? Some people are very good at analysis.

They can really use data to glean insights, look at patterns, make predictions-- that's their thing, that's what they love doing. Others may be better equipped to use data to tell a story, or to relay information that supports their assessment, their hypothesis. They're testing something-- maybe they're more research-oriented.

Or they want to build models and test them, and they just want to get their hands dirty. They love going into the systems and asking: how can this data be better? How do we gather data in an ethical way? How do we think about consent and privacy? How do we develop policy, maybe?

There are so many different facets to it, which is what I find so fascinating about this space. It's just about us having a more open-minded approach to it. And I feel that sometimes, with educational institutions, the tech companies, the corporate world-- how they've structured these problems, these opportunities-- it puts us in a box. And I feel like we have to take back that power.

We should individually take it back and rethink: what do we bring to the table? What value does that add? And then use data to support that.

Because I feel like the opportunities are endless. I feel like it should all start with what we want to do with it.

Jill Finlayson: So being very intentional about learning, and about understanding different perspectives. Do you have a few favorite books for people who want to better understand both the challenges and the opportunities for data science and AI?

Mia Dand: Absolutely. I have a list of 100 books that I also publish, fiction and nonfiction. I love my list. I love publishing these lists. It's on my Twitter account if anybody wants to look at it-- miad.

But Virginia Eubanks's Automating Inequality-- I cried through that book, I'll be honest. Just the lens that she provides-- she said even she was crying as she was writing it-- I feel like it really opens your eyes.

Safiya Noble's Algorithms of Oppression. Right now, I have Octavia Butler's book. I'm reading a lot of nonfiction, as well. So I have five or six books open right now in Libby.

But a lot of them-- I'm looking at fiction and nonfiction books by Black women, by activists in this space, because AI might seem like a new thing. We talk about artificial intelligence as this bright new shiny object, but a lot of the issues that we face-- that humanity faces, that our society faces-- come from the times of slavery, or even predate it, and they are still stemming up today. So having a more historical perspective, I feel, is also extremely helpful.

Jill Finlayson: I love it. We'll share the list. I would add to the list Invisible Women and Weapons of Math Destruction. Those are two of my favorites.

And I agree with you on this idea that data can shine a light on historical inequalities, and that can help us bring change, right? If we have the data, we have transparency, and we can actually be more intentional about inclusion and about fixing some of the offline problems that are out there.

So I want to thank you so much for joining us, Mia. Any last words for our audience?

Mia Dand: No-- go forth and just change the world and make it a better place. I feel like we all have the power, and we're just getting started on this journey, so just don't hold back. And this is perfect timing to end on, because my little research assistant, my little cat, just jumped into my lap. He says, more power to you all.

So we're ending on a very, very good note. So I appreciate you inviting me. Thank you so much. It is a critical topic, and I look forward to continuing this conversation.

Jill Finlayson: And thank you, also, for your work and leadership in this area, and your willingness to create uncommon partnerships. I know you're heading off to work in Copenhagen and to build a global community around these issues. So I really appreciate that it takes people to lead and to push for the removal and lowering of these barriers, and for democratizing access to these roles in technology. Thank you so much.

Mia Dand: Appreciate it. Thank you so much, Jill.

Jill Finlayson: And with that, for our audience, I hope you enjoyed this latest in our long series of podcasts that we send to you every month. Please share with friends and colleagues who may be interested in taking this Future of Work journey with us. And make sure to check out extension.berkeley.edu to find a variety of data courses to keep you up to date. And to see what's coming up on EDGE In Tech, go ahead and visit edge.berkeley.edu.

Thanks so much for listening, and I'll be back next month to talk about the future of remote work. With more staff interacting solely through digital means, does the loss of face-to-face interactions mean fewer opportunities for networking, or more creative collaborations and impromptu hallway meetings? Does it mean realistic work-life balance? Does it promote flexibility or increase productivity?
We'll find out next time. Until then.

[MUSIC PLAYING]