We are in the midst of some very significant changes in our workforce, jobs and management.
In our recent event “The New HR,” our panelists discussed answers to pressing questions such as:
- "Is virtual the new workplace?"
- “Why should we be having conversations about race and social justice in the workplace, and how do you do that?
- “What role can automation and AI play in making HR more efficient, effective and equitable?”
Valerie Williams, Founder and Managing Partner at Converge Firm
Valerie Williams is a diversity champion and community builder with more than 14 years of experience advocating for fairness and equity across industries. Most recently, she served as Global Head of Inclusion & Diversity (I&D) at Stripe, leading both internal diversity efforts and designing strategic partnerships emphasizing global economic access through entrepreneurship. Prior to Stripe, Valerie helped create the internal Diversity and Belonging (D&B) programs at Airbnb and advised on strategy for external programs aimed at preventing discrimination on the platform and promoting access for communities of color. She has extensive executive, business and technical recruiting experience at Google, Airbnb and Russell Reynolds, as well as four years of supply chain and operations experience at Hewlett Packard.
John O'Duinn, Adviser and Mentor at Release Mechanix, LLC
Author, Distributed Teams: The Art and Practice of Working Together While Physically Apart
John has written code and led teams in organizations ranging from 4-person startups to non-profits to multinationals, including at the U.S. Government as part of the U.S. Digital Service in the Obama White House. In addition to technology, he loves growing a culture where diverse groups of humans work well together in a distributed global workforce. John has worked in distributed teams for 27 years, led distributed teams for 14 years and coached/mentored fast-growing distributed companies for six years. In 2018, he also helped write the State of Vermont's "Remote Worker" law.
Brandie Nonnecke, Ph.D., Founding Director at UC Berkeley's CITRIS Policy Lab
Fellow, Harvard Carr Center for Human Rights Policy
Brandie has served as a fellow at the Aspen Institute’s Tech Policy Hub and at the World Economic Forum on the Council on the Future of the Digital Economy and Society. She was named a 2018 RightsCon Young Leader in Human Rights in Tech and received the 2019 Emerging Scholar Award at the 15th International Common Ground Conference on Technology, Knowledge and Society. Her research has been featured in Wired, NPR, BBC News, MIT Technology Review, PC Mag, Buzzfeed News, Fortune, Mashable and the Stanford Social Innovation Review. Her research publications and op-eds are available at nonnecke.com. Brandie researches human rights at the intersection of law, policy and emerging technologies, with her current work focusing on issues of fairness and accountability in AI and on computational propaganda and digital harassment campaigns on Twitter. She has published research on algorithmic decision-making for public service provision and outlined recommendations for how to better ensure applications of AI support equity and fairness. She also investigates how information and communication technologies (ICTs) can support civic participation, improve governance and accountability, and foster economic and social development. Her research on the impacts of collaborative filtering for development program evaluation received the 2015 IEEE Global Humanitarian Tech Best Paper Award and was featured in the Stanford Social Innovation Review.
Moderator: Jill Finlayson, Director, Women in Technology Initiative, University of California
Jill Finlayson is Director of the Women in Technology Initiative at UC Berkeley, co-sponsored by CITRIS and Berkeley Engineering, which supports research and initiatives to promote the equitable participation of women in the tech fields. Prior to Berkeley, she led mentorship and developed incubator and accelerator programs for Singularity University Ventures, whose mission is to increase the number of impact-focused tech startups. Finlayson ran the Toys category for five years at eBay, and authored Fundraising on eBay. She managed a community of social entrepreneurs at the Skoll Foundation; led marketing at various startups; and consulted for the World Bank, Gates Foundation and Ford Foundation. Finlayson also judges and coaches founders of global startups for several startup competitions. She has mentored for TechWomen.org for the past nine years and participated in delegation trips.
On Managing Distributed Teams
COVID-19 has dramatically changed how and where people work. Is this a one-off change? Will we return to our workplaces?
John: It's certainly accelerated the change. But companies were already trending toward this. People's tenure at jobs has gotten shorter, and that's been going on for a while. A lot of the companies I was working with before COVID were concerned about how they would deal with starting an office and then realized they didn't need one.
And even if they try to go back, is that going to even be possible?
John: Regardless of what somebody says in terms of health rules, when would you be comfortable going through crowded public transit to get into a crowded elevator to go to an open-plan office with recycled air conditioning? And whenever that is, you can then figure out how long it's going to be before people are comfortable with going back to an office. And if you require it, then you may find you have a retention problem.
Be aware that this is a new and disruptive time for everybody.
What are two ways that HR professionals could start to think about and change the practices around managing distributed teams?
John: The first thing you can do is actually very simple: Be aware that this is a new and disruptive time for everybody.
I've had some conversations where staff are complaining about not having the proper equipment at home. Their company-paid-for equipment is sitting at a desk in an office gathering dust because no one's allowed in the building. So let people sign out that equipment so they can do their jobs.
Number two, if you're hiring, actually say “remote welcome” in the job description. Just by changing that, I typically see well over 10 times the number of applicants.
In the past, companies would often avoid taking a position on social or political issues. They were perceived to be not-work-related. Now there's an expectation that companies are good citizens and equitable and caring employers. How has this shift changed HR and the skills leaders need?
Valerie: There is a need to have HR leaders—all executives—develop these skills in a way that was previously not required. Employees are holding leaders responsible, and they want to see you take a clear stand. It's not enough to just have a statement or write a check and give a donation.
If you're going to have these bold statements, how can you make sure you're holding yourself accountable to your own systems and look at your own selves and your own structures? How do you make sure that you're not perpetuating some of these dynamics that are playing out in society?
We also saw Black Lives Matter appear all around the world. We have teams that are working in different countries.
Valerie: Oppression and racism are not just U.S. issues. Our clients have employees outside of the U.S., so inclusive leadership requires a global mindset.
And a lot of the work is around helping people understand the importance of cultural competencies to meet the growing demands. How do we meet people where they are, adjusting conversations and training as needed, while applying tactics that are locally relevant and that honor the fact that oppression is global?
What we found around Black Lives Matter is that Black employees are saying, “Hey, this is what you should be doing.” But leaders are still hesitant to extend their privilege and allow their Black employees to lead at this time.
And you have to. This has to be a co-creation process. And it has to be something that's not just top-down, but is also driven from employees and by employees. So you have to give that power away in order to start to dismantle some of these inequities.
You need to take a close examination of your organizational structures—systems, hiring practices, promotional practices—to see how you are perpetuating racism in your workplace.
What are two things that leaders or HR professionals should be doing? Where can they start?
Valerie: The first place to start is self-examination, in terms of understanding your own individual journey around these concepts and committing to your individual journey to become an inclusive leader. You're staying up-to-date on what's happening around the world. And you're actively trying to maintain this mindset within your workplace.
Then as HR leaders, you need to examine your systems. In my class, I feature Aubrey Blanche, the director of equitable design at Culture Amp. Her job is to look at the systems in place that might perpetuate bias.
HR leaders and executives and all leaders within organizations: You need to take a close examination of your organizational structures—systems, hiring practices, promotional practices—to see how you are perpetuating racism in your workplace. How are you perpetuating these higher-level inequities in your workplace?
With the intention of further diversifying the workforce in many companies, where do we find talent that fits? Are there recommended sources beyond the well-known social network platforms like LinkedIn and Indeed?
Valerie: I had a little bit of a reaction to the word "fits.” I think we have to move a little bit away from cultural fit and having people fit into an organization, and move toward creating an environment where everyone can be their authentic and true selves. And if you're looking for someone that fits into your organization, you're probably creating an environment that honors a certain type of leadership style. But what you want to do is create the environment where everyone can feel like they belong.
But how do you do that? I think you can think creatively now. Who are the strategic partners who could make sense for you?
One example: For people looking for Black developers, there is an organization called /dev/color. You can develop long-term relationships with these organizations. You have to invest in relationship-building with communities that are not represented in your workplace in order to gain the benefits of being part of that network.
Most executives are motivated to do the right thing and be inclusive, but they still make mistakes. How do we support capacity-building?
Valerie: We've been seeing success with courageous conversations: moving away from one-off training toward more frequent opportunities for ongoing learning by intentionally creating these moments.
Brandie: There's a book by Scott Page called The Difference: How the Power of Diversity Creates Better Groups, Firms, Schools, and Societies, in which he finds that diverse teams are better equipped to solve complex problems than teams with a much higher cumulative IQ. So diversity is critical.
On HR and AI
What is the role of AI in HR?
Brandie: We're seeing increased implementation of AI here because AI-enabled HR tools can create significant gains in efficiency, effectiveness and equity. In hiring, we've discussed the shifting labor trends where individuals don't stay in a position for more than two and a half years. We're seeing large firms being inundated with applications. So how do you efficiently and effectively sift through those applications to identify candidates with the right skills and fit?
Machine-learning-powered tools can be developed to review and prioritize applications for interviews. However, if these tools are not carefully designed and implemented, they risk reinforcing bias and discrimination.
But it is possible to develop HR tools that don't perpetuate bias and discrimination, or that at least mitigate them to the greatest extent possible. One example is pymetrics. I think it's a really good example of a company that is trying to develop AI-enabled HR tools with diversity and inclusion at the core of the technology. And they take every step possible to keep bias from creeping into their tool throughout the entire lifecycle.
Jill: At the same time, human judgment shaped by unconscious bias isn't doing a great job either. The real-world processes are flawed, and it's really hard to shift people's behavior. But you can shift how an algorithm works. So with the correct testing and auditing, we could be looking at something that could be very, very helpful.
Brandie: Textio helps you identify gendered terms and language in your job postings so you can correct those words before you put your job application online.
Jill: And these companies are doing some of the research for us on ways to make job descriptions more equitable. If somebody has five years of cybersecurity experience versus four years of cybersecurity experience, are you really going to count them out at four years? And I think the technology can nudge people who are writing these descriptions to achieve their goal, which is to hire the best candidate for the job.
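The kind of nudge Jill describes can be sketched in code. This is a minimal illustration of the general idea behind gendered-language screening, not Textio's actual method; the word lists and function names here are hypothetical examples.

```python
# Illustrative sketch: flag potentially gendered terms in a job posting.
# The word lists below are hypothetical examples, loosely inspired by
# published research on gender-coded language in job ads.
import re

MASCULINE_CODED = {"ninja", "rockstar", "dominant", "competitive", "aggressive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative"}

def flag_gendered_terms(posting: str) -> dict:
    """Return gendered terms found in the posting, grouped by coding."""
    words = set(re.findall(r"[a-z]+", posting.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

posting = "We need a competitive rockstar developer who is also collaborative."
print(flag_gendered_terms(posting))
# {'masculine': ['competitive', 'rockstar'], 'feminine': ['collaborative']}
```

A production tool would use far richer models and curated lexicons, but even a simple check like this can prompt a hiring manager to reconsider word choices before a posting goes live.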
Brandie, when we look at ethics and AI, we're not the only ones developing it. What are you seeing?
Brandie: The European Union is clearly a leader in setting guidelines on ethical technology development, especially around data protection and privacy with the EU's General Data Protection Regulation (GDPR). The EU has singled out AI-enabled hiring as a high-risk area for bias and discrimination, and it's currently considering oversight in this area.
There must be a conscious effort to create an inclusive environment for considering and mitigating the harms of these tools.
If HR staff are going to develop AI in their company, what should they be doing? How do they even approach this topic if they're not well-versed in how AI works?
Brandie: In my seminar, I go through two activities that companies can implement into their development or procurement processes to better ensure the mitigation of unintended negative consequences of AI-enabled HR tools. Both of the processes are grounded in this understanding that there are two fundamental principles that leaders and HR professionals should follow in order to mitigate harm.
First, there must be a conscious effort to create an inclusive environment for considering and mitigating the harms of these tools: for example, creating diverse teams to develop and test these tools, and enabling users or those affected to report glitches, inconsistencies or red flags of discrimination.
Second, there must be a commitment to operationalize ethical AI principles.
There are many ethical AI principles. But I'll highlight the core ones, which are fairness, accountability and transparency, and operationalizing these into the technical design and governance around these tools. And you really need buy-in from the very top to ensure that you're implementing these governance processes throughout the entire development of those AI-enabled HR tools.
AI is focused on averages and not the individual. How could AI capture intersectionality?
Brandie: How do you capture the edges of that bell curve? I think it's evaluating your tool with a consciousness that there are people on the edges who are outside of the norm of the distribution. And you'll often see this with tools that have been trained on data sets that are not inclusive of the entire population of people who are applying for a job, especially if that profession tended to have, perhaps, primarily white male applicants in the past.
You need to make sure the model is tweaked to account for those edge cases, and actually build in a way to bring them to the foreground.
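Auditing outcomes by subgroup is one concrete way to surface these gaps. The sketch below, with hypothetical data and group labels, applies the "four-fifths rule" heuristic from U.S. hiring guidance to a screening tool's selection rates; it is an illustrative audit, not a complete fairness evaluation.

```python
# Illustrative sketch: audit a screening tool's selection rates by group
# using the four-fifths rule heuristic. Data and groups are hypothetical.

def selection_rates(outcomes):
    """outcomes: list of (group, selected) pairs -> selection rate per group."""
    totals, selected = {}, {}
    for group, was_selected in outcomes:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(was_selected)
    return {g: selected[g] / totals[g] for g in totals}

def passes_four_fifths(rates):
    """Heuristic: lowest group rate should be at least 80% of the highest."""
    lo, hi = min(rates.values()), max(rates.values())
    return lo >= 0.8 * hi

rates = selection_rates([
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
])
print(rates)                      # {'group_a': 0.75, 'group_b': 0.25}
print(passes_four_fifths(rates))  # False (flag for review)
```

A failing check doesn't prove discrimination, but it flags the tool for the kind of closer review Brandie describes, ideally by a diverse team with access to the training data.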
What's the role of AI in addressing some of these inequities and mitigating bias in the data sets? What about potential data-security issues with AI platforms used in HR?
Brandie: There is some oversight from legislation. Illinois passed the Artificial Intelligence Video Interview Act, recognizing that AI is increasingly being used in video interviews to assess candidates' competency or enthusiasm for a job based on facial recognition and affect recognition.
This obviously raises significant bias and discrimination issues, because we know that facial-recognition systems have a propensity to be biased against darker skin tones, and especially against Black women. So this is very problematic if these tools are used to assess Black women candidates for a position.
The law says that the videos collected cannot be shared with third parties. It also gives individuals the right to request that their recorded video be deleted.
Are there some resources for people to digest and understand HR and AI?
Brandie: The CITRIS Policy Lab has compiled a list of resources around inclusive AI. Also, check out the seminar that I've developed. It's a great primer on the risks and opportunities of using AI in HR work.
Jill: Our colleagues at the Center for Equity, Gender and Leadership at the Haas Business School released a playbook on mitigating bias in AI. It speaks to a lot of the best practices and questions that you should be asking.
Watch the entire video panel.
Interested in learning from these HR professionals?