AI Recruiting Tools For Companies Like LinkedIn Have Racial Bias

Tyler Cross
Senior Writer
Published on: November 22, 2024

AI tools have been adopted by companies around the world, including job websites like LinkedIn, but researchers have found that these AI-powered recruitment tools can exhibit racial bias.

University of Washington researchers tested eight large language models (LLMs), including two of OpenAI’s popular ChatGPT models. The team generated nearly 2,000 conversations using each model’s default settings, without any special prompts that might predispose the models toward particular responses.

The tests focused on four professions: software developer, nurse, teacher, and doctor. The researchers created a variety of mock applicants and compared the models’ responses to each, including Indian applicants associated with different castes, to see how the models would treat individuals from different races and castes.
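
To make the setup concrete, here is a minimal sketch of how such an experiment could be run. This is not the study’s actual code: the prompt wording, the identity examples, and the model name are assumptions for illustration, and the sketch presumes the official openai Python client.

```python
# Illustrative sketch only (not the study's code): generate hiring-discussion
# prompts that vary just the applicant's identity, then query a model at its
# default settings. Requires the `openai` package and an OPENAI_API_KEY.
from itertools import product

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROFESSIONS = ["software developer", "nurse", "teacher", "doctor"]  # from the study
IDENTITIES = ["a Brahmin applicant", "a Dalit applicant"]  # hypothetical examples


def build_prompt(profession: str, identity: str) -> str:
    # Each prompt asks the model to role-play a hiring conversation; only the
    # identity phrase changes between otherwise identical prompts.
    return (
        f"Two colleagues are discussing whether to hire {identity} "
        f"for a {profession} position. Write their conversation."
    )


conversations = []
for profession, identity in product(PROFESSIONS, IDENTITIES):
    # No temperature, system prompt, or other steering parameters are passed,
    # mirroring the paper's use of default model settings.
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": build_prompt(profession, identity)}],
    )
    conversations.append(response.choices[0].message.content)
```

The generated conversations would then be scored for harmful content, which is where the researchers’ harm metrics come in.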

Measuring harm can be subjective, so the researchers created a framework to define exactly what they mean by harm.

“These studies typically investigate ‘harm’ as a singular dimension, ignoring the various and subtle forms in which harms manifest. To address this gap, we introduce the Covert Harms and Social Threats (CHAST), a set of seven metrics grounded in social science literature,” the published paper reads.

Overall, they found that 69 percent of the conversations about caste and 48 percent of the conversations overall produced harmful messages or content, primarily centered on an individual’s race.
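
The aggregate figures themselves are simple arithmetic: the share of conversations flagged as harmful out of all conversations in a group. The sketch below shows that computation with hypothetical per-conversation labels; the flags here are placeholders, not the study’s data.

```python
# Illustrative arithmetic only: how aggregate harm rates like the reported
# 69% (caste conversations) and 48% (overall) are derived from
# per-conversation harm labels. These labels are hypothetical placeholders.
def harm_rate(flags: list[bool]) -> float:
    """Fraction of conversations flagged as containing harmful content."""
    return sum(flags) / len(flags)


caste_flags = [True, True, False, True]         # hypothetical caste-conversation labels
all_flags = caste_flags + [False, True, False]  # caste plus non-caste conversations

print(f"caste harm rate:   {harm_rate(caste_flags):.0%}")
print(f"overall harm rate: {harm_rate(all_flags):.0%}")
```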

The data shows that AI tools can carry a strong racial bias and can rely on misinformation and stereotypes when making decisions. Companies like LinkedIn therefore need to use these tools carefully and guard against accidental racial profiling during the hiring process.

“Our hope is that findings like these can inform policy,” said co-lead author Hayoung Jung from the Paul G. Allen School of Computer Science and Engineering. “To regulate these models, we need to have thorough ways of evaluating them to make sure they’re safe for everyone. There has been a lot of focus on the Western context, like race and gender, but there are so many other rich cultural concepts in the world, especially in the Global South, that need more attention.”

About the Author

Tyler Cross, Senior Writer

Tyler is a writer at SafetyDetectives with a passion for researching all things tech and cybersecurity. Prior to joining the SafetyDetectives team, he worked hands-on with cybersecurity products for more than five years, including password managers, antiviruses, and VPNs, and learned everything about their use cases and functions. When he isn't working as a "SafetyDetective," he enjoys studying history, researching investment opportunities, writing novels, and playing Dungeons and Dragons with friends.
