David Kurtenbach

Hiring Biases with AI Screening Platforms

Updated: Nov 12, 2021

Bonnie Marcus (2021) recently authored an article for Forbes online that tackles some of the ethical issues AI brings into the hiring process. Her article, "The Use Of Artificial Intelligence In Business Codifies Gendered Ageism. How Do We Fix It?", discusses how AI can reinforce bias and finds that older women are disproportionately overlooked for employment opportunities.




"The call back rate for older women compared to their younger female counterparts was significantly lower despite the fact that the only difference in the resumes was their age." (Marcus, 2021)




Many companies are turning to AI platforms to help screen applications. Doing so saves businesses time and money and reduces the workload of reviewing all of those applications. These AI tools don't just match keywords and past job history; they take a further step:


"An example is looking at applicants and predicting their performance or likelihood of attrition, or recommending edits to the job description that will make it less biased toward people earlier in their career or certain characteristics that will more likely be represented in certain genders or under-privileged groups. There is great potential for the bias to interfere with the process." (Marcus, 2021)


AI platforms built by engineering teams with little diversity instill those engineers' biases into the screening platform, creating an exclusionary environment without anyone ever really meaning to.

The problem is evident, and it unfairly judges older women, but what can be done to counter it? Stela Lupushor, founder of Reframe Work, said that "Companies must examine all HR processes and understand the disparate impact and exclusion that might happen as a result of poorly designed processes or poorly executed policies." (Marcus, 2021)


Lupushor points out that companies need to involve a diverse group of people in developing hiring practices and policies. Companies creating AI screening software need to build in inclusivity by involving all types of people in development. Marcus also touches on a core concept in AI: a model is only as good as the data used to build it. The same issue was a hot topic when predictive policing platforms started to become more widely used. If a training set is based on biased data, it will only produce biased results. This needs to be considered when developing AI tools; if programmers do not take steps to counter historical bias in their data, they will only exacerbate the problem and create models that exhibit the same prejudices.

Marcus's article not only does a good job of addressing and explaining the issue, it also presents solutions in an easily digestible way.
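To make the biased-training-data point concrete, here is a minimal sketch in Python. The article contains no code, so the data, coefficients, and scikit-learn setup below are all hypothetical. The sketch fabricates historical callback decisions that penalize older applicants, trains a model on them, and shows the model reproducing that penalty for two equally qualified candidates.

```python
# A minimal sketch (synthetic, hypothetical data) of how a model trained on
# biased historical hiring decisions reproduces that bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Synthetic applicants: a qualification score and an age.
qualification = rng.normal(0.0, 1.0, n)
age = rng.integers(25, 65, n)

# Biased historical labels: past recruiters called people back based on
# qualification, but systematically penalized older applicants.
logit = 1.5 * qualification - 0.06 * (age - 25)
callback = rng.random(n) < 1 / (1 + np.exp(-logit))

# Train a screening model on the biased historical outcomes.
X = np.column_stack([qualification, age])
model = LogisticRegression().fit(X, callback)

# Two applicants with identical qualifications, different ages.
young, old = [[1.0, 30]], [[1.0, 60]]
print("P(callback | age 30):", model.predict_proba(young)[0, 1])
print("P(callback | age 60):", model.predict_proba(old)[0, 1])
# The model has learned the historical age penalty, so the older applicant
# scores lower despite being equally qualified.
```

Nothing in the model's code mentions age bias; the prejudice arrives entirely through the labels it is trained on, which is exactly why auditing the data and the people who produce it matters.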
