Recruitment: AI software ‘could penalise candidates with disabilities’

HR personnel should exercise caution when using such software in their work to minimise the risk of discrimination, experts warn.

Whether for drafting job offers or sorting applications, artificial intelligence tools are making increasing inroads into human resources. But this software can be biased: a US study has found that generative AI can discriminate against people with disabilities, based on their resumés.

Researchers from the University of Washington discovered this after conducting an experiment in which ChatGPT-4 was asked to assess resumés enhanced with details signalling that their authors were disabled workers. In one case, for example, the resumé indicated that the applicant had won a scholarship specifically for people with disabilities.

The researchers ran these resumés several times through ChatGPT-4, comparing them with an original document in which there was nothing to suggest that the applicant had a physical or mental disability. The aim was to determine which of these profiles was the most suitable for a research position to be filled at an American software company.

It turned out that, across 60 attempts, OpenAI's chatbot ranked the modified resumés as the better match for the vacancy in only 25% of cases.
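
To give a concrete sense of the set-up, the sketch below shows how such a head-to-head comparison could be run against the OpenAI API. It is an illustration only, not the researchers' actual code: the file names, prompt wording and model identifier are assumptions.

```python
# Minimal sketch of the ranking experiment described above.
# Assumptions: the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and "original_resume.txt" / "enhanced_resume.txt" are hypothetical files
# holding the control resumé and the disability-enhanced resumé.
from openai import OpenAI

client = OpenAI()

with open("original_resume.txt") as f:
    original = f.read()
with open("enhanced_resume.txt") as f:
    enhanced = f.read()

PROMPT = (
    "You are screening applicants for a research position at a software "
    "company. Which of the two resumés below is the better match for the "
    "role? Answer with 'A' or 'B' only.\n\n"
    f"Resumé A:\n{original}\n\nResumé B:\n{enhanced}"
)

RUNS = 60  # the study repeated the comparison 60 times
enhanced_wins = 0

for _ in range(RUNS):
    response = client.chat.completions.create(
        model="gpt-4",  # assumed model name; the study used GPT-4
        messages=[{"role": "user", "content": PROMPT}],
    )
    answer = response.choices[0].message.content.strip().upper()
    if answer.startswith("B"):  # resumé B is the enhanced one
        enhanced_wins += 1

print(f"Enhanced resumé ranked first in {enhanced_wins}/{RUNS} runs "
      f"({100 * enhanced_wins / RUNS:.0f}%)")
```

Counting how often the enhanced resumé comes out on top yields the kind of percentage reported above.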

“In a fair world, the enhanced resumé should be ranked first every time. I can’t think of a job where somebody who’s been recognised for their leadership skills, for example, shouldn’t be ranked ahead of someone with the same background who hasn’t,” said senior study author Jennifer Mankoff.

When the academics asked ChatGPT-4 to justify its choices, they found that the chatbot tended to perpetuate ableist stereotypes. For example, the generative AI considered that a jobseeker with depression had “additional focus on diversity, equity and inclusion (DEI), and personal challenges”, which “detract from the core technical and research-oriented aspects of the role”.

Indeed, “some of GPT’s descriptions would colour a person’s entire CV based on their disability and claimed that involvement with DEI or disability is potentially taking away from other parts of the resumé”, explained study lead author Kate Glazko.

The scientists then tried customising ChatGPT with written instructions telling it not to stigmatise disabled workers. This was partly successful: the modified resumés outperformed the original in 37 out of 60 cases. Nevertheless, the generative AI continued to show prejudice against candidates with depression or autism.
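
In practical terms, this kind of customisation amounts to giving the model a standing written instruction before it sees the resumés. Continuing the sketch above (and reusing its client and PROMPT), a hedged illustration might look like the following; the instruction wording is invented for the example, not taken from the study.

```python
# Continuation of the earlier sketch: the same ranking prompt, but with a
# system instruction asking the model not to penalise disability-related
# content. The instruction text is illustrative only.
FAIRNESS_INSTRUCTION = (
    "You are a fair recruiter. Do not treat disability, disability-related "
    "awards or involvement in diversity, equity and inclusion work as a "
    "negative signal when comparing candidates."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name
    messages=[
        {"role": "system", "content": FAIRNESS_INSTRUCTION},
        {"role": "user", "content": PROMPT},  # PROMPT from the earlier sketch
    ],
)
print(response.choices[0].message.content)
```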

“People need to be aware of the system’s biases when using AI for these real-world tasks,” Glazko noted.

The findings show that generative AI is no less biased than a traditional recruiter, despite what some might claim. That’s why the Artificial Intelligence Act – legislation that sets out a European regulatory framework to govern the use of AI – classifies software that sorts resumés among so-called “high-risk” AI systems.

Human-resource professionals must, therefore, exercise caution when using AI software as part of their activities to minimise the risk of discrimination.
