HR Jobs At Risk? Can AI Assess And Shortlist Job Applicants?

Mar 7, 2024

What does the future of Human Resources (HR) look like if a robot evaluates your resume? According to many who watch the fast uptake of artificial intelligence in HR departments and recruiting firms, this is already becoming the reality. It seems sensible that the best resumes will now be selected by an impartially calculating machine rather than a potentially biased person. But does that hold up in a legal sense?

There are several software tools available for screening applications. Picking out promising resumes is the most common use, but it is also becoming increasingly normal to analyse video interviews for signs that a candidate is behaving tensely, evasively, or dishonestly.

All of this software relies on statistics: it identifies common patterns and distinguishing features in past applications, which allows new applications to be sorted into the appropriate categories.
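As a rough illustration of this kind of statistical sorting (a hypothetical sketch, not a description of any vendor's actual product), a screening tool might score new resumes against word patterns learned from previously shortlisted applications:

```python
# Hypothetical sketch of statistical resume screening: score new applications
# against word frequencies learned from previously shortlisted resumes.
# All data, words, and thresholds here are invented for illustration.
from collections import Counter

def word_counts(text: str) -> Counter:
    """Count lowercase word occurrences in a resume."""
    return Counter(text.lower().split())

# "Training" data: resumes the company shortlisted in the past.
past_shortlisted = [
    "python data analysis sql reporting",
    "sql dashboards python statistics",
]

# Learn which words were common among past successful applications.
profile = Counter()
for resume in past_shortlisted:
    profile += word_counts(resume)

def score(resume: str) -> float:
    """Sum the historical frequency of each word in the new resume."""
    return sum(profile[w] for w in word_counts(resume))

# Sort new applications into bins based on how closely they match history.
new_applications = {
    "candidate_a": "python sql reporting experience",
    "candidate_b": "carpentry woodworking portfolio",
}
for name, resume in new_applications.items():
    bucket = "shortlist" if score(resume) >= 3 else "reject"
    print(name, bucket)
```

Note that in this toy setup the "profile" is nothing more than a summary of past hiring decisions, which is exactly why the objectivity question in the next paragraph matters.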

Objectivity is the headline benefit for many businesses. The idea is that you can be sure your HR team does not unintentionally exclude women from job opportunities or apply other discriminatory selection criteria. But how objective is such software really? After all, the data used to train such a system comes from past hiring practice, and discriminatory standards may be baked into that history.

A well-known example is Amazon's experimental recruiting tool, which systematically disadvantaged female candidates because it had been trained on years of historical applications that came overwhelmingly from men.

Then there is the General Data Protection Regulation (GDPR), which effectively forbids purely automated candidate selection and screening: applicants may not be subjected to decisions based solely on automated processing.

Big companies argue in favour of using AI tools in the recruitment process anyway. According to one such large corporate firm, it is fine to use artificial intelligence for HR tasks because the recruiter merely uses it as a tool and then applies his or her own discretion. Of course, it is hard to prove how heavily the recruiter actually relies on the AI in practice.

Europe is where the new, stronger rules are being introduced, most notably the AI Act. Because it covers a wide spectrum of algorithms and computerised judgement or decision-making affecting people, thinking of it as legislation for science-fiction technology would be misleading: it applies just as much to software built on conventional decision trees, fixed steps, and predefined alternatives.

The starting point is the effect such a system may have on individuals. If that effect is insignificant, the law permits its use without further restrictions. If the risks are unacceptable, its use is simply forbidden.

HR Systems Sit In A Grey Area:

For HR systems, AI is treated as a significant risk, but not an unacceptable one. Stringent requirements apply to ensure that users can understand how the system arrived at its conclusion, including detailed design documentation with risk management and built-in explanation modules. Written evidence of accuracy is also required. Oh, and faults here are the responsibility of both the provider and the user.
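To make the idea of a built-in explanation module concrete, here is a minimal hypothetical sketch (not a reference to any real product or to the AI Act's technical standards) of how a screening tool could attach the factors behind a score to its output, so a recruiter can see how the system arrived at its conclusion:

```python
# Hypothetical sketch of an "explanation module": every score comes with
# the per-criterion contributions that produced it. Criteria and weights
# are invented for illustration only.
from dataclasses import dataclass

WEIGHTS = {"years_experience": 0.5, "degree_match": 0.3, "skill_overlap": 0.2}

@dataclass
class ScoredCandidate:
    name: str
    score: float
    contributions: dict  # criterion -> weighted contribution

def score_candidate(name: str, features: dict) -> ScoredCandidate:
    """Score a candidate and record how each criterion contributed."""
    contributions = {k: WEIGHTS[k] * features[k] for k in WEIGHTS}
    return ScoredCandidate(name, sum(contributions.values()), contributions)

result = score_candidate(
    "candidate_a",
    {"years_experience": 0.8, "degree_match": 1.0, "skill_overlap": 0.4},
)
print(f"{result.name}: {result.score:.2f}")
for criterion, value in sorted(result.contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {criterion}: {value:+.2f}")
```

An output like this would also give the recruiter something concrete to put in a rejection letter, which leads to the GDPR point below.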

As a result, such systems are currently acceptable as an aid. The applicant has only one recourse: under the GDPR, a corporation is required to justify its decision to reject a candidate. If AI is used in the screening and assessment process, the rejection reason from a big corporation will read something like: “Our AI believed that your experience and abilities were insufficient.” Clear, but not pleasant to hear.