The Future of AI in HR: Preparing for Compliance

KOSA AI
5 min read · Apr 14, 2022


Max Gruber / Better Images of AI / Clickworker Abyss / CC-BY 4.0

Artificial intelligence is becoming a household name in the hiring process for many businesses. The technology has already made its way into every facet of it, from scheduling interviews to screening candidates and making hiring decisions, and it has had a considerable impact on the sector as a whole. When AI helps decide which candidate to hire, the first step is filtering the sheer volume of applications received for a specific job, and this is where the issues of AI bias and inequality in the workplace arise. With all of this in view, the conversation has shifted to yet another domain: compliance.

As in our previous discussion of compliance in the healthcare industry, if we want AI technologies to operate to their fullest, and if we want AI to truly revolutionize the field, we need to make them compliant with the regulations proposed for the HR industry.

Current AI Regulations for HR

There have been plenty of cases where HR AI systems for screening and selecting job candidates have shown bias in their algorithmic decisions.

It’s no secret that women and other minorities face unique challenges in the workforce. Even though women make up approximately 47% of today’s workforce, there are still many male-dominated industries. The United States Department of Labor estimates that although 74% of human resources managers are female, only 27% of CEOs are, and that number drops to just 7% for construction managers. One contributing factor to these statistics is unconscious gender bias in job descriptions. A specific example is Amazon’s AI hiring system, which picked out resumes most similar to those of the company’s most successful past applicants. Amazon eventually stopped using the algorithm to decide which candidates to interview: because most of the successful applicants in the past were male, the algorithm’s decisions were biased, specifically against women.

Another example of AI bias within HR and hiring processes is biased language in job descriptions, which discourages candidates from applying and causes a company to miss out on the benefits of a diverse workforce.

AI in HR

An analysis of job ads published by the city of Los Angeles found an average of nine gendered words per job post, with more masculine than feminine words on average; at the high end of the spectrum, a post contained a male-biased word for every 80 words. The appearance of gendered words in job advertisements is the result of an inference-based perceptual process: gendered language surfaces in advertisements depending on which gender predominates in the role. There is ample evidence that belongingness, the feeling that one fits in with others within a particular domain, affects people’s achievement motivation specifically and their engagement within a domain more generally, and that it can be signaled by cues in the environment.
(Read more in our Case study: Increase talent diversity through gender bias mitigation in job postings)
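To make the idea concrete, here is a minimal sketch of how such gendered-language screening could be approached. The word lists and counting logic are illustrative assumptions for this post, not the lexicon or methodology used in the Los Angeles analysis or in our case study.

import re

# Short illustrative samples of masculine- and feminine-coded words;
# real gender-decoder lexicons are much longer.
MASCULINE_CODED = {"competitive", "dominant", "driven", "ambitious", "independent"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "empathetic", "loyal"}

def gendered_word_counts(job_post: str) -> dict:
    """Count masculine- and feminine-coded words in a job post."""
    words = re.findall(r"[a-z]+", job_post.lower())
    return {
        "total_words": len(words),
        "masculine_coded": sum(w in MASCULINE_CODED for w in words),
        "feminine_coded": sum(w in FEMININE_CODED for w in words),
    }

ad = "We want a driven, competitive self-starter who is also collaborative."
print(gendered_word_counts(ad))
# {'total_words': 11, 'masculine_coded': 2, 'feminine_coded': 1}

A real screening tool would go further, for example by suggesting neutral replacements for flagged words, but even a simple count like this makes the imbalance in a posting visible.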

To mitigate the AI bias problem and establish AI ethics, equality, and diversity within hiring processes, companies that use AI systems for such decision-making should make sure their software complies with the proposed regulations. Several states currently have regulatory proposals under which companies would be required to disclose to job applicants how AI technology was used in the hiring or promotion process, and to allow candidates to request alternative evaluation approaches, such as having a human process their application. New York City’s government recently passed a law that will require any company selling human resources software using AI to complete third-party audits verifying that the technology doesn’t show bias against any gender, ethnicity, or race.
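To give a sense of what such a bias audit measures, here is a simplified sketch of a disparate-impact check of the kind a third-party auditor might run on a hiring system’s outcomes. The candidate numbers are made up, and the four-fifths threshold is used only as a commonly cited rule of thumb, not as the exact test any specific law prescribes.

from collections import Counter

# Hypothetical applicant pools and selection outcomes.
applicants = ["female"] * 200 + ["male"] * 200
selected = ["female"] * 40 + ["male"] * 60

def impact_ratios(applicants, selected, threshold=0.8):
    """Compare each group's selection rate to the best-performing group."""
    app_counts = Counter(applicants)
    sel_counts = Counter(selected)
    rates = {g: sel_counts[g] / app_counts[g] for g in app_counts}
    best = max(rates.values())
    return {
        g: {"selection_rate": rate, "impact_ratio": rate / best, "flagged": rate / best < threshold}
        for g, rate in rates.items()
    }

for group, stats in impact_ratios(applicants, selected).items():
    print(group, stats)
# female candidates are selected at a 0.20 rate vs 0.30 for male candidates,
# an impact ratio of about 0.67, below the 0.8 threshold, so it is flagged.

An actual audit would also examine intersections of gender, ethnicity, and race, and look at how the system’s scores, not just its final selections, differ across groups.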

Similarly, the European Union (EU) proposed in April 2021 to regulate the use of AI in HR, a move that could alter how companies recruit and hire workers across Europe.

The two main EU regulatory frameworks concerning HR companies are the General Data Protection Regulation (GDPR) and the proposed EU AI regulations. Because AI systems used in hiring and recruiting are considered high risk, the EU proposals so far stress that documentation describing the algorithms and their logic is crucial, to prevent unfair outcomes, rule out unacceptable use of personal data, and eliminate bias built into the algorithms. Furthermore, the EU’s regulatory proposals may impact recruitment and selection systems, such as advertising job vacancies, screening or filtering applications, and evaluating candidates with interviews and tests.

Compliance as the next step for AI in HR

Bearing all this in mind, it is crucial to highlight that developments in AI will likely accelerate exponentially over the next decade. Statistics show that “96% of senior HR professionals believe AI has the potential to greatly enhance talent acquisition and retention” and “55% of HR managers see evidence of AI becoming a regular part of HR within the next five years”. There is also a growing number of joint initiatives among major businesses and institutions in the HR industry committing to create safeguards against AI bias for the workforce.

As organizations figure out how to scale up AI-led innovations, they should also start paying attention to compliance requirements and to making recruiting better, fairer, and more effective. One approach is to ensure consistency and accuracy in the AI system’s data and to build responsible AI that mitigates the human biases affecting hiring decisions.

Conclusion

AI hiring software providers need to comply with proposed industry regulations to gain trust. AI developments in HR must be held to the principles of giving everyone an equal chance and equal access, being transparent, and producing explainable outputs. In parallel, being AI compliant means understanding the “why” behind every system decision and continuously uprooting discrimination and biased decisions.

Written by KOSA AI

Making technology more inclusive of all ages, genders, and races.
