The Future of AI in HR: Being in Compliance

KOSA AI
5 min read · Apr 28, 2022


AI Compliance Checklist

Assuming HR companies that use AI for hiring are aware that they need to take precautions toward making their AI responsible and compliant with AI regulation, what should their next steps be?

Continuing from our previous article on why HR companies employing AI technology should comply with the latest regulations, we will now dive deeper into how exactly to achieve compliance. Since pedigree bias is one of the major risk factors in AI systems used in hiring, the proposed AI regulations lay out frameworks and mechanisms to identify and mitigate bias, and they discuss challenges and opportunities for increasing fairness in the tech-HR industry.

According to the latest research by SHRM (the Society for Human Resource Management), a survey in which 1,688 members participated, “1 in 4 organizations report using automation or artificial intelligence (AI) to support HR-related activities, including recruitment and hiring.” There is no doubt, then, that we will keep seeing new AI models developed to optimize and improve the outcomes of HR interventions. At the same time, the research also points out that “while 30% say the use of automation or AI improves their ability to reduce potential bias in hiring decisions, 46% would like to see more information or resources on how to identify any potential bias when using these tools.” As the market grows, so does the need for an internal governance structure that tackles compliance and bias mitigation for HR AI systems. More and more companies are increasing their efforts and adopting new tools to minimize bias and make their machine learning models explainable. So if you are an AI provider and/or your HR-related activities use AI models for any decision-making, the time to act on compliance is now.

Companies can approach this from two angles:

(i) outsource compliance to third-party model-monitoring systems that take care of the proposed regulatory frameworks and ensure the models are in check before deployment; or

(ii) build an internal team to validate, monitor, and analyze machine learning models for compliance.

Nonetheless, all companies need to establish technical expertise to operationalize AI governance accurately. This may sound simple, but implementation is an ongoing process that is lengthy and expensive. It includes constant training, validating, analyzing, and monitoring, both for compliance with industry-specific regulations and for ethical, responsible AI decisions overall.

The compliance angle

As machine learning methods are increasingly used in hiring and recruiting, there is growing concern that they may reflect and perpetuate systemic inequalities and biases. The first step toward compliance is monitoring training and testing datasets to stop biased inferences at the source; you can then use this information to intervene as required.
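Below is a minimal sketch of what such a dataset check might look like, assuming a historical hiring dataset with a “gender” column and a binary “hired” label. The column names, the file name, and the four-fifths (0.8) threshold are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: flag disparate impact in a training dataset before it
# reaches a model. Column names and "historical_hiring.csv" are hypothetical.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, label_col: str) -> pd.Series:
    """Positive-outcome rate for each demographic group."""
    return df.groupby(group_col)[label_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by highest group rate (1.0 = parity)."""
    return rates.min() / rates.max()

df = pd.read_csv("historical_hiring.csv")  # hypothetical labeled dataset
rates = selection_rates(df, group_col="gender", label_col="hired")
ratio = disparate_impact_ratio(rates)

print(rates)
if ratio < 0.8:  # the EEOC "four-fifths" rule of thumb
    print(f"Warning: disparate impact ratio {ratio:.2f} falls below 0.8")
```

The same check belongs on your test split as well, since a balanced training set can still hide skew in the evaluation data.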

Check out KOSA AI’s Responsible AI tool and customize a checklist tailored to your needs.

Our case study “Increase talent diversity through gender bias mitigation in job postings” proposes a potential solution to algorithmic gender bias in HR AI using an automated responsible AI system. We found that the job ads we analyzed contain an average of nine gendered words per post, with more masculine than feminine words on average; at the high end of the spectrum, one in every 80 words in a job post skews male. To mitigate this risk and stay in compliance with the proposed regulatory and governance frameworks, we used KOSA’s Automated Responsible AI System (ARAIS) to neutralize gendered words and improve readability in the text data.

Based on the research of Professors Danielle Gaucher and Justin Friesen of the University of Waterloo, “Evidence That Gendered Wording in Job Advertisements Exists and Sustains Gender Inequality”, we have developed technology to automatically neutralize gendered words in a given body of text. Differences in wording, where they exist, may exert important effects on individual-level judgments that sustain inequality: gendered wording cues a sense of belonging (or exclusion) in readers and has been shown to deter applicants.

Text Data Editor tool

Gendered words, whether feminine or masculine, are highlighted in the software. Given the context of each text body, our software can recommend neutral alternatives that increase the inclusiveness of the text.

Gendered language tends to emerge in job advertisements through an inference-based perceptual process: the wording reflects whichever gender already predominates in the role. There is ample evidence that belongingness (feeling that one fits in with others within a particular domain) affects people’s achievement motivation specifically and their engagement within a domain more generally, and that it can be signaled by cues in the environment.
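To make the idea concrete, here is an illustrative sketch of highlighting and neutralizing gendered words with a hard-coded lexicon. The word lists are loosely drawn from the masculine- and feminine-themed terms published with Gaucher and Friesen’s paper, and the neutral replacements are our own examples; none of this is KOSA’s actual ARAIS lexicon or logic.

```python
# Illustrative sketch: detect and neutralize gendered words in a job ad.
# The word lists and replacements are examples, not a production lexicon.
import re

MASCULINE = {"competitive": "motivated", "dominant": "leading",
             "aggressive": "proactive"}
FEMININE = {"nurture": "support", "sympathetic": "understanding",
            "loyal": "committed"}
NEUTRAL_MAP = {**MASCULINE, **FEMININE}

GENDERED_RE = re.compile(
    r"\b(" + "|".join(map(re.escape, NEUTRAL_MAP)) + r")\b", re.IGNORECASE
)

def highlight_gendered(text: str) -> list[str]:
    """Return gendered words found in the text, in order of appearance."""
    return [m.group(0) for m in GENDERED_RE.finditer(text)]

def neutralize(text: str) -> str:
    """Swap each gendered word for its neutral alternative."""
    return GENDERED_RE.sub(lambda m: NEUTRAL_MAP[m.group(0).lower()], text)

ad = "We want a competitive, aggressive candidate to join our dominant team."
print(highlight_gendered(ad))  # ['competitive', 'aggressive', 'dominant']
print(neutralize(ad))
# We want a motivated, proactive candidate to join our leading team.
```

A real system needs context: “competitive salary”, for instance, is not male-coded, which is why our software weighs the surrounding text before recommending a replacement.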

No matter the trend, keep AI human

As AI becomes a household name in the hiring process for many businesses, more and more initiatives are drawing attention to the importance of tackling AI bias and making AI more human. For example, a recent IEEE paper analyzes different practices for establishing fairness in AI-based candidate screening, driven by growing concerns that AI biases can adversely impact hiring decisions. The compliance angle can kick-start the process of building better, fairer, and more responsible AI, but it should not be a merely performative act to meet the proposed regulations. It should be intersectional, continuous, multi-stakeholder work that involves: evaluating and mitigating bias against different ages, genders, ethnicities, or any other sensitive attribute; understanding the human impact of the model; complying with industry-specific regulations; and constant model monitoring before and after deployment, as in the sketch below.
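As one example of the post-deployment piece, here is a hedged sketch of a monitoring check that compares a screening model’s positive-prediction rates across groups on each logged batch of decisions. The function, the sample data, and the 0.1 alert threshold are illustrative assumptions rather than any particular library’s API.

```python
# Sketch: post-deployment fairness monitoring on a logged batch of decisions.
import numpy as np

def demographic_parity_gap(preds: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [preds[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

# Hypothetical logged batch: binary screening decisions plus group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
groups = np.array(["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"])

gap = demographic_parity_gap(preds, groups)
if gap > 0.1:  # illustrative alert threshold
    print(f"Alert: demographic parity gap {gap:.2f} exceeds threshold")
```

Run on a schedule against production logs, a check like this catches drift that pre-deployment validation alone would miss.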

Conclusion

Moving forward, artificial intelligence will play an essential role in candidate screening, recruiting, and hiring. If bias is left unattended, it will persist and resurface in new contexts, undermining the purpose of compliance regulations. It is therefore crucial to build and implement fair AI models that minimize bias and promote equality. This includes developing and distributing best practices in bias detection, as well as creating targeted programs to mitigate these issues and start ticking the boxes for AI compliance.

Written by KOSA AI

Making technology more inclusive of all ages, genders, and races.
