We have so far published two articles on the topic of AI compliance: one exploring the need for compliance within the HR industry, and the other suggesting ways to achieve it. When AI is used for HR-related decision making, such as recruiting, hiring, and candidate screening, it can disproportionately affect underrepresented candidates. Conversations about the ethical use of AI in HR have therefore begun, aiming to ensure talent-pool diversity and inclusivity when AI helps decide whom companies should hire. This article, together with the previous two, should give you a better idea of why to take steps toward AI compliance within your own organization. Below, you can read how to stay up to date with the ever-changing AI regulation landscape and its compliance checks.
Where to check for updates on AI compliance
There are many different frameworks and approaches for staying in compliance, depending on your values and principles of fairness and ethics and/or the specific national AI strategies. For industry leaders in any sector, here are some sources worth checking for recent news:
- The official site of the European Union (Link). The EU and its bodies, such as the European Parliament, frequently share official reports, news, and proposals regarding the use of AI in HR and other industries.
- Microsoft’s AI Fairness Checklists (Link)
- IEEE’s Ethics in action website (Link)
- Ada Lovelace Institute (Link)
- The Society for Human Resource Management (Link)
Compliance is a long-term commitment
Regulations in the AI industry change constantly, which pressures both AI builders and those who use AI in HR to commit to continuous updates and retraining of AI models. Depending on the nature of the data sets and machine learning models, they need to be monitored and adapted regularly to stay in compliance.
There are many angles from which to maintain compliance, whether with proposed and ever-changing regulations or with ethical and responsible-AI principles.
Effective January 1, 2023, New York City employers will be restricted in their use of artificial intelligence (AI) tools when recruiting and hiring employees and making other employment-related decisions.
As AI systems are a product of their algorithms and training data, they can draw biased conclusions if the data feeding those algorithms is skewed toward only specific groups of people. To address such problems, the newly proposed regulations state: “The law prohibits the use of such tools to screen a candidate or employee for an employment decision unless it has been the subject of a “bias audit” no more than one year prior to its use. A “bias audit” is defined as an impartial evaluation by an independent auditor that tests, at minimum, the tool’s disparate impact upon individuals based on their race, ethnicity, and sex.”
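To make the idea of a disparate-impact test concrete, here is a minimal sketch of the kind of calculation an audit might include. It assumes hiring outcomes are available as simple (group, selected) records and uses the four-fifths rule, a common heuristic from US employment-selection guidance; the function and group names are illustrative, not part of any official audit procedure.

```python
# Sketch of a disparate-impact check using the four-fifths rule.
# Group names and data are hypothetical, for illustration only.
from collections import defaultdict

def selection_rates(records):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def impact_ratios(rates):
    """Ratio of each group's rate to the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

def flagged_groups(ratios, threshold=0.8):
    """Groups whose impact ratio falls below the four-fifths threshold."""
    return [g for g, r in ratios.items() if r < threshold]

records = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
rates = selection_rates(records)        # group_a: 0.75, group_b: 0.25
print(flagged_groups(impact_ratios(rates)))  # ['group_b']
```

A real bias audit goes well beyond this, but even a small automated check like this can surface skewed outcomes early, before an independent auditor does.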
Also, it is no secret that women and other minorities face unique challenges in the workforce. Even though women make up approximately 47% of today’s workforce, there are still many male-dominated industries. The United States Department of Labor estimates that although 74% of human resources managers are female, only 27% of CEOs are. That number drops to just 7% for construction managers. One contributing factor to these statistics is unconscious gender bias in job descriptions. (Check out our case study on AI in HR to read more.)
There’s been a growing awareness in companies of the human biases that affect recruitment and management. Most recruiters now pay lip service to the idea that recruiting can be made better — fairer and more effective — if bias is eliminated from hiring. And being in compliance is one of the first steps of the process. Overall, being in compliance means investing in strategies to better understand and build trustworthy AI systems and making sure they are working as intended regularly.
To be in compliance is to think beyond the current AI model and continuously check whether your AI systems function as intended under various scenarios, keeping in mind different human attributes and how they overlap with each other. One way to do this is to create and use a compliance checklist integrated within the AI systems, ensuring the organization keeps up with the most recent regulatory changes and mitigates potential bias.
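Such a checklist can itself be partly automated. The sketch below shows one possible shape for a checklist runner; the check names, the one-year audit window (echoing the NYC rule quoted above), and the system-state fields are all illustrative assumptions, not an official checklist.

```python
# Hypothetical compliance-checklist runner; check names and state
# fields are illustrative, not an official or complete checklist.
from datetime import date, timedelta

def audit_is_recent(last_audit, today=None):
    """Bias audit no more than one year old, per the rule quoted above."""
    today = today or date.today()
    return today - last_audit <= timedelta(days=365)

def run_checklist(system_state):
    """Run every check against the system's recorded state."""
    checks = {
        "bias_audit_within_one_year": audit_is_recent(system_state["last_bias_audit"]),
        "candidates_notified": system_state["candidates_notified"],
        "model_monitoring_enabled": system_state["monitoring_enabled"],
    }
    # Return the names of failed checks; an empty list means all passed.
    return [name for name, passed in checks.items() if not passed]

state = {
    "last_bias_audit": date.today() - timedelta(days=400),
    "candidates_notified": True,
    "monitoring_enabled": True,
}
print(run_checklist(state))  # ['bias_audit_within_one_year']
```

Running a script like this on a schedule turns the checklist from a one-off document into an ongoing control, which is exactly the long-term commitment compliance requires.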
Check out KOSA’s Responsible AI System, which takes care of upkeep for you and ensures your AI system remains secure and compliant. It integrates seamlessly with any data source or model framework and can future-proof your AI.