Breaking down the EU AI regulations

KOSA AI
6 min read · Oct 19, 2021
This blog post is part of a broader series called ‘Breaking down the AI regulations’, laying out why regulations are present and important within the AI industry. The next articles will cover the EU AI regulations with in-depth analysis. Read the first article here.

Global research by McKinsey shows that companies intend to invest heavily in AI in the upcoming years, with 43% of them having a clearly defined AI vision and strategy. In parallel, regulatory initiatives are pushing companies to acknowledge and mitigate AI risks such as security, AI bias, AI explainability, and other equity and fairness risks. In the previous article of our ‘Breaking down the AI regulations’ series, we gave an overview of the world of AI regulation and the different aspects, laws, and rules that need to be addressed when AI systems are used or developed. This article, as part of the series, introduces the EU AI regulations in more depth and explains what companies can do to be in compliance and build AI products for the future.

EU writing the AI rules: what’s it all about?

The EU regulations on AI can be found in the combination of the European Commission’s AI proposal published on April 21, 2021, and the White Paper ‘On Artificial Intelligence — A European approach to excellence and trust’ published on February 19, 2020.

The proposed AI regulations will be directly applicable in EU countries once they enter into force. Additionally, irrespective of where the AI provider is located, the EU AI regulations cover all AI systems that are somehow connected to the EU. More precisely, if your AI system is placed on the market or put into service in the EU, if the users of your AI system are located in the EU, and/or if your AI system is located outside the EU but its output is used in the EU, then you most certainly need to take the EU AI regulations into account.
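To make this scope test concrete, here is a minimal sketch in Python of the three triggers described above; the type and function names are hypothetical, chosen for illustration rather than taken from the regulation:

```python
# Minimal sketch of the territorial-scope test described above.
# Names and structure are illustrative, not taken from the regulation text.
from dataclasses import dataclass

@dataclass
class AISystem:
    placed_on_eu_market: bool  # placed or put into service in the EU
    users_in_eu: bool          # users of the system are located in the EU
    output_used_in_eu: bool    # system sits outside the EU, but its output is used there

def eu_ai_rules_apply(system: AISystem) -> bool:
    """Return True if any of the three scope triggers is met."""
    return (
        system.placed_on_eu_market
        or system.users_in_eu
        or system.output_used_in_eu
    )

# Example: a model hosted outside the EU whose predictions are consumed by an EU customer.
print(eu_ai_rules_apply(AISystem(False, False, True)))  # True
```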

Another highlight is that the EU prohibits AI systems that are considered a clear threat to the safety, livelihoods, and rights of individuals and violate the EU’s values and fundamental rights. This includes AI systems or applications that:

  • manipulate human behavior (e.g. toys using voice assistance that encourage dangerous behavior in minors),
  • exploit the vulnerabilities of a specific group of individuals, such as those related to age or physical or mental disability,
  • and allow ‘social scoring’ by governments, specifically, using AI systems to evaluate or classify the trustworthiness of individuals based on their social behavior or known or predicted personal or personality characteristics.

Additionally, it is worth noting that the data governance rules set out in these regulations allow providers of AI systems to use data concerning sensitive attributes such as race, gender, and ethnicity to ensure “bias monitoring, detection and correction”. The EU imposes penalties of up to EUR 30 million if a company develops a prohibited AI system or fails to put in place a compliant data governance program. How can companies that build AI make sure they follow these regulations, and what do the rules mean for them?

Here comes the so-called “risk-based approach”, where the EU regulation divides AI systems into three risk categories: unacceptable-risk AI systems, high-risk AI systems, and limited-and-minimal-risk AI systems. Organizations can use this framework as a starting point in developing their internal strategy. AI systems with unacceptable risk will be completely banned, whereas providers of high-risk AI systems will have to implement data governance and management practices ensuring that the data used to train the AI is not biased, incomplete, or erroneous.
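As a starting point for such an internal strategy, the three categories can be modelled explicitly. The sketch below is a simplified illustration; the trigger conditions are assumptions standing in for the regulation’s full legal criteria:

```python
# Simplified illustration of the risk-based approach; the trigger conditions
# below are assumptions, not the regulation's full legal criteria.
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "subject to data governance and management obligations"
    LIMITED_OR_MINIMAL = "lighter transparency obligations"

def categorize(manipulates_behavior: bool,
               exploits_vulnerable_groups: bool,
               enables_social_scoring: bool,
               on_high_risk_list: bool) -> RiskCategory:
    # Prohibited practices trump everything else.
    if manipulates_behavior or exploits_vulnerable_groups or enables_social_scoring:
        return RiskCategory.UNACCEPTABLE
    # e.g. the uses designated as high-risk by the Commission (see table below).
    if on_high_risk_list:
        return RiskCategory.HIGH
    return RiskCategory.LIMITED_OR_MINIMAL

print(categorize(False, False, False, on_high_risk_list=True))  # RiskCategory.HIGH
```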

Allen & Overy’s key findings point out the AI systems designated by the European Commission that are considered being high-risk. The below table includes a selection of the AI systems that are most relevant for the private sector:

AI systems designated by the European Commission’s AI regulation that are considered high-risk. Source: Allen & Overy

What requirements does the regulation place on organizations using or providing AI systems?

The data sets used to train AI may reflect socially constructed biases, or contain inaccuracies, errors, and mistakes. To minimize these risks, companies must make sure that their AI systems and algorithms meet quality requirements, including:

1. Organizations must first understand why their algorithm makes particular decisions

  • when data is prepared, it must go through processes such as labeling and cleaning to make sure that the input data is well prepared
  • input data must be examined for possible biases and must be free of errors (a minimal sketch of such checks follows below)
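As a minimal illustration of these data-quality checks, the sketch below uses pandas to flag missing values and a skewed distribution of a sensitive attribute; the column names and the 10% threshold are assumptions, not legal standards:

```python
# Minimal sketch of pre-training data checks; the column names and the
# 10% threshold are illustrative assumptions, not legal standards.
import pandas as pd

def basic_data_checks(df: pd.DataFrame, sensitive_col: str = "gender") -> None:
    # 1. Completeness/errors: report columns with missing values.
    missing = df.isna().sum()
    print("Missing values per column:\n", missing[missing > 0])

    # 2. Representation: check how balanced the sensitive attribute is.
    shares = df[sensitive_col].value_counts(normalize=True)
    print(f"Share of each {sensitive_col} group:\n", shares)
    if shares.min() < 0.10:
        print("Warning: one group makes up <10% of the data; review for bias.")

# Toy example with a missing value and an imbalanced sensitive attribute.
df = pd.DataFrame({"gender": ["f"] + ["m"] * 10,
                   "income": [1, 2, None, 4, 5, 6, 7, 8, 9, 10, 11]})
basic_data_checks(df)
```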

2. Provide information about the logic behind the algorithm and automated decision-making to data subjects

  • high-risk AI providers must prepare technical documentation before putting the AI system on the market
  • users of the AI system must be given, on request, sufficient information to be able to interpret the AI system’s outputs (decisions); one illustrative explainability technique is sketched below
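One common way to surface the logic behind a model’s decisions is a feature-importance report. The sketch below uses scikit-learn’s permutation importance on a toy model; this is just one possible explainability technique, not a method the regulation prescribes:

```python
# Illustrative sketch: surfacing which inputs drive a model's decisions via
# permutation importance. One possible technique, not one the regulation prescribes.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffle each feature in turn and measure how much the score drops.
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```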

3. Be able to correct erroneous input data

  • after the AI system is put on the market, it still has to be monitored on an ongoing basis (see the sketch below)
  • the provider must identify any serious incident or malfunctioning of the AI system that infringes fundamental human rights, and report it to the relevant authorities
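A minimal post-market monitoring loop might track output statistics per batch and flag anomalies for human review and possible incident reporting. Everything below (baseline rate, drift threshold, logger name) is an assumed example value:

```python
# Minimal sketch of post-market monitoring; the baseline rate, drift threshold,
# and logger name are assumed example values, not regulatory requirements.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_monitoring")

BASELINE_POSITIVE_RATE = 0.30  # rate observed during validation (assumed)
DRIFT_TOLERANCE = 0.10         # illustrative threshold

def monitor_batch(predictions: list) -> None:
    positive_rate = sum(predictions) / len(predictions)
    log.info("Batch positive rate: %.2f", positive_rate)
    if abs(positive_rate - BASELINE_POSITIVE_RATE) > DRIFT_TOLERANCE:
        # Flag for human review; a serious, rights-infringing incident would
        # additionally have to be reported to the relevant authorities.
        log.warning("Output drift detected; escalate for incident review.")

monitor_batch([1, 1, 1, 0, 1, 1, 0, 1])  # 0.75 positive rate triggers the warning
```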
EU AI regulations’ requirements on companies. Source: McKinsey & Company

Hence, to be in compliance with the EU AI regulations, companies will have to test their AI-based products and services to ensure that they meet the requirements above and do not fall under the prohibition criteria, i.e. that the AI systems are not a clear threat to the safety, livelihoods, and rights of individuals and do not violate the EU’s values and fundamental rights.

They will have to establish an internal risk management system that is maintained throughout the entire lifecycle of their AI system. Such a system should identify the risks associated with their AI and adopt suitable measures to address them, following the above-mentioned requirements. In essence, all companies involved in the development, production, and supply of AI systems must examine their data sets thoroughly to make sure that there are no input biases, prepare the data by cleaning and labeling it, and address any gaps or shortcomings. Subsequently, companies must have in place an AI explainability method that makes decision-making transparent, and an ongoing monitoring system that continuously examines how the AI system functions.
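Such a lifecycle process can start as simply as an auditable checklist tracked per system release. The structure below is hypothetical, intended only to show how the above requirements can be made explicit and trackable:

```python
# Hypothetical structure for tracking compliance steps across an AI system's
# lifecycle; field names are illustrative, not a prescribed format.
from dataclasses import dataclass, field

DEFAULT_CHECKS = {
    "data examined for bias and errors": False,
    "data cleaned and labeled": False,
    "explainability method in place": False,
    "technical documentation prepared": False,
    "ongoing monitoring configured": False,
}

@dataclass
class ComplianceRecord:
    system_name: str
    checks: dict = field(default_factory=lambda: dict(DEFAULT_CHECKS))

    def outstanding(self) -> list:
        """Steps still to be completed before (and after) going to market."""
        return [step for step, done in self.checks.items() if not done]

record = ComplianceRecord("loan-approval-model")
record.checks["data examined for bias and errors"] = True
print(record.outstanding())
```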

A promise of creating a truly people-centered AI regulation?!

The EU AI regulations are a wake-up call to make emerging AI technologies responsible and a trigger for organizations to establish AI governance and compliance. However, all the above-mentioned regulatory initiatives leave organizations to handle compliance internally and on their own, and lack a directive making documentation accessible and reviewable by the general public. In other words, the regulations lack a focus on those affected by AI systems, missing any general requirement to inform people who are subjected to algorithmic assessments. Take AI bias, for instance: the regulation states that AI systems must undergo bias assessment, but there is no clear provision that the results must be provided right away to users, the public, or those potentially affected by discriminatory algorithms. They can be made available only upon request.

So, is this going to be a later item on the agenda of the EU AI regulators? Maybe, but it is a great start for mandating assessments and disclosures of AI systems’ accuracy and responsibility.

Even though the EU regulations are not yet officially in force, they are a ticking clock for companies to get an AI governance plan on the table. To read more about how this can be done and how to kick-start your own AI governance, visit KOSA AI’s Community Launch page and follow our ‘Breaking down the AI regulations’ series. The next article will analyze how the EU AI regulations affect the healthcare industry.

This blog post is part of a broader series called ‘Breaking down the AI regulations’, laying out why regulations are present and important within the AI industry. The next articles will cover the EU AI regulations in the healthcare sector with in-depth analysis. Read the first article here.



KOSA AI

Making technology more inclusive of all ages, genders, and races.