Breaking down the AI regulations
Companies, governments, and other institutions have embedded artificial intelligence into their products, services, processes, and decision-making to a great extent. This raises serious questions about how these systems use data and what the implications of that use are. The stakes become even higher when we consider complex, evolving algorithms that propose a health diagnosis, approve a loan, or even drive a car autonomously. Now more than ever, it is essential to develop AI tools that are trustworthy and responsible, as AI has and will continue to have wide-ranging economic impacts across manufacturing, transportation, health, education, and many other sectors. One way to achieve this is through public sector policies and laws that promote and regulate AI. The topic is still quite recent among regulators globally: between 2016 and 2020, a wave of AI regulations and guidelines was published in order to maintain social control over the use of algorithms in our everyday lives. For these regulations to be effective, many different aspects, laws, and rules need to be addressed and underlined. This article gives an overview of the AI regulations out there and is the first in a broader series called ‘Breaking down the AI regulations’. The series will break down the EU regulations, laying out the foundations of why regulations are present and important within the AI industry.
An insight into the world of AI regulations
AI regulations propose a legal framework for ensuring that AI is responsibly researched and developed. More specifically, the use of certain AI technologies should be regulated, or at the very least monitored, to prevent the misuse or abuse of technological power. If highly advanced and complex AI systems are left uncontrolled and unsupervised, they risk deviating from desirable behavior and performing tasks in unethical ways.
Only recently, there has been a burning debate in the news after the UK government announced that it may revoke Article 22 of the EU GDPR, a provision that was absorbed into UK law post-Brexit. Article 22 protects individuals from automated decision-making that has legal or similarly significant effects on them, and it guarantees that people can seek a human review of an algorithmic decision. Revoking it could result in algorithmic inequalities and AI bias, affecting minorities the most.
Similarly, civil rights organizations are calling on regulatory bodies to adopt regular civil rights and equity measures that ensure AI providers are transparent about their AI systems and the impact those systems have on people, communities of color, and other underrepresented groups.
Here is an overview of how AI regulation is handled worldwide and where different countries stand on the spectrum.
According to research by the OECD, around 60 countries have developed AI policies, including formulating AI ethical principles, investing in AI R&D, preparing the workforce for both the opportunities and the disruptions brought by AI, and assessing the need for AI regulation and standards.
The EU appears to be the leading example in proposing regulatory measures, with the publication of its white paper on AI and the GDPR. The proposed regulations set out prohibitions for certain AI systems: an AI system will be prohibited if it is considered a clear threat to the safety, livelihoods, and rights of individuals or violates the EU’s values and fundamental rights. The framework specifically addresses how businesses should provide transparency around automated decision-making.
Different national governments within Europe, as well as organizations such as the OECD, business, academia, and civil society, have developed ethical guidelines for AI. For instance, the UK, Germany, France, and the Netherlands are releasing reports, frameworks, and guidelines on the measures needed to build trust and enable innovation in AI and data-driven technologies.
The US government recently proposed AI ethical guidelines, along with sector-specific ethical principles from the Department of Defense, addressing issues such as algorithmic accountability, facial recognition, privacy and algorithmic profiling, and transparency.
Similar to the EU, several US states have passed laws requiring transparency about how data may be used in technologies such as AI.
While the United States and the EU are assumed to be the world leaders in AI, China is catching up fast, and its main initiatives seek to erect barriers to the free and open development of AI. China has developed ambitious AI policies that serve as a roadmap for becoming the global leader in AI by 2030. Recently, however, Chinese authorities released a new draft document with restrictions and rules for regulating the use of algorithms; if passed, it might require companies to restructure their businesses and give regulators access to proprietary information. The document, referred to as a ‘first of its kind’ in China, states that “humans (will) have full autonomous decision-making power and the right to choose whether to accept the services provided by AI and the right to withdraw from the interaction with an AI system at any time”.
Japan has created a network of academia, businesses, and non-governmental organizations that discusses and recommends various ethics guidelines and regulations for AI. Its national government released an AI technology strategy paper in March 2017 noting that it would set up additional opportunities for examining the ethical aspects of AI.
What does this mean for companies?
All the regulations mentioned above are just a first step toward guidelines for more ethical and responsible AI, and it is still hard for AI practitioners to operationalize them in their daily work. According to McKinsey & Company, companies and organizations still have a lot of work to do to prepare for the regulations and to start addressing the risks associated with AI more broadly. For instance, Table 1 below shows that the EU regulation divides AI systems into three risk categories: unacceptable-risk AI systems, high-risk AI systems, and limited- and minimal-risk AI systems. Organizations can use this framework as a starting point for developing their own internal strategy. However, the proposed regulatory framework focuses exclusively on the risks specific AI systems pose to the public, not on the broader questions of why and how AI is designed and deployed. As the non-profit digital rights organization Access Now has pointed out, this EU approach to regulation does not go far enough in protecting human rights: it permits companies to adopt AI technologies as long as their operational risks are low.
Nonetheless, the sooner organizations adapt to the current regulatory reality, the greater their long-term success with AI technology will be. The first step for each company is to establish an AI governance system that ensures AI risk compliance and auditing.
This blog post is the first in a broader series called ‘Breaking down the AI regulations’, laying out the foundations of why regulations are present and important within the AI industry. The next articles will focus on the EU regulations. Follow us to read the upcoming articles in this series.