Explainable AI is not enough. Here’s why.

KOSA AI
5 min read · Sep 21, 2021


Explainable AI is not enough, but what is the missing piece needed to achieve a responsible AI system?

There is no doubt that with the rise of the AI industry, people are getting worried about the use of AI in their everyday lives. Only recently, we started seeing figures such as 50% of AI users encountering at least two incidents caused by bias in AI systems over the last 2–3 years. This has raised ethical concerns about how AI algorithms reach their decisions, whether we can trust them, and how they will affect humanity. The core issue is that AI decisions and outcomes often come without justification, which is exactly why the inner workings of these autonomous systems need to be opened up and explained.

What is explainable AI?

Explainable AI helps data scientists, auditors, and business decision-makers ensure that AI systems can reasonably justify their decisions and explain how they reach their conclusions. Just as important is being able to explain the accuracy of AI algorithms and what influenced a particular outcome. Within the scope of the EU’s GDPR, the regulatory expectation is that companies have explainable AI, validate their models, and are able to provide transparency in order to gain trust.
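To make this concrete, here is a minimal sketch of what post-hoc explainability can look like in practice, using the open-source SHAP library. The dataset and model below are illustrative placeholders, not part of any specific compliance setup.

```python
# Minimal sketch of post-hoc explainability with SHAP.
# The model and dataset are illustrative placeholders, not a production setup.
import shap
from sklearn.ensemble import RandomForestClassifier

# A small demo dataset (UCI Adult income) bundled with SHAP.
X, y = shap.datasets.adult()

# Train a model whose raw decision process is hard to inspect directly.
model = RandomForestClassifier(n_estimators=100, max_depth=6).fit(X, y)

# TreeExplainer attributes each prediction to the input features,
# answering "which features pushed this decision, and by how much?".
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Visualize which features drove the model's decisions overall.
shap.summary_plot(shap_values, X.iloc[:100])
```

Per-feature contributions like these are what allow an auditor or decision-maker to ask why a particular case was scored the way it was.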

As AI becomes more advanced, the mistakes it makes along the way need to be recorded so they are not repeated. That is difficult when models operate as “black boxes”, as they are called in explainable-AI language: systems whose internal reasoning, shaped by the data they were trained on, is most of the time impossible to interpret.

To comply with the GDPR quickly, AI creators have put explainable AI frameworks into practice. But the real question is: how do you justify the gender bias in the Apple Card algorithm after it has already decided to extend more credit to men than to women, or stop developing and researching facial recognition technology once the damage of racial AI bias has already been done?!
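As a toy illustration of how such a disparity can be surfaced before launch (the numbers and group labels below are entirely made up, not Apple Card data), one simple check is to compare approval rates across groups:

```python
# Toy sketch: surfacing a gap in approval rates between two groups.
# All numbers below are fabricated purely for illustration.
approvals = {
    "group_a": {"applied": 1000, "approved": 620},
    "group_b": {"applied": 1000, "approved": 410},
}

rates = {group: v["approved"] / v["applied"] for group, v in approvals.items()}
gap = rates["group_a"] - rates["group_b"]

print(rates)                            # {'group_a': 0.62, 'group_b': 0.41}
print(f"approval-rate gap: {gap:.0%}")  # 21% -- a gap this large is a red flag worth auditing
```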

Let’s not let history repeat itself

The issues mentioned above were detected only after the systems had been deployed and the AI models were in use. Explainable AI is thus put into practice to better understand the strengths and limitations of a system, not to prevent ethical issues from happening in the first place. Given that the use of AI grew by 270% over the past four years, and that AI systems are hard to change once implemented, especially at the corporate level, ‘fixing’ an already existing AI system by being able to explain it is not enough. As Capgemini stated in their research:

Consumers, employees and citizens will reward organizations that proactively show their AI systems are ethical, fair and transparent

It is clear that there needs to be an addition to the explainable AI principle.

What’s the missing puzzle piece?

For an AI system to be socially just and to manifest an organization’s values and principles in the decisions its algorithms make, a responsible AI framework needs to be applied to ensure its integrity. To achieve this, AI systems must have ethics embedded in their design. There is a whole approach dedicated to this, called “ethics by design”, which aims at the systematic inclusion of ethical values, principles, requirements, and procedures into AI design, development, deployment, and the entire AI pipeline. It is an approach that spans the whole AI lifecycle and everyone related to it, technical and non-technical stakeholders alike, in order to minimize risks and maximize fairness. “Ethics by design” encourages asking ethical questions throughout the whole process, which is not the case today in most AI projects, where social dilemmas are usually raised only at the beginning and/or after an issue appears.

Figure: overview of the context of an internal algorithmic audit with all stakeholders included (from “Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing”).

Such a framework ensuring ‘ethics by design’ has been proposed by researchers at Google: an algorithmic auditing approach that supports artificial intelligence system development end-to-end. It is named SMACTR, for Scoping, Mapping, Artifact Collection, Testing, and Reflection, and it identifies risks and reduces them to the lowest degree possible. SMACTR is an internal auditing workflow that operationalizes the identification of risk and bias and gives different stakeholders a way to evaluate their AI system for ethical deployment. One of its integral parts is that it holds stakeholders accountable for creating fair AI systems. It can also map out how things could be done differently in the future, including around social or ethical challenges.
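Purely as an illustrative sketch, and not Google’s published tooling, here is one way the five SMACTR stages and their expected artifacts could be tracked as an internal checklist:

```python
# Illustrative sketch only: one possible way to track the five SMACTR audit
# stages and their artifacts internally. Not an official implementation.
from dataclasses import dataclass

@dataclass
class AuditStage:
    name: str
    artifacts: list        # documents produced or reviewed at this stage
    completed: bool = False

smactr_audit = [
    AuditStage("Scoping",             ["ethical review of the use case", "social impact assessment"]),
    AuditStage("Mapping",             ["stakeholder map", "interview notes"]),
    AuditStage("Artifact Collection", ["model cards", "datasheets for datasets"]),
    AuditStage("Testing",             ["adversarial tests", "fairness metric reports"]),
    AuditStage("Reflection",          ["risk analysis", "remediation and go/no-go decision"]),
]

def audit_ready_for_deployment(stages):
    """The model only ships once every stage has been signed off."""
    return all(stage.completed for stage in stages)

print(audit_ready_for_deployment(smactr_audit))  # False until every stage is completed
```

The point is not the code itself but the workflow it encodes: every stage has to be completed and signed off before the system is allowed to ship.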

Here is a closer look at the Google SMACTR framework, which covers the whole lifespan of the AI and machine learning process, ensuring ethics by design:

Figure: what the ‘ethics by design’ approach covers under the Google SMACTR framework.

In our previous article, we highlighted how critical responsible AI is for our civilization and how important it is to take responsibility for the effects that AI creations have on society and people. Explainable AI by itself is not enough; it is just one key requirement for implementing responsible AI practices. The other integral parts that must be applied along the way are the preventive and auditing schemes ensured by the ‘ethics by design’ approach.




Written by KOSA AI

Making technology more inclusive of all ages, genders, and races.
