When is AI Transparent enough? WSAI22’s Responsible AI track

KOSA AI
5 min read · Oct 17, 2022

Last week the world’s leading and largest AI summit, World Summit AI, gathered the global AI ecosystem of enterprises, big tech, startups, investors and scientists. The summit is nothing new: it has been held every October in Amsterdam with its main mission:

  • to tackle head-on the most burning AI issues,
  • to set the global AI agenda,
  • and to provide a platform to people whose research and work have a real impact on the future of our world.

As in previous years, an incredible lineup of experts in the field of Responsible AI steered the day 1 conversations around ethics, trust and ways forward in revolutionizing the AI industry.

KOSA AI was also part of the RESPONSIBLE AI: GETTING IT RIGHT track, with Layla LI, KOSA’s CEO, speaking on a panel discussion that touched on an intriguing question: “When is AI transparent enough?”

What is Transparent AI?

Transparency in artificial intelligence systems allows us to understand the ethical implications of the algorithms that increasingly shape our lives. It is not just about understanding how AI works internally, but also about understanding how it impacts individuals and society.

It is therefore key to building trust in AI technology: if people know how an AI system makes its decisions, they are more likely to accept a decision made by that system.

There are two types of transparency in AI:

- Transparency of inputs: This means that the input data used to generate a result are visible to those who receive that result.

- Transparency of outputs: This means that the result generated by an AI system and the factors used to generate it are visible.
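To make the distinction concrete, here is a minimal Python sketch of a decision object that carries both kinds of transparency. The names, the loan scenario and the scoring rule are invented purely for illustration and are not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class TransparentDecision:
    """A result bundled with the data and factors behind it (illustrative only)."""
    outcome: str   # what the system decided
    inputs: dict   # transparency of inputs: the data used to generate the result
    factors: dict  # transparency of outputs: the factors behind the result

def decide_loan(applicant: dict) -> TransparentDecision:
    # Toy scoring rule, purely hypothetical.
    score = 0.6 * (applicant["income"] / 100_000) + 0.4 * (applicant["credit_score"] / 850)
    return TransparentDecision(
        outcome="approved" if score >= 0.5 else "denied",
        inputs=applicant,  # the recipient can see exactly which data were used
        factors={"income_weight": 0.6, "credit_score_weight": 0.4, "score": round(score, 3)},
    )

print(decide_loan({"income": 55_000, "credit_score": 700}))
```

Anyone receiving such a decision could inspect both what went in and why the outcome came out the way it did.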

Why is there a need for Transparent AI?

IBM research found that 56% of companies are building AI models on data that has inherent bias (social, economic, and so on). Another global survey on the state of AI shows that 50% of AI users encountered at least two incidents in the last 2–3 years due to bias in AI systems.

There are multiple reasons why we need transparency in AI, but the two most relevant to our conversation are:

  • Transparency mitigates systems’ risks and builds trust and equity into them

The statistics above raise ethical concerns and put pressure on AI creators and users to explain how the decisions of AI algorithms are made, whether they themselves trust those decisions in the first place, and how the decisions will affect society.

  • Transparency helps companies comply with AI regulations

Under the EU’s GDPR, companies are required to have explainable AI, validate their models, and be able to provide transparency to gain trust.

Moreover, the draft EU AI Act requires companies to notify end-users of the capabilities and limitations of an AI system, including its intended purpose, its accuracy and robustness, how it may cause harm, and so on.
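As a rough illustration of what such an end-user notice could look like in practice, here is a hypothetical Python sketch; the field names and example values are our own inventions, not wording from the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemNotice:
    """Hypothetical end-user notice mirroring the draft EU AI Act's themes."""
    intended_purpose: str
    accuracy: str
    robustness: str
    known_limitations: list[str]
    potential_harms: list[str]

    def render(self) -> str:
        # Produce a plain-text notice an end-user could actually read.
        return "\n".join([
            f"Intended purpose: {self.intended_purpose}",
            f"Accuracy: {self.accuracy}",
            f"Robustness: {self.robustness}",
            "Known limitations: " + "; ".join(self.known_limitations),
            "Potential harms: " + "; ".join(self.potential_harms),
        ])

# Example values are fabricated for illustration only.
notice = AISystemNotice(
    intended_purpose="Pre-screening of consumer loan applications",
    accuracy="87% agreement with human underwriters on a held-out test set",
    robustness="Performance degrades on applicants with thin credit files",
    known_limitations=["Trained on data from one region only"],
    potential_harms=["May deny credit unfairly to under-represented groups"],
)
print(notice.render())
```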

On the question of why we need transparency in AI, our CEO Layla LI remarked during the panel discussion that many people impacted by algorithmic decision-making aren’t even aware of it.

‘For example, people apply for a bank loan, but oftentimes they are not aware of who makes the final decision on whether or not they will get the loan, why the decision is what it is, and who decides on the factors affecting that decision,’ she says.

This simple example gets to the heart of why such a question is being discussed at a world summit with the most influential people in the industry.

At the event it was noted that “transparency has to be a wider frame than just knowing everything about a system”. When people say they want AI systems to be transparent, they mean they want to know when AI decisions interact with people, how the results generated can be justified, and whether there is a legal system in place to support them.

But can transparency solve everything? At a time when new AI tools are constantly being created in ever shorter cycles, it is entirely possible for a fully transparent AI system to still be harmful. To guard against this, AI creators, organizations and users need to continuously discuss and work together to build guardrails grounded in human wisdom and morals into AI systems. Too much trust, however, can also have negative consequences.

The panel discussion then shifted to how such conversations can become confusing and mislead people into believing that AI is becoming conscious or sentient.

Artificial Intelligence is not conscious or sentient

To start with, we need to distinguish between intelligence, i.e. the ability to solve problems, and consciousness. The two are not the same: intelligence is a property of an agent; consciousness is a property of a human being.

As Dr. van den Hoven mentioned on the panel, “the consciousness discussion is seductive because humans want to offload their responsibility onto the machine”. Assuming a machine’s judgment is better than a person’s is called automation bias, and it obscures the fact that machine learning makes predictions and decisions only as good as the data it is fed and the parameters that are set.

So AI must be steered deliberately by humans, not the other way around, which is why it is time to make AI work for us instead of us working for AI.

As Layla’s final words on the panel put it: “AI only mimics what we tell it to do. This is an opportunity for us to start creating responsible AI and to hold people accountable for the systems they put out there.”

Conclusion

The topic of transparency is still an ongoing debate and a work in progress, as it touches on critical issues in the tech industry and the rise of AI. But when is AI transparent enough? That is a matter of mutual cooperation: deciding on the metrics that determine whether AI is being developed in a safe and ethical way, how to operationalize them, and what the legal channels are for enforcing transparency obligations in the EU and globally.

