How can responsible AI change the system?

KOSA AI
Jul 30, 2021


AI is a great tool to boost our productivity and unlock human potential. It is also used to design more efficient systems that provide essential services for people in need.

These days, AI-powered applications are being developed at an astonishing pace. In the coming years, these innovations will change our lives as much as smartphones have over the past decade, but with a greater focus on social impact. Tech giants and renowned educational institutions are spending over $20 billion annually on advancing AI. So, this post will explore why and how responsible AI can change the system, offering insights from leading examples in this space: current use cases and an example of how a company can adopt responsible AI.

What is responsible AI and why does it matter?

Responsible AI is a framework that documents how a specific organization addresses the challenges around artificial intelligence from both a social justice and a legal point of view. In the discussions going around this topic in the tech world, there is a thin line between how responsibility is practiced and how it could be. Our future will be defined by two forces: the advancement of technology and the human direction of, and response to, that advancement. Last week we talked about AI bias and how the development of AI is closely linked to real-life problems happening around the world. Workplace discrimination, unfair treatment, risk management, fallacies in how datasets are divided: all of this is tied to how we as humanity will deal with these issues, and, most importantly, how we will incorporate AI in doing so. This is where responsibility in artificial intelligence comes in. Clearly, we can use the evolution of AI to create new opportunities to improve the lives of people around the world, from business to healthcare to education. However, this raises the question of how best to do so and create fair and trustworthy standards for responsible AI.

Among the leaders taking up this field are Google and Microsoft. Both companies have assembled teams of experts dedicated solely to reviewing whether their AI and advanced technologies fit responsibility criteria. The main guiding principles they incorporate are machine-learning fairness, security, privacy, human rights, social sciences, and cultural contexts. And why are they doing this?

The European Commission recently published its Ethics guidelines for trustworthy AI, stating that “On artificial intelligence, trust is a must, not a nice to have… when the safety and fundamental rights of EU citizens are at stake.” The end goal is a widely adopted governance framework of AI best practices that makes it easier for organizations around the globe to ensure their AI programming is human-centered, interpretable, and explainable. But there is still plenty of work to be done: according to PwC’s annual AI Predictions survey, only 35% of respondents have plans to improve their systems and processes in 2021 to make their AI responsible.

Changing the current system and ensuring responsibility in AI systems is an ongoing process

Let’s start with some questions that one may ask when incorporating responsible AI.

A study shows that, under self-driving algorithms, Black pedestrians were passed by twice as many cars and were 5% more likely to be hit than white pedestrians. Questions in the automotive industry should therefore include: Who is responsible for an accident caused by a self-driving car? How can a car decide when faced with a moral dilemma?
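Disparities like these can be made measurable before deployment. Below is a minimal sketch, with entirely hypothetical data and group labels, of the kind of audit that surfaces such a gap: it computes a pedestrian detector’s miss rate per demographic group and reports the disparity between them.

```python
# Minimal sketch: measuring a detection-rate disparity across groups.
# The labels, predictions, and group names are hypothetical, illustrating
# the kind of audit the study above describes.

def miss_rate(y_true, y_pred):
    """Fraction of actual pedestrians the detector failed to flag."""
    misses = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    actual = sum(y_true)
    return misses / actual if actual else 0.0

# Per group: (ground truth: 1 = pedestrian present, detector output)
data = {
    "group_a": ([1, 1, 1, 1, 1], [1, 1, 1, 1, 0]),  # hypothetical
    "group_b": ([1, 1, 1, 1, 1], [1, 1, 0, 1, 0]),  # hypothetical
}

rates = {g: miss_rate(t, p) for g, (t, p) in data.items()}
gap = max(rates.values()) - min(rates.values())

for g, r in rates.items():
    print(f"{g}: miss rate = {r:.0%}")
print(f"disparity (max - min) = {gap:.0%}")
```

A gap like this would be a signal to rebalance training data or adjust detection thresholds long before the system ever reaches the road.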

Or: how can technological advancement be combined with educational programs so that education workers do not lose their jobs, and so that uniquely human capabilities, such as reading emotions and supporting behavior change, are preserved?

On the other hand, responsible AI models can be used in many diverse ways to help our everyday lives. A great example is a machine learning model originally built for tracking whales and guided by responsible AI principles, which helps researchers, biologists, and field workers better understand the oceans and protect key biodiversity areas.

These cases show that the impact of AI is growing across sectors and societies, and that it is critical to work toward systems that are fair and inclusive for everyone. Understanding and trusting AI systems is essential to ensuring they work as intended.

KOSA AI Responsible AI Self-Assessment Tool

There are great initiatives cracking open the case of responsible AI and contributing to the effort of establishing a universal framework. For instance, KOSA AI has researched and studied the various responsible AI frameworks out there, such as the design assistant proposed by the Responsible Artificial Intelligence Institute and the Accenture Code of Business Ethics. On that basis, KOSA AI created its Responsible AI Self-Assessment tool around the five most important aspects of responsible AI. The guiding principles are accountability, fairness, transparency, safety, and robustness. The tool checks whether the current AI system functions as intended under various scenarios, and it helps organizations quickly check their blind spots and receive a comprehensive guide as an end result.

KOSA AI Responsible AI Self-Assessment Tool: end-results overview
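To give a feel for the general shape of such a self-assessment, here is a minimal sketch. The questions, answers, and threshold are entirely hypothetical illustrations, not KOSA AI’s actual questionnaire: each principle is scored by the share of “yes” answers, and low-scoring principles are flagged as blind spots.

```python
# Minimal sketch of a responsible-AI self-assessment. The questions and
# threshold are hypothetical; KOSA AI's actual tool is more comprehensive.

PRINCIPLES = {
    "accountability": [
        "Is there a named owner for each AI system?",
        "Is there a process for contesting automated decisions?",
    ],
    "fairness": [
        "Are model error rates compared across demographic groups?",
        "Is training data audited for representation gaps?",
    ],
    "transparency": [
        "Can each prediction be explained to an affected user?",
        "Is documentation of the model's purpose and limits published?",
    ],
    "safety": [
        "Is the system tested against adversarial or edge-case inputs?",
        "Is there a rollback plan if the model misbehaves?",
    ],
    "robustness": [
        "Does the system behave as intended under distribution shift?",
        "Is performance monitored continuously in production?",
    ],
}

def assess(answers):
    """Score each principle as the share of questions answered 'yes'."""
    return {
        name: sum(answers[name]) / len(questions)
        for name, questions in PRINCIPLES.items()
    }

# Hypothetical answers for a system under review (True = yes).
answers = {
    "accountability": [True, False],
    "fairness": [True, True],
    "transparency": [False, False],
    "safety": [True, True],
    "robustness": [True, False],
}

scores = assess(answers)
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    flag = "  <- blind spot" if score < 0.5 else ""
    print(f"{name:14s} {score:.0%}{flag}")
```

In this toy run, transparency surfaces as the blind spot to address first, which is exactly the kind of quick signal such a self-assessment is meant to produce.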

It is a fact that AI has the power to change the current system in both good and bad ways, but it is up to human influence to create AI responsibly now so that it can be used positively for generations to come. And whether we like it or not, AI is shaping the future of technology!


KOSA AI

Making technology more inclusive of all ages, genders, and races.