4 Reasons You Should Care About AI Governance Now

KOSA AI
May 7, 2021


Have you seen AI bias make headlines recently, from Amazon’s sexist recruiting tool to medical algorithms showing racial bias? In the age of big data, these risks are increasingly relevant for any company that relies on AI’s automation power. If you’re not yet convinced you should care, here are 4 reasons to act now.

  1. AI Governance is becoming part of the regulatory landscape

Maybe you think this is all too soon. AI is just a buzzword, your organization is only starting its digital transformation, or you’re already a market leader waiting to take the next step. But regulatory changes are happening all over the world. According to the OECD, there are already over 300 AI policy initiatives across 60 countries and territories. Most notably, a leaked draft of the European Commission’s AI regulation reveals that lawmakers are considering fines of up to 4% of global annual turnover or €20M (whichever is greater) for a set of risky AI use cases.

Uber was recently sued by drivers in Europe over automated “robo-firing”, and Uber lost. The challenge relied on Article 22 of the European Union’s General Data Protection Regulation (GDPR), which protects individuals against purely automated decisions that have a legal or similarly significant effect.

If you haven’t yet started thinking about how to scale your AI operations responsibly, you should.

  2. Your customers care

The risk-mitigation factor pales in comparison to the benefits gained from strengthening your company’s core values. Companies that embrace responsible AI can boost their bottom line and differentiate their brand, because your customers care.

Organizations with a strong sense of purpose are more than twice as likely to generate above-average shareholder returns, whereas AI without integrity will fail brands every time. Big players such as Salesforce, Microsoft, and Google have publicized the robust steps they have taken to implement Responsible AI. And for good reason: people weigh ethics three times more heavily than competence when assessing a company’s trustworthiness, according to Edelman research.

And more and more customers are consciously choosing to do business with companies whose demonstrated values align with their own. Companies that deliver a positive impact on society boast higher margins and valuations. Organizations must make sure that their AI initiatives are aligned with what they truly value and the positive impact they seek to make through their purpose.

  3. Your employees care

And it’s not just your customers: your top talent also cares about ethics. In the UK, 1 in 6 elite AI workers has quit their job rather than help build potentially harmful products.

Timnit Gebru, a leading AI ethics researcher, was recently fired from Google after criticizing its approach to minority hiring and the biases built into today’s artificial intelligence systems. Two engineers subsequently quit over the company’s treatment of its top talent.

A well-thought-out responsible AI program not only empowers talent to innovate with human impact front and center in their work, but can also lead to improvements in recruiting and retention. Who would have thought?

  4. More inclusive AI = more revenue

In the US, BCG research shows that companies lost one-third of revenue from affected customers in the year following a data-misuse incident. Lack of trust carries a high financial cost. But don’t wait until disaster strikes: the latent biases in your AI today are already costing you hidden millions.

Most AIs are trained on historical data encoded with bias and missed opportunities. According to Gartner, “By 2022, 85% of AI projects will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them. This is not just a problem for gender inequality — it also undermines the usefulness of AI.”

An algorithm that performs accurately across the whole spectrum of human diversity is also far more likely to deliver superior value to a broader, more varied group of potential customers. Ultimately, expanding your AI program to ensure fairness and transparency will enhance revenue.

AI governance is a team effort. It’s not just up to your data scientists to figure out what safeguards to put in place against undesired outcomes from your AI systems. Stakeholders across the organization need to work together to develop a robust process that ensures accountability, transparency, fairness, safety, and resilience. And always keep humans in the loop, be that your customers, your employees, or the public.
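To make the idea of a safeguard concrete, here is a minimal, illustrative sketch (not KOSA AI’s actual implementation) of the kind of check a cross-functional team might agree to automate: comparing a model’s positive-decision rates across demographic groups and flagging large gaps using the widely cited four-fifths rule of thumb. The data, group names, and threshold below are hypothetical.

```python
# A minimal, illustrative bias check: compare positive-decision rates
# across groups and flag large disparities. All inputs are hypothetical.
from collections import defaultdict

def selection_rates(records):
    """Return the share of positive decisions per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += int(decision)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the best-served group's rate (the 'four-fifths' rule of thumb)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

if __name__ == "__main__":
    # Hypothetical (group, model_decision) pairs, e.g. loan approvals.
    decisions = [("group_a", 1), ("group_a", 1), ("group_a", 0),
                 ("group_b", 1), ("group_b", 0), ("group_b", 0)]
    rates = selection_rates(decisions)
    print(rates)                          # group_a ~0.67, group_b ~0.33
    print(disparate_impact_flags(rates))  # group_b is flagged for review
```

In a real governance process, a check like this would sit alongside human review, documentation, and sign-off from the business owners accountable for the affected decisions.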

At KOSA AI, we design and build an Automated Responsible AI System that helps multiple stakeholders across the organization evaluate and solve critical issues throughout the machine learning process, so they can understand and trust the AI systems they’re building. Request a demo at www.kosa.ai.


Written by KOSA AI

Making technology more inclusive of all ages, genders, and races.
