AI has become widely adopted not only in the tech world but across every industry. Following our last article on why you should care about AI governance now, it is the perfect time to catch the momentum and expose a real trouble: AI bias. In this post, you can read about what the term AI bias means and see two real-life examples.
How does AI become biased?
Incomplete training data and prejudiced assumptions made during algorithm development both contribute to biased AI algorithms. The root causes of AI bias are many and deeply embedded in our society, which makes AI bias a complex issue to tackle. Two of the major problems worth mentioning are:
Lack of complete training data- Data used to create AI algorithms may be incomplete, which makes it unrepresentative and biased. For example, imagine a Silicon Valley-based tech company develops AI software and hands it to a public enterprise in Africa. The software has incomplete data on the specific demographic trends of Africa, so it does not represent the whole population, creating bias in the AI system.
Cognitive bias- These are unconscious assumptions about a person or a group based on their origin and/or societal status. One type of cognitive bias is prejudice bias, which can slip into AI training data in the form of a stereotype held by the data scientist and/or AI expert. A case in point: COMPAS, an algorithm widely used in the US to guide sentencing by predicting the likelihood that a criminal will reoffend, was found to score black defendants at a 43% higher risk of reoffending than white defendants.
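The incomplete-data problem above can be sketched in a few lines of code. This is a toy illustration with entirely made-up numbers, not a real model: a one-feature threshold "classifier" is fit on data that is roughly 90% group A, and as a result it misclassifies the underrepresented group B far more often.

```python
# Toy illustration with made-up numbers: a training set dominated by
# group A produces a decision threshold that fits A but not B.
import statistics

# (score, true_label) pairs per group -- purely hypothetical data
group_a = [(0.2, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
group_b = [(0.2, 0), (0.3, 0), (0.45, 1), (0.5, 1)]

# The training data is ~90% group A, so the "learned" threshold
# (here simply the mean score) reflects A's distribution
train = group_a * 9 + group_b
threshold = statistics.mean(score for score, _ in train)

def error_rate(samples, thr):
    """Fraction of samples the threshold rule gets wrong."""
    wrong = sum((score >= thr) != bool(label) for score, label in samples)
    return wrong / len(samples)

err_a = error_rate(group_a, threshold)
err_b = error_rate(group_b, threshold)
print(f"error on group A: {err_a:.0%}, error on group B: {err_b:.0%}")
```

Here the model looks accurate overall because group A dominates the evaluation, which is exactly how this kind of bias hides in aggregate metrics.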
Artificial Intelligence bias is human-made
But why are some groups of people receiving different treatment, even from AI systems? Well, the data used to train the algorithms in software is based on historical data sets that reflect the bias accumulated throughout history. Although the bias present in AI systems today may be unintentional, it poses serious consequences for the impact that technology has.
Let’s go back to 1988 for a minute, shall we? A British medical school was found guilty of discrimination because its computer program was biased against applicants with non-European names. Thirty-something years later, AI has the potential to be more open to equality, but there are still challenges to face so that history does not repeat itself.
AI bias can be mitigated and reduced with the help of more inclusive datasets and through collaborative and diverse efforts. But applying AI in our day-to-day lives alone won’t solve our society’s problem of biased treatment of underrepresented groups.
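As a sketch of what "more inclusive datasets" can mean in practice, the snippet below (all group names and shares are hypothetical) audits a training set by comparing each group's share of the data against its share of the population and flags the groups that are underrepresented:

```python
# Hypothetical shares: compare each group's slice of the training data
# with its slice of the population and flag large gaps.
population_share = {"group_a": 0.5, "group_b": 0.5}
training_share = {"group_a": 0.9, "group_b": 0.1}

def underrepresented(pop, train, tolerance=0.5):
    # A group is flagged when its training share falls below
    # `tolerance` times its population share.
    return [g for g in pop if train.get(g, 0.0) < tolerance * pop[g]]

flagged = underrepresented(population_share, training_share)
print("underrepresented groups:", flagged)
```

A simple audit like this only surfaces representation gaps; closing them still requires collecting or reweighting data for the flagged groups.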
If we talk about gender justice, can we say that AI is enhancing gender bias? Women are 47% more likely to be seriously injured and 17% more likely to die than men in a car accident, because the data that seat-belt safety designs are based on lacks measurements of pregnant or plus-size women’s bodies. Racial bias is also very present in one of the pivotal parts of our survival: the healthcare industry. Hospital software algorithms contain AI bias, and only 17.7% of black patients receive extra care. The figures even double when AI gender bias and AI racial bias intersect, as AI systems have higher error rates for women, especially darker-skinned women.
Reality speaks for itself: two examples of AI gender bias and AI racial bias
Not to throw shade, but Amazon’s case has become a leading example of why to start eliminating AI bias. In 2015, it was found that the algorithm the company used for hiring was biased against women. One might ask: how could this slip through the biggest tech powerhouse? Well, the algorithmic bias was present because the hiring dataset was filled predominantly with men’s resumes.
Meanwhile, facial recognition AI systems falsely identify African-American and Asian faces 10 to 100 times more often than Caucasian faces, which is worrying given that this technology is a crucial tool in the criminal justice system.
All in all, there are huge data gaps that still need to be filled so that bias tied to variables such as gender and/or race is mitigated and removed from AI. Should we redefine training data frameworks to be more representative, or take a whole new holistic approach to addressing bias injustices in general? One thing is certain: AI can be a stepping stone toward changing our world for good.
If you want to learn more about how your technology can mitigate AI bias, click here to see KOSA AI’s automated system solution.
Read more about AI bias:
- EU regulation sets fines of €20M or up to 4% of turnover for AI misuse
- In-depth guide to AI bias in the enterprise
- The role of bias in artificial intelligence
- Feminist perspective on dealing with bias in artificial intelligence
- Example of AI bias in the healthcare industry: The case of Babylon
- Why fighting algorithmic bias needs to be ‘a priority’
- More about racial discrimination in face recognition technology