Continuing from our previous article on why healthcare companies employing AI should comply with the latest regulations, we will now dive deeper into how exactly to achieve compliance. Since algorithmic bias is one of the major risk factors in healthcare AI systems, the proposed AI regulations set out frameworks with mechanisms to identify and manage bias, and discuss challenges and opportunities for increasing fairness in the health-tech industry. According to the latest research by Vantage Market Research, “the total Global Artificial Intelligence in Healthcare Market is estimated to reach USD 95.65 Billion by 2028, up from USD 6.60 Billion in 2021, at a compound annual growth rate (CAGR) of 46.1%”. Hence, there is no doubt we will keep seeing new AI models developed to optimize and improve the outcomes of healthcare interventions.
The compliance angle
As the market grows, it pressures businesses to build an internal governance structure that can handle compliance for newly deployed AI healthcare systems. More and more companies are stepping up their efforts and adopting new tools to minimize bias and make their machine learning models explainable. So, if you are an AI provider and/or your practice uses AI models, the time to act on compliance is now.
Companies can approach this from two angles:
(i) outsource compliance to third-party model-monitoring systems that implement the proposed regulatory frameworks and ensure the models are in check before deployment;
(ii) build an internal team to validate, monitor, and analyze machine learning models for compliance.
Either way, all companies need to establish the technical expertise to operationalize AI governance accurately. This may sound simple, but implementation is an ongoing process that is lengthy and expensive: constant training, validation, analysis, and monitoring, both for compliance with industry-specific regulations and for ethical, responsible AI decisions overall.
As machine learning methods are increasingly used in healthcare settings, there is growing concern that they may reflect and perpetuate systemic inequalities and biases. The first step toward compliance is monitoring training and testing datasets to stop bias at the source, and using that information to intervene as required.
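As a minimal sketch of what such dataset monitoring can look like (this is an illustration, not KOSA's ARAIS pipeline; the `sex` and `diagnosis` column names are hypothetical), one can compare favorable-label rates and group shares across splits:

```python
import pandas as pd

def label_rates_by_group(df: pd.DataFrame, sensitive: str, label: str) -> pd.DataFrame:
    """How often the favorable label occurs within each sensitive group,
    plus each group's share of the dataset."""
    out = df.groupby(sensitive)[label].agg(favorable_rate="mean", count="count")
    out["share"] = out["count"] / len(df)
    return out

# Run on both splits; a large gap between groups, or between train and test,
# is a signal to intervene before training.
# print(label_rates_by_group(train_df, "sex", "diagnosis"))
# print(label_rates_by_group(test_df, "sex", "diagnosis"))
```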
For instance, our case study “Improve skin cancer diagnostics accuracy through algorithmic bias mitigation” proposes a potential solution to algorithmic bias in AI healthcare using an automated responsible AI system. Skin cancer is one of the most widespread types of cancer, and many dermatologists rely on AI systems to detect it and recommend treatment to their patients. However, the existing technology is prone to false negatives due to gender bias: in the largest publicly available collection of quality-controlled dermoscopic images of skin lesions, we found evidence of bias against males with a growth on the torso that ultimately caused the false-negative predictions.
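A disparity like this can be surfaced by slicing the model's error rates by the sensitive attribute. The short sketch below (an illustration only, not the case study's actual evaluation code) computes the false-negative rate per group; a markedly higher rate for one group is exactly the kind of evidence that triggers mitigation:

```python
import pandas as pd

def false_negative_rate_by_group(y_true, y_pred, sensitive) -> pd.Series:
    """False-negative rate (missed positive cases) within each sensitive group."""
    df = pd.DataFrame({"y": y_true, "pred": y_pred, "group": sensitive})
    positives = df[df["y"] == 1].copy()          # only true positive cases can be missed
    positives["missed"] = (positives["pred"] == 0).astype(int)
    return positives.groupby("group")["missed"].mean()
```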
To mitigate this risk and stay in compliance with the proposed regulatory and governance frameworks, we applied the Reweighting bias-mitigation method in KOSA’s Automated Responsible AI System (ARAIS). Reweighting is an effective tool for minimizing bias. The algorithm looks at the sensitive attribute and the true label, and calculates the probability of assigning the favorable label (y = 1) under the assumption that the sensitive attribute and y are independent. It then divides this theoretical probability by the true, empirical probability of the same event in the data; the ratio is the weight. From these two vectors (the sensitive attribute and y), a weight is computed for each observation in the dataset. The new weights are passed back to the XGBoost algorithm through its sample-weight parameter, the model is retrained, and the initial results are compared with the retrained ones. The metrics used to quantify bias are Disparate Impact and Statistical Parity. After mitigating bias in the training dataset with our Data Editor, we observed an uptick in model accuracy and a reduced disparate impact.
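To make the mechanics concrete, here is a minimal, self-contained sketch of the Reweighting step described above, using XGBoost's scikit-learn interface. It illustrates the technique, not ARAIS itself; the synthetic data, column names, and the `privileged="male"` convention are assumptions for the example:

```python
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

def reweighting_weights(sensitive: pd.Series, y: pd.Series) -> np.ndarray:
    """Per-observation weights that make the sensitive attribute and the label
    look independent: w(s, y) = P(S=s) * P(Y=y) / P(S=s, Y=y)."""
    weights = np.ones(len(y))
    for s_val in sensitive.unique():
        for y_val in y.unique():
            mask = ((sensitive == s_val) & (y == y_val)).to_numpy()
            p_joint = mask.mean()                                  # empirical P(S=s, Y=y)
            p_indep = (sensitive == s_val).mean() * (y == y_val).mean()
            if p_joint > 0:
                weights[mask] = p_indep / p_joint                  # theoretical / empirical
    return weights

def disparate_impact(y_pred: pd.Series, sensitive: pd.Series, privileged) -> float:
    """Ratio of favorable-outcome rates, unprivileged over privileged (1.0 is parity)."""
    return y_pred[sensitive != privileged].mean() / y_pred[sensitive == privileged].mean()

def statistical_parity(y_pred: pd.Series, sensitive: pd.Series, privileged) -> float:
    """Difference in favorable-outcome rates, unprivileged minus privileged (0.0 is parity)."""
    return y_pred[sensitive != privileged].mean() - y_pred[sensitive == privileged].mean()

# Synthetic stand-in data: the real case study uses features derived from
# dermoscopic images; "sex" and the label here are purely illustrative.
rng = np.random.default_rng(0)
n = 1_000
sex = pd.Series(rng.choice(["male", "female"], size=n))
X = pd.DataFrame(rng.normal(size=(n, 5)), columns=[f"f{i}" for i in range(5)])
y = pd.Series(((X["f0"] + 0.8 * (sex == "male") + rng.normal(0, 1, n)) > 1.0).astype(int))

weights = reweighting_weights(sex, y)
model = XGBClassifier(n_estimators=50)
model.fit(X, y, sample_weight=weights)           # weights enter via sample_weight
pred = pd.Series(model.predict(X), index=X.index)

print("Disparate impact:", disparate_impact(pred, sex, privileged="male"))
print("Statistical parity difference:", statistical_parity(pred, sex, privileged="male"))
```

Comparing these two metrics before and after retraining with the weights is what quantifies whether the mitigation actually reduced the disparity.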
No matter the trend, keep AI human
Big tech companies are increasingly joining forces to develop more compliant AI systems, especially within healthcare, as leaders across industry, academia, and technology form coalitions to implement responsible AI and keep up with regulations. But the AI journey in healthcare is at a crossroads: building responsible AI is not just a performative act to meet compliance regulations. It is intersectional, continuous, multi-stakeholder work that involves evaluating and mitigating bias against different ages, genders, ethnicities, and any other sensitive attribute; understanding the human impact of the model; complying with industry-specific regulations; and constantly monitoring the model before and after deployment.
Conclusion
Moving forward, artificial intelligence will play an essential role in the world of healthcare. If algorithmic bias is left unattended, it will persist and resurface in new contexts, undermining the intent of compliance regulations. As such, it is crucial to build and implement fair AI models that minimize bias and promote equality. This includes developing and distributing best practices in bias detection, as well as creating targeted programs to mitigate these issues and start ticking the boxes for AI compliance.