Breaking down the AI healthcare regulations
In the series “Breaking down the EU AI regulations” we have so far looked at AI regulation in general and at what the EU is doing on this topic. To recap: AI in the EU is currently governed mostly by frameworks and rules that are still proposals and not yet in force. Companies involved in creating or deploying AI will have to inform their users that AI systems (or AI algorithms) are in use, and be able to explain why their AI models make certain decisions. The proposed AI Act sets out rules for placing new AI-enabled systems on the market: which AI systems cannot be marketed at all, what information must be published so that users know about an AI system and its operational capabilities, what technical requirements apply, what requirements the training datasets must meet, and what obligations remain after the systems are put into circulation. Read more about this in our previous article.
In this article we dive deeper into the EU AI regulations for the healthcare industry specifically.
Regulating AI in healthcare
Healthcare is a heavily regulated industry, and AI systems deployed in it must comply with those regulations as well. As AI is increasingly developed for healthcare applications, the regulatory bar is high, because these systems directly affect, and can save, human lives.
There are plenty of documented AI bias cases in which a patient’s diagnosis and treatment are affected by their gender or ethnic background, exacerbating existing inequalities in healthcare. Only recently it was reported that there is potential bias in the design and use of medical devices, and that AI systems being developed to diagnose skin cancer risk being less accurate for people with dark skin. This illustrates a broader problem: the data used to develop healthcare AI systems often underrepresents certain groups, making the resulting systems less accurate for those people.
To deal with this, experts point out that the power of artificial intelligence must be used to change the healthcare industry for the better, with machine learning built into the AI algorithms so that they can learn, evolve, and improve their performance from the mistakes they make. However, as the bias examples above show, this is a complex and sensitive challenge that demands proper regulation and rules to make sure it is done responsibly.
The EU regulations addressing this so far make recommendations on how these issues should be tackled in the creation of medical AI devices, from design to use: how to prevent defective diagnoses, rule out unacceptable uses of personal data, and eliminate bias built into healthcare algorithms.
Responsible and compliant healthcare AI
Our analysis of the current AI regulations relevant to the healthcare industry, and of how to ensure responsible AI practices throughout, identifies three main regulatory frameworks: the EU Medical Devices Regulation, the General Data Protection Regulation (GDPR), and the proposed EU AI regulations. All three must complement each other to be effective in practice. Within the EU AI regulations, the central concept is the three levels of AI risk: unacceptable-risk AI systems, high-risk AI systems, and limited- and minimal-risk AI systems.
Healthcare AI would generally fall into the high-risk category and would need to fulfill the following criteria to achieve regulatory approval:
- Detailed technical documentation providing all information necessary on the system and its purpose, for authorities to assess its compliance
- Adequate risk assessment and bias mitigation systems
- High quality of the datasets feeding the system to reduce risks and discriminatory outcomes
- Logging of activity to ensure traceability of results
- Clear and adequate information to the user
- Appropriate human oversight measures to reduce risk
- High level of robustness, security, and accuracy
Accordingly, to comply with the regulations and ensure responsible AI systems, healthcare companies and all providers of AI healthcare systems must also supply documentation describing the algorithms used in the AI system and the logic behind them. Furthermore, under the GDPR, individuals have the right not to be subject to decisions based solely on automated processing: there must be meaningful human involvement. So, if a medical device uses algorithms to make decisions, the manufacturer of the device (or its users, such as hospitals) will have to explain to the data subjects the logic behind the decisions the algorithm makes. This also requires ongoing monitoring of AI systems: if a high-risk AI system starts to malfunction, its operator needs to detect and fix the problem.
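The regulations do not prescribe how this ongoing monitoring is implemented. As a minimal illustrative sketch (not any particular vendor’s method), an operator could track rolling accuracy on cases whose outcomes are later clinically confirmed and raise an alert when performance drops below the level documented at approval time:

```python
from collections import deque

class PerformanceMonitor:
    """Minimal sketch of post-market monitoring for a high-risk AI system:
    track rolling accuracy on cases with confirmed outcomes and flag when
    it falls below the baseline documented in the technical file."""

    def __init__(self, baseline_accuracy, tolerance=0.05, window=100):
        self.baseline = baseline_accuracy
        self.tolerance = tolerance
        # 1 = prediction matched the confirmed outcome, 0 = it did not
        self.outcomes = deque(maxlen=window)

    def record(self, prediction, confirmed_label):
        self.outcomes.append(1 if prediction == confirmed_label else 0)

    def alert(self):
        """True when rolling accuracy drops below baseline minus tolerance."""
        if not self.outcomes:
            return False
        rolling = sum(self.outcomes) / len(self.outcomes)
        return rolling < self.baseline - self.tolerance
```

In practice the operator would also log each prediction with a timestamp and model version (the traceability requirement above), so that an alert can be traced back to the cases and model that produced it.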
Furthermore, picture 1 above shows the most important processes that AI data should undergo to ensure it is responsible and compliant. These processes are also highlighted in our case study “Improve skin cancer diagnostics accuracy through algorithmic bias mitigation”, where we explore how the reweighting bias mitigation method in KOSA’s Automated Responsible AI System (ARAIS) is used to detect and mitigate bias in the training dataset of a skin cancer AI diagnostics system.
Read the full case study here.
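ARAIS itself is not shown here, but the general idea behind reweighting can be sketched with the classic reweighing technique of Kamiran and Calders: each training sample gets a weight that makes group membership and label statistically independent in the weighted dataset, so the model no longer learns the spurious group–label correlation. The skin-tone example below is hypothetical:

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Per-sample weights: expected joint count under independence of
    group and label, divided by the observed joint count."""
    n = len(groups)
    group_counts = Counter(groups)
    label_counts = Counter(labels)
    joint_counts = Counter(zip(groups, labels))
    weights = []
    for a, y in zip(groups, labels):
        expected = group_counts[a] * label_counts[y] / n
        observed = joint_counts[(a, y)]
        weights.append(expected / observed)
    return weights

# Hypothetical toy dataset: positive diagnoses are overrepresented
# for one skin-tone group, so the model could learn that correlation.
groups = ['light', 'light', 'light', 'light', 'dark', 'dark', 'dark', 'dark']
labels = [1, 1, 1, 0, 1, 0, 0, 0]
w = reweighing_weights(groups, labels)
```

After reweighting, the weighted counts of every (group, label) combination are equal, and the weights can be passed to most training routines (e.g. a `sample_weight` argument) so underrepresented combinations count proportionally more during training.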
Challenges of regulating AI in healthcare and conclusion
According to EY, the regulation of AI in healthcare is still in its infancy and regulators are playing catch-up; EY stresses that concrete laws must be in place that unlock AI’s full potential while fully protecting the rights of the individual. With analysts projecting the healthcare AI market to be worth more than $61 billion within the next decade, the development of AI in healthcare, together with its regulation, must ensure that the systemic bias inherited by healthcare AI systems is eliminated; doing so is both necessary and, ultimately, life-saving.
This blog post is part of a broader series called ‘Breaking down the AI regulations’, laying out why regulations are present and important within the AI industry. The next article will cover the EU AI regulations for the finance sector with an in-depth analysis. Read all the previous articles here.
Sources
- Improve skin cancer diagnostics accuracy through algorithmic bias mitigation (KOSA AI Case Study)
- AI projects to tackle racial inequality in UK healthcare, says Javid
- The EU Is Proposing Regulations On AI — And The Impact On Healthcare Could Be Significant
- Top Ten Legal Considerations for Use and/or Development of Artificial Intelligence in Health Care