Why AI-based image analysis software needs Explainable AI

Photo by Irwan iwe on Unsplash

This blog post is part of our series “KOSA’s Multi-Stakeholder Explainable AI”, where we dive deep into questions such as “Why is the model making decisions the way it does?” and “How can all stakeholders in one organization understand model behavior without getting too technical?”, using different Explainable AI techniques. Stay tuned for the coming posts. If you want to learn more about our XAI tool, you can check our solutions page.

The advent of artificial intelligence has led researchers to explore ways to implement this technology in medical imaging. There are many reasons why a patient might need medical imaging. Whether it’s for cardiac events, fractures, neurological conditions, or thoracic complications, AI can speed up diagnosis and help identify treatment options.

Recently, research organizations, clinics, and universities have been pursuing the expansion of AI in medical imaging to answer the need for more efficient outcomes. A telling example is the COVID-19 pandemic, when hospitals were overwhelmed with patients and scans, resulting in misdiagnoses and patients not receiving care in time. By implementing AI in medical imaging for such cases, the technology can enhance medical screenings, improve precision medicine, assess patient risk factors, and lighten the load for clinicians. AI-assisted medical imaging is now considered one of the most promising techniques to support clinical decisions.

What is medical imaging?

Medical imaging is the use of imaging technologies to diagnose and treat diseases. It is a broad term that covers many different types of diagnostic tests and procedures. Medical imaging technology has advanced over the years, with new options for patients who need an MRI, CT scan, or X-ray. These new technologies offer more accurate diagnoses and faster treatment plans.

The use of machine learning algorithms in medical image analysis has expanded widely to most medical departments that rely on images, including radiology, pathology, dermatology, cardiology, gastroenterology, and ophthalmology.

You can read some use cases of AI in medical imaging here.

AI systems use different methods to analyze medical images. At KOSA AI, we work on image models that use neural networks for image classification or object detection, and on video models that work similarly, treating each video frame as a separate image.

KOSA AI Explainability tool — Image data analysis

Examples are:

- Convolutional neural networks (CNNs), which are often applied to image data, labeling pixels with relative importance via heatmaps. Among the many deep learning algorithms, the CNN, which performs strongly in image pattern analysis, has proven beneficial for analyzing medical images with complex patterns.

- Computed tomography (CT), used alongside magnetic resonance imaging, ultrasound, pathology images, fundus images, and endoscope data as inputs to machine learning algorithms that diagnose or classify the severity of a disease.
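The heatmap idea mentioned above can be sketched with a simple model-agnostic technique, occlusion sensitivity: hide one region of the image at a time and record how much the model's confidence drops. Regions whose occlusion hurts the score most are the most important. The `predict` function below is a hypothetical stand-in for a trained classifier, not a real CNN.

```python
def predict(image):
    """Toy classifier: the 'confidence' is the mean intensity of a fixed
    3x3 region of interest (a stand-in for a real CNN's class score)."""
    roi = [image[r][c] for r in range(2, 5) for c in range(2, 5)]
    return sum(roi) / len(roi)

def occlusion_saliency(image, patch=2, baseline=0.0):
    """Slide an occluding patch over the image; the drop in the model's
    score when a region is hidden is that region's importance."""
    h, w = len(image), len(image[0])
    base_score = predict(image)
    heatmap = [[0.0] * w for _ in range(h)]
    for top in range(h - patch + 1):
        for left in range(w - patch + 1):
            # Copy the image and zero out one patch.
            occluded = [row[:] for row in image]
            for r in range(top, top + patch):
                for c in range(left, left + patch):
                    occluded[r][c] = baseline
            drop = base_score - predict(occluded)
            # Each pixel keeps the largest drop seen among patches covering it.
            for r in range(top, top + patch):
                for c in range(left, left + patch):
                    heatmap[r][c] = max(heatmap[r][c], drop)
    return heatmap

# Example: a 7x7 "image" with one bright pixel inside the region of interest.
img = [[0.0] * 7 for _ in range(7)]
img[3][3] = 1.0
saliency = occlusion_saliency(img)
# Pixels inside the region of interest get higher importance than pixels far outside it.
```

Real tools use gradient-based attributions (e.g. Grad-CAM) on the CNN itself, but the occlusion sketch captures the same intuition: the heatmap shows which pixels the prediction depends on.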

AI helps medical imaging diagnosis and analysis

At Tulane University, researchers discovered that AI can detect and diagnose colorectal cancer by analyzing tissue scans as well as, or better than, pathologists. The purpose of this study was to determine whether artificial intelligence could be a tool to help pathologists keep up with the rising demand for their services.

According to the researchers, pathologists regularly evaluate and label thousands of histopathology images to identify whether a patient has cancer. However, their average workload has significantly increased, which could lead to unintended misdiagnoses.

But how can we make sure that AI only helps make the right decisions and works solely to save time and improve efficiency? In other words, applying deep neural networks to high-stakes cases such as medical imaging may cause distrust in the system, and ethical questions may be raised about how and why machine learning algorithms can make crucial decisions about a patient’s life.

Image source: https://www.nature.com/articles/s41591-021-01595-0

Here comes Explainable AI (XAI)

Explainable AI

The challenges that the use of AI in medical imaging poses are:

  • AI bias in image-based models, and how biased assumptions might influence model outcomes for different demographic groups.
  • Availability and inclusivity of the data that the algorithms are trained on.
  • Compliance with existing and future AI regulations, as the healthcare industry is heavily regulated.

For these reasons, conversations have shifted towards integrating XAI methods and practices that will allow clinicians to develop understanding and appropriate trust in the AI tools they use.

At KOSA AI, we partner with clinics and AI creators to provide an XAI solution that identifies the highest-weighted pixels within the heatmaps produced by the convolutional neural network (CNN) approach, capturing the most important image features and making the decision-making more transparent. This, in turn, enables easier model monitoring, debugging, optimization, and performance tracking throughout the lifecycle of an AI model for medical imaging.
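As a rough illustration of the "highest-weighted pixels" step (a hypothetical sketch, not KOSA's actual implementation), the idea is simply to rank the pixels of a saliency heatmap by importance score and surface the top few, so they can be highlighted for a clinician. The heatmap values below are made up.

```python
def top_k_pixels(heatmap, k=3):
    """Return (row, col, score) triples for the k highest-weighted pixels."""
    scored = [
        (row, col, heatmap[row][col])
        for row in range(len(heatmap))
        for col in range(len(heatmap[0]))
    ]
    # Sort by importance score, highest first.
    scored.sort(key=lambda t: t[2], reverse=True)
    return scored[:k]

# Example heatmap: importance scores from some attribution method.
heatmap = [
    [0.1, 0.2, 0.1],
    [0.3, 0.9, 0.4],
    [0.1, 0.7, 0.2],
]
print(top_k_pixels(heatmap, k=2))  # → [(1, 1, 0.9), (2, 1, 0.7)]
```

In practice the selected coordinates would be overlaid on the original scan, turning an opaque model score into a visual cue a clinician can check against their own reading.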

KOSA AI Explainability tool — Model Explainer using heatmap

Conclusion

AI has been able to help with many aspects of diagnosis and treatment, making it easier for clinicians to find the right diagnosis for their patients. Advances in AI will continue to improve outcomes for patients. This calls for wider use of tools such as XAI that ensure the utilization of AI does not come at the cost of patients’ lives or livelihoods.

Want to know more about how KOSA can help you mitigate the potential risks of AI systems in medical imaging while still reaping all the benefits?

Request a Demo.


KOSA AI

Making technology more inclusive of all ages, genders, and races.