Explainable AI in medical imaging: An overview for clinical practitioners – Saliency-based XAI approaches

Abstract

Over the last decade, Artificial Intelligence (AI) has achieved significant success and promising results across many fields of application, and it has also become an essential part of medical research. Improving data availability, coupled with advances in high-performance computing and innovative algorithms, has increased AI's potential in many respects. Because AI is rapidly reshaping research and promoting the development of personalized clinical care, its implementation creates an urgent need for a deep understanding of its inner workings, especially in high-stakes domains. However, such systems can be highly complex and opaque, limiting the possibility of an immediate understanding of their decisions. In the medical field, these decisions carry high impact: physicians and patients can only fully trust AI systems that reasonably communicate the origin of their results, which simultaneously enables the identification of errors and biases. Explainable AI (XAI), an increasingly important field of research in recent years, promotes the formulation of explainability methods and provides rationales that allow users to comprehend the results generated by AI systems. In this paper, we investigate the application of XAI in medical imaging, addressing a broad audience, especially healthcare professionals. The content focuses on definitions and taxonomies, standard methods and approaches, advantages, limitations, and examples representing the current state of research on XAI in medical imaging. This paper focuses on saliency-based XAI methods, in which the explanation is provided directly on the input data (the image) and which are therefore of special importance in medical imaging.
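The core idea behind the saliency-based methods the abstract refers to is to score how strongly each input pixel influences the model's output, yielding a heatmap on the image itself. A minimal sketch of this idea, under the assumption of a toy linear "model" standing in for a trained network (all names here are illustrative, not from the paper):

```python
import numpy as np

def model_score(image, weights):
    """Toy scalar class score: dot product of flattened image and weights.
    Stands in for the class logit of a trained network."""
    return float(image.ravel() @ weights.ravel())

def saliency_map(image, weights, eps=1e-4):
    """Approximate |d score / d pixel| via central finite differences,
    producing a per-pixel sensitivity map the same shape as the image."""
    sal = np.zeros_like(image, dtype=float)
    it = np.nditer(image, flags=["multi_index"])
    for _ in it:
        idx = it.multi_index
        up = image.astype(float).copy()
        dn = image.astype(float).copy()
        up[idx] += eps
        dn[idx] -= eps
        sal[idx] = abs(model_score(up, weights)
                       - model_score(dn, weights)) / (2 * eps)
    return sal

rng = np.random.default_rng(0)
img = rng.random((4, 4))   # stand-in for a medical image patch
w = rng.random((4, 4))
sal = saliency_map(img, w)
# For this linear toy model the saliency equals |weights| exactly.
```

In practice, gradient-based saliency methods compute the same quantity analytically via backpropagation rather than finite differences; the sketch only illustrates why the resulting map lives in the input (image) space.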

Publication
European Journal of Radiology
Katarzyna Borys
Researcher in the first cohort

My research interests include Deep Learning, Computer Vision, Radiomics, and Explainable AI.

Nicole Krämer
Principal Investigator

My research interests include Psychology, Human-Computer Interaction, and Explainable AI.

Christoph M. Friedrich
Co-Speaker

My research interests include Deep Learning, Computer Vision, Radiomics, and Explainable AI.

Felix Nensa
Speaker

My research interests include medical digitalization, computer vision and radiology.
