Our team has contributed to a pivotal study, "Quantitative Evaluation of Saliency-Based Explainable Artificial Intelligence (XAI) Methods in Deep Learning-Based Mammogram Analysis," which assesses the effectiveness of XAI techniques in breast cancer detection.
Overview:
Explainable AI (XAI) is becoming crucial for interpreting the decisions of deep learning models, particularly in medical imaging. This study quantitatively evaluates three popular saliency-based XAI methods: Gradient-weighted Class Activation Mapping (Grad-CAM), Grad-CAM++, and Eigen-CAM.
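As background for readers new to these methods: Grad-CAM weights a convolutional layer's activation maps by the spatially averaged gradients of the target class score and passes the weighted sum through a ReLU, yielding a coarse heatmap of class-relevant regions. The sketch below is a minimal, generic PyTorch implementation of that idea, not the authors' code; model and target_layer are placeholders for whatever classifier and final convolutional layer are used.

```python
import torch
import torch.nn.functional as F

def grad_cam(model, image, target_layer, class_idx=None):
    """Minimal Grad-CAM: weight the target layer's activations by the
    spatially averaged gradients of the class score, then apply ReLU."""
    activations, gradients = [], []

    def fwd_hook(module, inp, out):
        activations.append(out)

    def bwd_hook(module, grad_in, grad_out):
        gradients.append(grad_out[0])

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    try:
        logits = model(image)                 # image: (1, C_in, H_in, W_in)
        if class_idx is None:
            class_idx = logits.argmax(dim=1).item()
        model.zero_grad()
        logits[0, class_idx].backward()

        acts = activations[0]                 # (1, C, H, W)
        grads = gradients[0]                  # (1, C, H, W)
        weights = grads.mean(dim=(2, 3), keepdim=True)  # per-channel importance
        cam = F.relu((weights * acts).sum(dim=1))       # (1, H, W)
        cam = cam / (cam.max() + 1e-8)        # normalize to [0, 1]
    finally:
        h1.remove()
        h2.remove()
    return cam
```

In practice, the resulting low-resolution map is upsampled to the input size before being overlaid on the mammogram. Grad-CAM++ and Eigen-CAM differ mainly in how the channel weights are computed.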
Methods and Results:
Three radiologists outlined ground-truth regions indicating cancer presence on a balanced, three-center dataset of 1,496 mammograms. The study employed a modified, pre-trained deep learning model for detection and measured how well each method's saliency maps aligned with the radiologist-drawn boundaries using the Pointing Game metric. The resulting Pointing Game scores were 0.41 for Grad-CAM, 0.30 for Grad-CAM++, and 0.35 for Eigen-CAM, indicating only moderate success in localizing cancerous lesions.
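For readers unfamiliar with the metric: the Pointing Game counts a "hit" whenever the maximum of a saliency map falls inside the ground-truth region, and the score is the hit rate over the dataset. Below is a minimal generic sketch of the metric, not the authors' code; it assumes each saliency map has been upsampled to image resolution and each annotation is a binary mask of the same shape.

```python
import numpy as np

def pointing_game_score(saliency_maps, gt_masks):
    """Pointing Game: a 'hit' when the saliency map's maximum lies inside
    the ground-truth region; the score is hits / total."""
    hits = 0
    for sal, mask in zip(saliency_maps, gt_masks):
        y, x = np.unravel_index(np.argmax(sal), sal.shape)
        hits += bool(mask[y, x])   # mask: binary array, True inside the lesion
    return hits / len(saliency_maps)
```

Under this definition, Grad-CAM's score of 0.41 means its hottest point landed inside a radiologist-drawn lesion boundary in roughly 41% of cases.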
Conclusion:
Although saliency-based XAI methods offer some level of interpretability, they often do not fully clarify how decisions are made within deep learning models. The study underscores the need for further refinement in XAI methods to enhance their utility and reliability in clinical settings.
For those in the field of medical imaging and AI, this study presents significant insights into the current capabilities and limitations of XAI methods.
https://www.ejradiology.com/article/S0720-048X(24)00072-X/fulltext