J Korean Soc Radiol. 2024 Sep;85(5):834-847. doi: 10.3348/jksr.2024.0118.

Explainable & Safe Artificial Intelligence in Radiology

Affiliations
  • 1Laboratory of Medical Imaging and Computation, Department of Radiology, Massachusetts General Hospital and Harvard Medical School, Boston, MA, USA
  • 2KU-KIST Graduate School of Converging Science and Technology at Korea University, Seoul, Korea
  • 3Kempner Institute, Harvard University, Boston, MA, USA

Abstract

Artificial intelligence (AI) is transforming radiology by improving diagnostic accuracy and efficiency, but the uncertainty of its predictions remains a critical challenge. This review examines the key sources of uncertainty (out-of-distribution inputs, aleatoric uncertainty, and model uncertainty) and highlights the importance of independent confidence metrics and explainable AI for safe clinical integration. Independent confidence metrics assess the reliability of individual AI predictions, while explainable AI provides transparency, enhancing collaboration between AI systems and radiologists. The development of zero-error tolerance models, designed to minimize critical errors, sets a new standard for safety. Addressing these challenges will enable AI to become a trusted partner in radiology, advancing standards of care and patient outcomes.
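The abstract distinguishes aleatoric (data-driven) from model (epistemic) uncertainty. One common way to separate the two, sketched below under the assumption that the classifier produces softmax outputs from several stochastic forward passes (e.g., Monte Carlo dropout or a deep ensemble; the function name and shapes are illustrative, not from this article):

```python
import numpy as np

def uncertainty_decomposition(probs):
    """probs: array of shape (T, C) - softmax outputs from T stochastic
    forward passes over C classes for a single input image."""
    eps = 1e-12  # numerical guard against log(0)
    mean_p = probs.mean(axis=0)  # ensemble-averaged prediction
    # Total predictive uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_p * np.log(mean_p + eps))
    # Aleatoric part: average entropy of the individual predictions
    # (irreducible noise in the data itself).
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    # Model (epistemic) part: the mutual-information gap. High values
    # indicate disagreement between passes, which can flag inputs the
    # model is unsure about, such as out-of-distribution scans.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

For example, two passes that agree (`[[0.9, 0.1], [0.9, 0.1]]`) give near-zero epistemic uncertainty, while two passes that contradict each other (`[[0.95, 0.05], [0.05, 0.95]]`) give a large epistemic term, which is one way an independent confidence metric can withhold low-confidence predictions from automated reporting.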

Keywords

Explainable AI; Safe AI; Zero-Error Tolerance Model
Copyright © 2024 by Korean Association of Medical Journal Editors. All rights reserved.