Quantifying Uncertainty in Deep Learning of Radiologic Images

Shahriar Faghani, Mana Moassefi, Pouria Rouzrokh, Bardia Khosravi, Francis I. Baffour, Michael D. Ringler, Bradley J. Erickson

Research output: Contribution to journal › Review article › peer-review

Abstract

In recent years, deep learning (DL) has shown impressive performance in radiologic image analysis. However, for a DL model to be useful in a real-world setting, its confidence in a prediction must also be known. A DL model outputs an estimated probability for each prediction, but these estimated probabilities are not always reliable. Uncertainty represents the trustworthiness (validity) of estimated probabilities: the higher the uncertainty, the lower the validity. Uncertainty quantification (UQ) methods determine the uncertainty level of each prediction. Predictions made without UQ methods are generally not trustworthy. By implementing UQ in medical DL models, users can be alerted when a model does not have enough information to make a confident decision. A medical expert could then reevaluate the uncertain cases, which would ultimately build trust in the model. This review focuses on recent trends in the use of UQ methods in DL radiologic image analysis within a conceptual framework. Also discussed are potential applications, challenges, and future directions of UQ in DL radiologic image analysis.
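The workflow the abstract describes, flagging predictions whose uncertainty is too high for automated use and referring them to an expert, can be sketched with a toy example. The snippet below is a minimal illustration, not the article's method: it approximates multiple stochastic forward passes (as in Monte Carlo dropout or a deep ensemble, two common UQ techniques) by perturbing hypothetical class logits with noise, then scores each case by the entropy of the mean predictive distribution. The logits, noise scale, and referral threshold are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def stochastic_predictions(logits, n_samples=100, sigma=1.0):
    """Approximate repeated stochastic forward passes (as in Monte Carlo
    dropout or a deep ensemble) by adding Gaussian noise to the logits."""
    return np.stack([softmax(logits + rng.normal(0.0, sigma, logits.shape))
                     for _ in range(n_samples)])

def predictive_entropy(probs):
    """Entropy of the mean predictive distribution: low when the sampled
    predictions agree on one class, high when they disagree."""
    mean_p = probs.mean(axis=0)
    return float(-np.sum(mean_p * np.log(mean_p + 1e-12)))

# Hypothetical 3-class logits: one clear-cut case, one ambiguous case.
h_clear = predictive_entropy(stochastic_predictions(np.array([8.0, 0.5, 0.2])))
h_ambig = predictive_entropy(stochastic_predictions(np.array([1.1, 1.0, 0.9])))

# Refer high-uncertainty cases to a human expert for reevaluation.
THRESHOLD = 0.5  # illustrative cutoff, not taken from the article
print(f"clear-cut: entropy={h_clear:.2f}, refer={h_clear > THRESHOLD}")
print(f"ambiguous: entropy={h_ambig:.2f}, refer={h_ambig > THRESHOLD}")
```

The ambiguous case yields near-uniform probabilities across samples and hence high entropy, so it is routed to the expert, while the clear-cut case passes through, which is the alerting behavior the abstract attributes to UQ-equipped models.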

Original language: English (US)
Article number: e222217
Journal: Radiology
Volume: 308
Issue number: 2
State: Published - Aug 2023

ASJC Scopus subject areas

  • Radiology, Nuclear Medicine and Imaging
