Uncertainty-aware deep learning in healthcare: A scoping review.

Mistrust is a major barrier to implementing deep learning in healthcare settings. Entrustment could be earned by conveying model certainty, or the probability that a given model output is accurate, but the use of uncertainty estimation for deep learning entrustment is largely unexplored, and there is no consensus regarding optimal methods for quantifying uncertainty.

Full description

Bibliographic Details
Main Authors: Tyler J Loftus, Benjamin Shickel, Matthew M Ruppert, Jeremy A Balch, Tezcan Ozrazgat-Baslanti, Patrick J Tighe, Philip A Efron, William R Hogan, Parisa Rashidi, Gilbert R Upchurch, Azra Bihorac
Format: Article
Language:English
Published: Public Library of Science (PLoS) 2022-01-01
Series:PLOS Digital Health
Online Access:https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000085&type=printable
_version_ 1832539967578963968
author Tyler J Loftus
Benjamin Shickel
Matthew M Ruppert
Jeremy A Balch
Tezcan Ozrazgat-Baslanti
Patrick J Tighe
Philip A Efron
William R Hogan
Parisa Rashidi
Gilbert R Upchurch
Azra Bihorac
author_facet Tyler J Loftus
Benjamin Shickel
Matthew M Ruppert
Jeremy A Balch
Tezcan Ozrazgat-Baslanti
Patrick J Tighe
Philip A Efron
William R Hogan
Parisa Rashidi
Gilbert R Upchurch
Azra Bihorac
author_sort Tyler J Loftus
collection DOAJ
description Mistrust is a major barrier to implementing deep learning in healthcare settings. Entrustment could be earned by conveying model certainty, or the probability that a given model output is accurate, but the use of uncertainty estimation for deep learning entrustment is largely unexplored, and there is no consensus regarding optimal methods for quantifying uncertainty. Our purpose is to critically evaluate methods for quantifying uncertainty in deep learning for healthcare applications and propose a conceptual framework for specifying certainty of deep learning predictions. We searched the Embase, MEDLINE, and PubMed databases for articles relevant to the study objectives, complying with PRISMA guidelines, rated study quality using validated tools, and extracted data according to modified CHARMS criteria. Among 30 included studies, 24 described medical imaging applications. All imaging model architectures used convolutional neural networks or a variation thereof. The predominant method for quantifying uncertainty was Monte Carlo dropout, which produces predictions from multiple networks in which different neurons have been dropped out and measures variance across the distribution of resulting predictions. Conformal prediction offered similarly strong performance in estimating uncertainty, along with ease of interpretation and applicability not only to deep learning but also to other machine learning approaches. Among the six articles describing non-imaging applications, model architectures and uncertainty estimation methods were heterogeneous, but predictive performance was generally strong, and uncertainty estimation was effective in comparing modeling methods. Overall, the use of model learning curves to quantify epistemic uncertainty (attributable to model parameters) was sparse. Heterogeneity in reporting methods precluded meta-analysis. 
Uncertainty estimation methods have the potential to identify rare but important misclassifications made by deep learning models and compare modeling methods, which could build patient and clinician trust in deep learning applications in healthcare. Efficient maturation of this field will require standardized guidelines for reporting performance and uncertainty metrics.
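The predominant uncertainty estimation method identified by the review, Monte Carlo dropout, can be illustrated with a minimal sketch: dropout is left active at inference time, the same input is passed through the network many times, and the spread of the resulting predictions is taken as an uncertainty estimate. The toy two-layer network, its random (untrained) weights, and the names `forward` and `mc_dropout_predict` below are all illustrative assumptions, not taken from any of the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer MLP with fixed random weights standing in for a trained model.
W1 = rng.normal(size=(4, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

def forward(x, drop_rate=0.5):
    """One stochastic forward pass with dropout kept active at inference."""
    h = np.maximum(x @ W1 + b1, 0.0)           # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate     # Bernoulli dropout mask
    h = h * mask / (1.0 - drop_rate)           # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, n_samples=100):
    """Repeat the stochastic pass; mean is the prediction, std the uncertainty."""
    preds = np.stack([forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_dropout_predict(x)
```

Each forward pass samples a different sub-network, so the collection of outputs approximates a predictive distribution; a large standard deviation flags inputs on which the model is uncertain and which may warrant clinician review.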
format Article
id doaj-art-5aebf1c7fcd842e8a489a2b5d8b4711f
institution Kabale University
issn 2767-3170
language English
publishDate 2022-01-01
publisher Public Library of Science (PLoS)
record_format Article
series PLOS Digital Health
spelling doaj-art-5aebf1c7fcd842e8a489a2b5d8b4711f 2025-02-05T05:33:38Z eng Public Library of Science (PLoS) PLOS Digital Health 2767-3170 2022-01-01 1(8) e0000085 10.1371/journal.pdig.0000085 Uncertainty-aware deep learning in healthcare: A scoping review. https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000085&type=printable
spellingShingle Tyler J Loftus
Benjamin Shickel
Matthew M Ruppert
Jeremy A Balch
Tezcan Ozrazgat-Baslanti
Patrick J Tighe
Philip A Efron
William R Hogan
Parisa Rashidi
Gilbert R Upchurch
Azra Bihorac
Uncertainty-aware deep learning in healthcare: A scoping review.
PLOS Digital Health
title Uncertainty-aware deep learning in healthcare: A scoping review.
title_full Uncertainty-aware deep learning in healthcare: A scoping review.
title_fullStr Uncertainty-aware deep learning in healthcare: A scoping review.
title_full_unstemmed Uncertainty-aware deep learning in healthcare: A scoping review.
title_short Uncertainty-aware deep learning in healthcare: A scoping review.
title_sort uncertainty aware deep learning in healthcare a scoping review
url https://journals.plos.org/digitalhealth/article/file?id=10.1371/journal.pdig.0000085&type=printable
work_keys_str_mv AT tylerjloftus uncertaintyawaredeeplearninginhealthcareascopingreview
AT benjaminshickel uncertaintyawaredeeplearninginhealthcareascopingreview
AT matthewmruppert uncertaintyawaredeeplearninginhealthcareascopingreview
AT jeremyabalch uncertaintyawaredeeplearninginhealthcareascopingreview
AT tezcanozrazgatbaslanti uncertaintyawaredeeplearninginhealthcareascopingreview
AT patrickjtighe uncertaintyawaredeeplearninginhealthcareascopingreview
AT philipaefron uncertaintyawaredeeplearninginhealthcareascopingreview
AT williamrhogan uncertaintyawaredeeplearninginhealthcareascopingreview
AT parisarashidi uncertaintyawaredeeplearninginhealthcareascopingreview
AT gilbertrupchurch uncertaintyawaredeeplearninginhealthcareascopingreview
AT azrabihorac uncertaintyawaredeeplearninginhealthcareascopingreview