Soft-Label Supervised Meta-Model with Adversarial Samples for Uncertainty Quantification

Bibliographic Details
Main Authors: Kyle Lucke, Aleksandar Vakanski, Min Xian
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Computers
Subjects: uncertainty quantification; misclassification detection; meta-model; adversarial samples
Online Access: https://www.mdpi.com/2073-431X/14/1/12
Collection: DOAJ
Description: Despite the recent success of deep-learning models, traditional models are often overconfident and poorly calibrated, which poses a serious problem when they are applied to high-stakes applications. To address this issue, uncertainty quantification (UQ) models have been developed to enable the detection of misclassifications. Meta-model-based UQ methods are promising because they require no re-training of the predictive model and have low resource requirements. However, several issues remain in the training process: (1) most current meta-models are trained using hard labels, which do not allow quantifying the uncertainty associated with a given data sample; and (2) because the base model typically has high test accuracy, the samples used to train the meta-model consist primarily of correctly classified samples, which leads the meta-model to learn a poor approximation of the true decision boundary. To address these problems, we propose a novel soft-label formulation that better differentiates between correct and incorrect classifications, thereby allowing the meta-model to distinguish correct from incorrect classifications made with high uncertainty (i.e., low confidence). In addition, a novel training framework using adversarial samples is proposed to explore the decision boundary of the base model and mitigate issues related to training datasets with label imbalance. To validate the effectiveness of our approach, we use two predictive models trained on SVHN and CIFAR10 and evaluate performance according to sensitivity, specificity, an F1-score-style metric, average precision, and the Area Under the Receiver Operating Characteristic curve. We find that the soft-label approach can significantly increase the model's sensitivity and specificity, while training with adversarial samples can noticeably improve the balance between sensitivity and specificity. We also compare our method against four state-of-the-art meta-model-based UQ methods and achieve significantly better performance than most of them.
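
As a rough illustration of the evaluation described in the abstract, the sketch below scores a hypothetical misclassification detector with the metrics the authors name (sensitivity, specificity, average precision, AUROC). The synthetic data, the placeholder meta-model scores, and the 0.5 decision threshold are illustrative assumptions only; they are not the paper's models, datasets, or settings.

# Illustrative only: scoring a misclassification detector with the metrics
# named in the abstract. The data, scores, and threshold are placeholders,
# not the paper's actual models or experimental setup.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, confusion_matrix

rng = np.random.default_rng(0)

# Base-model predictions vs. ground truth (synthetic placeholder data).
y_true = rng.integers(0, 10, size=1000)            # true class labels
y_pred = y_true.copy()
flip = rng.random(1000) < 0.1                      # roughly 10% misclassified
y_pred[flip] = rng.integers(0, 10, size=flip.sum())

# Binary target for the meta-model: 1 = misclassified, 0 = correctly classified.
target = (y_pred != y_true).astype(int)

# Placeholder meta-model output: higher score = more likely misclassified.
meta_score = np.clip(target * 0.7 + rng.normal(0.2, 0.2, size=1000), 0.0, 1.0)

# Threshold-free metrics.
auroc = roc_auc_score(target, meta_score)
ap = average_precision_score(target, meta_score)

# Thresholded metrics (0.5 is an arbitrary illustrative cut-off).
pred_flag = (meta_score >= 0.5).astype(int)
tn, fp, fn, tp = confusion_matrix(target, pred_flag).ravel()
sensitivity = tp / (tp + fn)   # recall on misclassified samples
specificity = tn / (tn + fp)   # recall on correctly classified samples

print(f"AUROC={auroc:.3f}  AP={ap:.3f}  sens={sensitivity:.3f}  spec={specificity:.3f}")
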
Record ID: doaj-art-6253d826370a4cbe9db96a5b9ff969a5
Institution: Kabale University
ISSN: 2073-431X
DOI: 10.3390/computers14010012
Published in: Computers, Vol. 14, Iss. 1, Article 12 (2025-01-01)
Author Affiliations: Kyle Lucke, Aleksandar Vakanski, and Min Xian are all with the Department of Computer Science, University of Idaho, Idaho Falls, ID 83402, USA.