When Are Two Tests Better Than One? Increasing the Accuracy of Binary Classification With Repetitive Testing

Bibliographic Details
Main Author: Jamie R. Wieland
Format: Article
Language: English
Published: Springer, 2025-03-01
Series: Journal of Statistical Theory and Applications (JSTA)
Online Access: https://doi.org/10.1007/s44199-025-00105-2
Description
Abstract: Repetitive testing models for binary classification (accept or reject) have been extensively investigated in semiconductor fabrication, where devices may be tested numerous times until they are accepted or ultimately scrapped. There are situations, however, where the number of tests is limited by sample constraints or prohibitively high testing costs. Extant research in this domain assumes conditional independence between tests. In contrast, we propose a Markov model that allows for dependency between consecutive tests and apply it to situations with limited testing. Analysis of the proposed model reveals that assuming conditional independence when tests are in fact positively correlated can inflate the estimated probability of correct classification (PCC). This inflation raises concerns about the use of repetitive testing procedures in situations where they offer minimal or no practical benefit, and it is particularly detrimental when repetitive testing is employed specifically to increase classification accuracy. Our objective is to assess the impact of conducting two repetitive tests on the PCC relative to a single test. Conditions under which two tests increase the PCC are identified and discussed. The findings provide insight into the nuances of limited-testing situations, emphasizing that accuracy is highly contingent on how “ties” (conflicting test outcomes) are classified.
ISSN: 2214-1766
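
To make the correlation and tie-handling points concrete, below is a minimal numerical sketch in Python. It assumes a simple illustrative dependence model, not necessarily the paper's exact Markov formulation: given the item's true state, the second test repeats the first outcome with probability rho and is otherwise an independent draw (rho = 0 recovers conditional independence). The prior, sensitivity, specificity, and rho values are hypothetical.

# Illustrative sketch (not the paper's exact model): PCC for one vs. two
# repeated binary tests, with a tunable dependence parameter rho.

def pcc_one_test(p_good, sens, spec):
    """PCC for a single test: classify 'accept' iff the test accepts.
    sens = P(accept | good item), spec = P(reject | bad item)."""
    return p_good * sens + (1 - p_good) * spec

def joint_two_tests(p_accept, rho):
    """Joint P(T1, T2) conditional on the item's true state.
    T2 repeats T1 with probability rho; otherwise it is an independent
    draw with the same accept probability (rho = 0 => cond. independence).
    Outcomes: 1 = accept, 0 = reject."""
    pa, pr = p_accept, 1.0 - p_accept
    return {
        (1, 1): pa * (rho + (1 - rho) * pa),
        (1, 0): pa * (1 - rho) * pr,
        (0, 1): pr * (1 - rho) * pa,
        (0, 0): pr * (rho + (1 - rho) * pr),
    }

def pcc_two_tests(p_good, sens, spec, rho, tie="reject"):
    """PCC for two tests: unanimous outcomes are classified as that
    outcome; ties (one accept, one reject) follow the `tie` rule."""
    accept = {(1, 1)}
    if tie == "accept":
        accept |= {(1, 0), (0, 1)}
    given_good = joint_two_tests(sens, rho)        # P(accept | good) = sens
    given_bad = joint_two_tests(1.0 - spec, rho)   # P(accept | bad) = 1 - spec
    p_acc_good = sum(p for o, p in given_good.items() if o in accept)
    p_rej_bad = sum(p for o, p in given_bad.items() if o not in accept)
    return p_good * p_acc_good + (1 - p_good) * p_rej_bad

if __name__ == "__main__":
    p, se, sp = 0.5, 0.95, 0.70  # hypothetical prior, sensitivity, specificity
    print(f"one test:                 {pcc_one_test(p, se, sp):.4f}")
    print(f"two, indep,   tie=reject: {pcc_two_tests(p, se, sp, 0.0):.4f}")
    print(f"two, rho=0.6, tie=reject: {pcc_two_tests(p, se, sp, 0.6):.4f}")
    print(f"two, rho=0.6, tie=accept: {pcc_two_tests(p, se, sp, 0.6, tie='accept'):.4f}")

With these hypothetical numbers, two tests with a reject-on-tie rule outperform a single test under conditional independence (PCC of about 0.906 vs. 0.825); positive correlation (rho = 0.6) shrinks the gain to about 0.858, and switching to an accept-on-tie rule drops the PCC to about 0.793, below the single test. This mirrors the abstract's point that assumed independence can inflate the PCC and that how ties are classified drives accuracy.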