ViSwNeXtNet: Deep Patch-Wise Ensemble of Vision Transformers and ConvNeXt for Robust Binary Histopathology Classification
| Main Authors: | , |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | MDPI AG, 2025-06-01 |
| Series: | Diagnostics |
| Subjects: | |
| Online Access: | https://www.mdpi.com/2075-4418/15/12/1507 |
| Summary: | <b>Background:</b> Intestinal metaplasia (IM) is a precancerous gastric condition that requires accurate histopathological diagnosis to enable early intervention and cancer prevention. Traditional evaluation of H&E-stained tissue slides can be labor-intensive and prone to interobserver variability. Recent advances in deep learning, particularly transformer-based models, offer promising tools for improving diagnostic accuracy. <b>Methods:</b> We propose ViSwNeXtNet, a novel patch-wise ensemble framework that integrates three transformer-based architectures—ConvNeXt-Tiny, Swin-Tiny, and ViT-Base—for deep feature extraction. Features from each model (12,288 per model) were concatenated into a 36,864-dimensional vector and refined using iterative neighborhood component analysis (INCA) to select the 565 most discriminative features. A quadratic SVM classifier was trained on these selected features. The model was evaluated on two datasets: (1) a custom-collected dataset consisting of 516 intestinal metaplasia cases and 521 control cases, and (2) the public GasHisSDB dataset, which includes 20,160 normal and 13,124 abnormal H&E-stained image patches of size 160 × 160 pixels. <b>Results:</b> On the collected dataset, the proposed method achieved 94.41% accuracy, 94.63% sensitivity, and a 94.40% F1 score. On the GasHisSDB dataset, it reached 99.20% accuracy, 99.39% sensitivity, and a 99.16% F1 score, outperforming the individual backbone models and demonstrating strong generalizability across datasets. <b>Conclusions:</b> ViSwNeXtNet successfully combines local, regional, and global representations of tissue structure through an ensemble of transformer-based models. INCA-based feature selection significantly enhances classification performance while reducing dimensionality. These findings suggest the method's potential for integration into clinical pathology workflows. Future work will focus on multiclass classification, multicenter validation, and integration of explainable AI techniques. |
|---|---|
| ISSN: | 2075-4418 |
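The classification stage described in the summary (concatenated deep features → feature selection → quadratic SVM) can be sketched as follows. This is a minimal illustration, not the authors' code: a small synthetic feature matrix stands in for the 36,864-dimensional ensemble features, and a univariate ANOVA F-score ranking stands in for the INCA selection step (a hypothetical substitution, since INCA itself is not part of scikit-learn). The quadratic SVM is realized with a degree-2 polynomial kernel.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for the concatenated deep features; the paper describes
# 36,864-D vectors (3 backbones x 12,288 features each), shrunk here to 300-D.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 300))
y = rng.integers(0, 2, size=200)
X[y == 1, :20] += 1.0  # plant a class signal in the first 20 dimensions

# Feature selection followed by a quadratic SVM, mirroring the described
# pipeline. ANOVA F-scores replace INCA ranking (hypothetical substitution);
# kernel="poly" with degree=2 gives the quadratic SVM.
pipeline = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=50),
    SVC(kernel="poly", degree=2),
)
pipeline.fit(X, y)
print(f"training accuracy: {pipeline.score(X, y):.3f}")
```

In the paper the selection is iterative: candidate feature counts are swept and the subset minimizing validation error is kept (565 features in their experiments), whereas the sketch above fixes `k` for brevity.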