MRI-based deep learning with clinical and imaging features to differentiate medulloblastoma and ependymoma in children

Bibliographic Details
Main Authors: Yasen Yimit, Parhat Yasin, Yue Hao, Abudouresuli Tuersun, Chencui Huang, Xiaoguang Zou, Ya Qiu, Yunling Wang, Mayidili Nijiati
Format: Article
Language: English
Published: Frontiers Media S.A. 2025-04-01
Series: Frontiers in Molecular Biosciences
Subjects:
Online Access: https://www.frontiersin.org/articles/10.3389/fmolb.2025.1570860/full
Summary:
Background: Medulloblastoma (MB) and ependymoma (EM) in children share similarities in age group, tumor location, and clinical presentation, which makes them challenging to diagnose and distinguish clinically.
Purpose: The present study aims to explore the effectiveness of T2-weighted magnetic resonance imaging (MRI)-based deep learning (DL) combined with clinical and imaging features for differentiating MB from EM.
Methods: Axial T2-weighted MRI sequences from 201 patients across three study centers were used for model training and testing. Regions of interest were manually delineated by an experienced neuroradiologist under the supervision of a senior radiologist. We developed a DL classifier using a pretrained AlexNet architecture fine-tuned on our dataset. To mitigate class imbalance, we implemented data augmentation and employed K-fold cross-validation to enhance model generalizability. For patient-level classification, we used two voting strategies: a hard voting strategy, in which the majority prediction across individual image slices was selected, and a soft voting strategy, in which prediction scores were averaged across slices and thresholded at 0.5. Additionally, a multimodality fusion model was constructed by integrating the DL classifier with clinical and imaging features. Model performance was assessed using a 7:3 random split of the dataset into training and validation sets, with MB treated as the positive class and EM as the negative class. Sensitivity, specificity, positive predictive value, negative predictive value, F1 score, area under the receiver operating characteristic curve (AUC), and accuracy were calculated, and statistical comparisons were performed using the DeLong test.
Results: The DL model with the hard voting strategy achieved AUC values of 0.712 (95% confidence interval (CI): 0.625–0.797) on the training set and 0.689 (95% CI: 0.554–0.826) on the test set. In contrast, the multimodality fusion model demonstrated superior performance, with AUC values of 0.987 (95% CI: 0.974–0.996) on the training set and 0.889 (95% CI: 0.803–0.949) on the test set. The DeLong test indicated a statistically significant improvement in AUC for the fusion model compared with the DL model (p < 0.001), highlighting its enhanced discriminative ability.
Conclusion: T2-weighted MRI-based DL combined with multimodal clinical and imaging features can effectively differentiate MB from EM in children. The interpretable structure of the decision tree classifier used in the fusion model is expected to assist clinicians in daily practice.
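
The two patient-level voting strategies described in the Methods can be illustrated with a minimal Python/NumPy sketch. This is not the authors' code; it only assumes that each patient has a set of per-slice MB probabilities produced by the fine-tuned slice classifier, and the probability values shown are hypothetical.

    import numpy as np

    def hard_vote(slice_probs, threshold=0.5):
        # Per-slice binary calls (MB = 1, EM = 0), then a simple majority vote.
        slice_preds = (np.asarray(slice_probs) >= threshold).astype(int)
        return int(slice_preds.sum() * 2 >= slice_preds.size)

    def soft_vote(slice_probs, threshold=0.5):
        # Average the per-slice MB probabilities, then apply the 0.5 threshold.
        return int(np.mean(slice_probs) >= threshold)

    # Hypothetical example: MB probabilities for five axial T2 slices of one patient.
    probs = [0.62, 0.48, 0.71, 0.55, 0.40]
    print(hard_vote(probs), soft_vote(probs))  # patient-level MB/EM calls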
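
The Conclusion mentions a decision tree classifier as the interpretable component of the multimodality fusion model. The sketch below shows one plausible way such a fusion step could look: a decision tree trained on the patient-level DL probability plus clinical and imaging features, with a 7:3 random split as in the abstract. The feature names and synthetic data are purely illustrative assumptions, not taken from the study.

    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier, export_text
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)
    n = 201  # cohort size reported in the abstract
    X = np.column_stack([
        rng.uniform(0, 1, n),    # patient-level DL probability of MB
        rng.integers(1, 15, n),  # age in years (hypothetical feature)
        rng.integers(0, 2, n),   # e.g. fourth-ventricle location flag (hypothetical)
    ])
    y = (X[:, 0] + 0.1 * rng.standard_normal(n) > 0.5).astype(int)  # MB = 1, EM = 0

    # 7:3 random split for training and validation, as described in the Methods.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=42, stratify=y
    )
    clf = DecisionTreeClassifier(max_depth=3, random_state=42).fit(X_tr, y_tr)

    print("test AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
    # Print the tree structure, i.e. the kind of interpretable rules the
    # conclusion refers to as being useful to clinicians.
    print(export_text(clf, feature_names=["dl_prob", "age", "v4_location"]))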
ISSN: 2296-889X