Development and evaluation of a deep learning framework for pelvic and sacral tumor segmentation from multi-sequence MRI: a retrospective study



Bibliographic Details
Main Authors: Ping Yin, Weidao Chen, Qianrui Fan, Ruize Yu, Xia Liu, Tao Liu, Dawei Wang, Nan Hong
Format: Article
Language: English
Published: BMC 2025-03-01
Series: Cancer Imaging
Online Access:https://doi.org/10.1186/s40644-025-00850-8
Description
Summary:
Background: Accurate segmentation of pelvic and sacral tumors (PSTs) in multi-sequence magnetic resonance imaging (MRI) is essential for effective treatment and surgical planning.
Purpose: To develop a deep learning (DL) framework for efficient segmentation of PSTs from multi-sequence MRI.
Materials and methods: This study included 616 patients with pathologically confirmed PSTs between April 2011 and May 2022. We proposed a practical DL framework that integrates a 2.5D U-Net and MobileNetV2 for automatic PST segmentation, combined with a fast annotation strategy across multiple MRI sequences: T1-weighted (T1-w), T2-weighted (T2-w), diffusion-weighted imaging (DWI), and contrast-enhanced T1-weighted (CET1-w). Two distinct models, the All-sequence segmentation model and the T2-fusion segmentation model, were developed. During model development, all regions of interest (ROIs) in the training set were coarsely labeled, while ROIs in the test set were finely labeled. Dice score and intersection over union (IoU) were used to evaluate model performance.
Results: The 2.5D MobileNetV2 architecture demonstrated improved segmentation performance compared with 2D and 3D U-Net models, with a Dice score of 0.741 and an IoU of 0.615. The All-sequence model, trained on a fusion of four MRI sequences (T1-w, CET1-w, T2-w, and DWI), exhibited superior performance, with Dice scores of 0.659 for T1-w, 0.763 for CET1-w, 0.819 for T2-w, and 0.723 for DWI as inputs. In contrast, the T2-fusion segmentation model, which used T2-w and CET1-w sequences as inputs, achieved a Dice score of 0.833 and an IoU of 0.719.
Conclusions: We developed a practical DL framework for PST segmentation from multi-sequence MRI that reduces dependence on data annotation. These models offer solutions for various clinical scenarios and have significant potential for wide-ranging applications.
ISSN: 1470-7330
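
The abstract reports segmentation quality as Dice score and intersection over union (IoU). For readers unfamiliar with these overlap metrics, the short Python sketch below shows how they are commonly computed from binary masks (Dice = 2|A∩B| / (|A| + |B|), IoU = |A∩B| / |A∪B|). The function name and the random dummy masks are illustrative assumptions, not code or data from the study.

```python
import numpy as np

def dice_and_iou(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    """Compute Dice score and IoU for two binary segmentation masks.

    pred, target: arrays of the same shape with values in {0, 1} (or bool).
    eps guards against division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
    iou = (intersection + eps) / (union + eps)
    return float(dice), float(iou)

if __name__ == "__main__":
    # Illustrative use on random dummy masks, not data from the study.
    rng = np.random.default_rng(0)
    pred = rng.random((64, 64)) > 0.5
    target = rng.random((64, 64)) > 0.5
    d, i = dice_and_iou(pred, target)
    print(f"Dice={d:.3f}, IoU={i:.3f}")
```

For a single mask pair, Dice and IoU are related by Dice = 2·IoU / (1 + IoU), which is why the reported Dice values in the abstract are consistently higher than the corresponding IoU values.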