Mixed reality infrastructure based on deep learning medical image segmentation and 3D visualization for bone tumors using DCU-Net
Saved in:
Main Authors: Kun Wang, Yong Han, Yuguang Ye, Yusi Chen, Daxin Zhu, Yifeng Huang, Ying Huang, Yijie Chen, Jianshe Shi, Bijiao Ding, Jianlong Huang
Format: Article
Language: English
Published: Elsevier, 2025-02-01
Series: Journal of Bone Oncology
Subjects: Image segmentation; DCU-Net model; 3D visualization; Bone tumor diagnosis; Mixed reality
Online Access: http://www.sciencedirect.com/science/article/pii/S2212137424001349
_version_ | 1832594153840574464 |
author | Kun Wang Yong Han Yuguang Ye Yusi Chen Daxin Zhu Yifeng Huang Ying Huang Yijie Chen Jianshe Shi Bijiao Ding Jianlong Huang |
author_sort | Kun Wang |
collection | DOAJ |
description | Objective: Segmenting bone tumors in 2D image data and reconstructing 3D models from them is of great value for assisting diagnosis and treatment. However, because tumors and surrounding tissues are poorly distinguishable in images, existing methods lack accuracy and stability. This study proposes a U-Net model based on double dimensionality reduction and a channel attention gating mechanism, the DCU-Net model, for oncological image segmentation. After achieving automatic segmentation and 3D reconstruction of osteosarcoma by optimizing feature extraction and improving target-space clustering, we built a mixed reality (MR) infrastructure and explored the prospects of combining deep learning-based medical image segmentation with mixed reality in the diagnosis and treatment of bone tumors. Methods: We conducted bone tumor segmentation experiments on a hospital dataset, used the optimized DCU-Net and 3D reconstruction technology to generate bone tumor models, and evaluated segmentation performance and 3D reconstruction quality with the Dice similarity coefficient (DSC), recall (R), precision (P), and 3D vertex distance error (VDE). Two surgeons then performed clinical examination experiments on patients using two different methods, viewing 2D images versus the mixed reality infrastructure, and a Likert scale (LS) was used to compare the effectiveness of the surgical plans produced by the two methods. Results: The DSC, R, and P values of the proposed model all exceed 90%, a significant advantage over methods such as U-Net and Attention-U-Net. Furthermore, the LS results showed that clinicians in the DCU-Net-based MR group had better spatial awareness during preoperative tumor planning.
Conclusion: The DCU-Net deep learning model improves the performance of tumor CT image segmentation, and the reconstructed fine-grained model better reflects the actual condition of individual tumors; the MR system built on this model enhances clinicians' understanding of tumor morphology and spatial relationships. An MR system based on deep learning and 3D visualization has great potential in the diagnosis and treatment of bone tumors and is expected to advance clinical practice and improve outcomes. |
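The abstract's headline numbers (DSC, R, and P all above 90%) are standard overlap metrics on binary segmentation masks. A minimal sketch of how these three metrics are computed follows; the function name and toy masks are illustrative, not the authors' evaluation code:

```python
import numpy as np

def dice_recall_precision(pred, truth):
    """Compute DSC, recall (R), and precision (P) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()     # true positives
    fp = np.logical_and(pred, ~truth).sum()    # false positives
    fn = np.logical_and(~pred, truth).sum()    # false negatives
    dsc = 2 * tp / (2 * tp + fp + fn) if (tp + fp + fn) else 1.0
    recall = tp / (tp + fn) if (tp + fn) else 1.0
    precision = tp / (tp + fp) if (tp + fp) else 1.0
    return dsc, recall, precision

# Toy 1D "masks": 3 overlapping voxels, 1 false positive, 1 false negative
pred = np.array([1, 1, 1, 1, 0, 0])
truth = np.array([0, 1, 1, 1, 1, 0])
dsc, r, p = dice_recall_precision(pred, truth)  # each 0.75 here
```

The fourth metric, VDE, compares vertex positions between the reconstructed and reference 3D meshes and would require the mesh data rather than voxel masks.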
format | Article |
id | doaj-art-8d09b74e287b442ea416fcabaef4c79d |
institution | Kabale University |
issn | 2212-1374 |
language | English |
publishDate | 2025-02-01 |
publisher | Elsevier |
record_format | Article |
series | Journal of Bone Oncology |
spelling | doaj-art-8d09b74e287b442ea416fcabaef4c79d 2025-01-20T04:17:25Z eng Elsevier Journal of Bone Oncology 2212-1374 2025-02-01 Vol. 50, 100654. Mixed reality infrastructure based on deep learning medical image segmentation and 3D visualization for bone tumors using DCU-Net.
Author affiliations:
Kun Wang: Institute of Design, Quanzhou Normal University, Quanzhou 362000, China
Yong Han: School of Design, Quanzhou University of Information Engineering, Quanzhou, Fujian 362000, China
Yuguang Ye, Yusi Chen, Daxin Zhu: School of Mathematics and Computer Science, Quanzhou Normal University; Fujian Provincial Key Laboratory of Data-Intensive Computing, Quanzhou Normal University; Key Laboratory of Intelligent Computing and Information Processing (Quanzhou Normal University), Fujian Province University, Quanzhou, 362001, China
Yifeng Huang, Ying Huang, Bijiao Ding: Department of Diagnostic Radiology, Huaqiao University Affiliated Strait Hospital, Quanzhou, Fujian 362000, China
Yijie Chen, Jianshe Shi: Department of General Surgery, Huaqiao University Affiliated Strait Hospital, Quanzhou, Fujian 362000, China
Jianlong Huang (corresponding author): School of Mathematics and Computer Science, Quanzhou Normal University; Fujian Provincial Key Laboratory of Data-Intensive Computing, Quanzhou Normal University; Key Laboratory of Intelligent Computing and Information Processing (Quanzhou Normal University), Fujian Province University, Quanzhou, 362001, China |
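The record describes DCU-Net as a U-Net variant with a "channel attention gating mechanism", but the paper's exact gate design is not given here. The sketch below is a generic squeeze-and-excitation-style channel gate in NumPy, offered only to illustrate the general idea of reweighting feature channels; the weights and reduction ratio are hypothetical, not the authors' architecture:

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation style channel gate on a (C, H, W) feature map.

    Global-average-pool each channel, pass the descriptor through a two-layer
    bottleneck (ReLU then sigmoid), and rescale the channels by the gates.
    """
    squeeze = x.mean(axis=(1, 2))                  # (C,) per-channel descriptor
    hidden = np.maximum(squeeze @ w1, 0.0)         # ReLU bottleneck, (C//r,)
    gates = 1.0 / (1.0 + np.exp(-(hidden @ w2)))   # sigmoid gates in (0, 1), (C,)
    return x * gates[:, None, None]                # reweight channels

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                  # r is the bottleneck reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C, C // r)) * 0.1
w2 = rng.standard_normal((C // r, C)) * 0.1
y = channel_attention(x, w1, w2)
```

Because each gate lies in (0, 1), the mechanism can only attenuate channels, letting the network emphasize tumor-relevant feature maps relative to background ones.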
title | Mixed reality infrastructure based on deep learning medical image segmentation and 3D visualization for bone tumors using DCU-Net |
topic | Image segmentation DCU-Net model 3D visualization Bone tumor diagnosis Mixed reality |
url | http://www.sciencedirect.com/science/article/pii/S2212137424001349 |