A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification

Bibliographic Details
Main Authors: Mabirizi, Vicent, Wasswa, William, Kawuma, Simon
Format: Article
Language:English
Published: wiley 2025
Subjects:
Online Access:http://hdl.handle.net/20.500.12493/2925
_version_ 1841524063496830976
author Mabirizi, Vicent
Wasswa, William
Kawuma, Simon
author_facet Mabirizi, Vicent
Wasswa, William
Kawuma, Simon
author_sort Mabirizi, Vicent
collection KAB-DR
description In this study, we developed a convolutional neural network approach for directly classifying Digital Imaging and Communications in Medicine (DICOM) files in medical imaging applications. Existing models require converting this format into other formats such as Portable Network Graphics (PNG), a conversion that leads to metadata loss and classification bias. The developed model processes raw DICOM files, thereby preserving both pixel data and embedded metadata. The model was evaluated on chest X-ray images for tuberculosis detection and magnetic resonance imaging (MRI) scans for brain tumour classification from the National Institute of Allergy and Infectious Diseases. The X-ray modality achieved a precision of 92.9%, recall of 88.4%, F1-score of 90.6% and accuracy of 90.9%, while the MRI modality obtained a precision of 80.0%, recall of 79.4%, F1-score of 79.7% and accuracy of 85.5%. These results demonstrate the model's effectiveness across multiple imaging modalities. A key advantage of this approach is the preservation of diagnostic metadata, which enhances accuracy and reduces classification bias. The study highlights the model's potential to improve medical imaging workflows and support real-time clinical decision making. Despite the promising results, the study acknowledges limitations in dataset diversity and computational efficiency; future work will focus on addressing these challenges and further optimising the model for deployment in resource-limited environments.
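As a rough illustration of the pixel-and-metadata fusion idea described in the abstract (not the authors' published architecture), the sketch below reads a raw DICOM file with pydicom and passes the image through a small CNN branch while a few header fields go through a dense branch; the two are concatenated before classification. The library choices (pydicom, PyTorch), the selected header fields (PatientAge, ExposureTime, KVP), the layer sizes and the file name example.dcm are illustrative assumptions.

# Minimal sketch of pixel/metadata fusion from a raw DICOM file.
# Assumptions: pydicom, numpy and torch are installed; the header fields,
# layer sizes and "example.dcm" are illustrative, not the paper's design.
import numpy as np
import pydicom
import torch
import torch.nn as nn


def load_dicom(path):
    """Read a raw DICOM file and return (image tensor, metadata tensor)."""
    ds = pydicom.dcmread(path)
    img = ds.pixel_array.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-6)  # scale to [0, 1]
    img = torch.from_numpy(img).unsqueeze(0)                  # shape 1 x H x W

    # Pull a few header fields, falling back to 0 when a tag is absent.
    age = str(ds.get("PatientAge", "000Y"))                   # e.g. "045Y"
    age = float("".join(c for c in age if c.isdigit()) or 0)
    meta = torch.tensor([
        age,
        float(ds.get("ExposureTime", 0) or 0),
        float(ds.get("KVP", 0) or 0),
    ], dtype=torch.float32)
    return img, meta


class MultimodalCNN(nn.Module):
    """CNN branch for pixel data, MLP branch for metadata, fused before the head."""

    def __init__(self, n_meta=3, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),             # -> 32 features
        )
        self.meta = nn.Sequential(nn.Linear(n_meta, 16), nn.ReLU())
        self.head = nn.Sequential(nn.Linear(32 + 16, 64), nn.ReLU(),
                                  nn.Linear(64, n_classes))

    def forward(self, image, metadata):
        fused = torch.cat([self.cnn(image), self.meta(metadata)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = MultimodalCNN()
    img, meta = load_dicom("example.dcm")                      # hypothetical file
    logits = model(img.unsqueeze(0), meta.unsqueeze(0))        # add batch dimension
    print(logits.softmax(dim=1))

In practice the metadata branch would be fitted to whichever DICOM tags the modality reliably populates, and the image branch to the input resolution used during training.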
format Article
id oai:idr.kab.ac.ug:20.500.12493-2925
institution KAB-DR
language English
publishDate 2025
publisher wiley
record_format dspace
spelling oai:idr.kab.ac.ug:20.500.12493-2925 2025-07-19T00:00:28Z A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification Mabirizi, Vicent Wasswa, William Kawuma, Simon Multimodal convolutional neural network (CNN) DICOM file classification Raw DICOM processing Metadata preservation Image pixel data and metadata fusion In this study, we developed a convolutional neural network approach for directly classifying Digital Imaging and Communications in Medicine (DICOM) files in medical imaging applications. Existing models require converting this format into other formats such as Portable Network Graphics (PNG), a conversion that leads to metadata loss and classification bias. The developed model processes raw DICOM files, thereby preserving both pixel data and embedded metadata. The model was evaluated on chest X-ray images for tuberculosis detection and magnetic resonance imaging (MRI) scans for brain tumour classification from the National Institute of Allergy and Infectious Diseases. The X-ray modality achieved a precision of 92.9%, recall of 88.4%, F1-score of 90.6% and accuracy of 90.9%, while the MRI modality obtained a precision of 80.0%, recall of 79.4%, F1-score of 79.7% and accuracy of 85.5%. These results demonstrate the model's effectiveness across multiple imaging modalities. A key advantage of this approach is the preservation of diagnostic metadata, which enhances accuracy and reduces classification bias. The study highlights the model's potential to improve medical imaging workflows and support real-time clinical decision making. Despite the promising results, the study acknowledges limitations in dataset diversity and computational efficiency; future work will focus on addressing these challenges and further optimising the model for deployment in resource-limited environments. 2025-07-18T08:34:14Z 2025-07-18T08:34:14Z 2025 Article Vicent, M., Willian, W., & Simon, K. (2025). A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification. The Journal of Engineering, 2025(1), e70107. https://doi.org/10.1049/tje2.70107 http://hdl.handle.net/20.500.12493/2925 en Attribution 3.0 United States http://creativecommons.org/licenses/by/3.0/us/ application/pdf wiley
spellingShingle Multimodal convolutional neural network (CNN)
DICOM file classification
Raw DICOM processing
Metadata preservation
Image pixel data and metadata fusion
Mabirizi, Vicent
Wasswa, William
Kawuma, Simon
A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification
title A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification
title_full A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification
title_fullStr A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification
title_full_unstemmed A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification
title_short A Multimodal Convolutional Neural Network Based Approach for DICOM Files Classification
title_sort multimodal convolutional neural network based approach for dicom files classification
topic Multimodal convolutional neural network (CNN)
DICOM file classification
Raw DICOM processing
Metadata preservation
Image pixel data and metadata fusion
url http://hdl.handle.net/20.500.12493/2925
work_keys_str_mv AT mabirizivicent amultimodalconvolutionalneuralnetworkbasedapproachfordicomfilesclassification20251e70107
AT wasswawilliam amultimodalconvolutionalneuralnetworkbasedapproachfordicomfilesclassification20251e70107
AT kawumasimon amultimodalconvolutionalneuralnetworkbasedapproachfordicomfilesclassification20251e70107
AT mabirizivicent multimodalconvolutionalneuralnetworkbasedapproachfordicomfilesclassification20251e70107
AT wasswawilliam multimodalconvolutionalneuralnetworkbasedapproachfordicomfilesclassification20251e70107
AT kawumasimon multimodalconvolutionalneuralnetworkbasedapproachfordicomfilesclassification20251e70107