Automatic Joint Teeth Segmentation in Panoramic Dental Images using Mask Recurrent Convolutional Neural Networks with Residual Feature Extraction


Bibliographic Details
Main Authors: Raghavendra H. Bhalerao, Abhijeet Ashok Salunke, Shristi Sharan, Kamlesh Kumar, Priyank Rathod, Prince Kumar, Manish Chaturvedi, Nandlal Bharwani, Krupa Shah, Dhruv Patel, Keval Patel, Vikas Warikoo, Manisha Abhijeet Salunke, Shashank Pandya
Format: Article
Language:English
Published: SJORANM GmbH (Ltd.) 2024-09-01
Series:Swiss Journal of Radiology and Nuclear Medicine
Subjects:
Online Access:https://sjoranm.com/sjoranm/article/view/43
Description
Summary: Introduction: Panoramic dental images give an in-depth view of the tooth structure of both the lower and upper jaws and of the surrounding structures throughout the oral cavity. Panoramic dental images are significant for dental diagnostics since they aid in the detection of an array of dental disorders, including oral cancer. We propose a novel approach to automatic joint teeth segmentation using the Mask Recurrent Convolutional Neural Network (MRCNN) model for dental image segmentation. Material and Methods: In this study, a sequence of residual blocks is used to construct a 62-layer feature extraction network in lieu of ResNet50/101 in the MRCNN. To evaluate the efficacy of our method, the UFBA-UESC and Tufts dental image datasets (2500 panoramic dental x-rays) were utilised. 252 x-rays formed the test set; the remaining x-rays were split into training (1800 images) and validation (448 images) sets in a ratio of 8:2 for the modified MRCNN model. Results: The modified MRCNN achieved final training and validation accuracies of 99.67% and 98.94%, respectively. Over the whole dataset, it achieved a Dice coefficient of 97.8%, an Intersection over Union of 98.67%, and a pixel accuracy of 96.53%. We also compared the performance of the proposed model with other well-established networks such as FPN, UNet, PSPNet, and DeepLabV3. The modified MRCNN gives better results when segmenting two adjacent teeth. Conclusion: Our proposed method will serve as a valuable tool for the automatic segmentation of individual teeth in medical management, offering higher accuracy and precision. Segmented images can be used to evaluate periodic changes, providing valuable data for assessing the progression of oral cancer and the efficacy of management. Future research should focus on developing less complex, lightweight, and faster vision models while maintaining high accuracy.
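The abstract reports three standard segmentation metrics: Dice coefficient, Intersection over Union (IoU), and pixel accuracy. A minimal sketch of how these are typically computed for a single binary segmentation mask is shown below; the flat 0/1 list representation and the variable names are illustrative assumptions, not taken from the paper.

```python
# Illustrative sketch (not the authors' code): the three segmentation
# metrics from the abstract, computed on binary masks given as flat
# 0/1 pixel lists.

def dice_coefficient(pred, truth):
    """Dice = 2*|A intersect B| / (|A| + |B|)."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2 * intersection / total if total else 1.0

def iou(pred, truth):
    """IoU (Jaccard index) = |A intersect B| / |A union B|."""
    intersection = sum(p & t for p, t in zip(pred, truth))
    union = sum(p | t for p, t in zip(pred, truth))
    return intersection / union if union else 1.0

def pixel_accuracy(pred, truth):
    """Fraction of pixels whose predicted label matches the ground truth."""
    correct = sum(p == t for p, t in zip(pred, truth))
    return correct / len(truth)

# Toy 8-pixel masks (hypothetical example data):
pred  = [1, 1, 1, 0, 0, 0, 1, 0]
truth = [1, 1, 0, 0, 0, 1, 1, 0]

print(dice_coefficient(pred, truth))  # → 0.75
print(iou(pred, truth))               # → 0.6
print(pixel_accuracy(pred, truth))    # → 0.75
```

Note that Dice and IoU are monotonically related (Dice = 2·IoU / (1 + IoU)), which is why papers often report both alongside the cruder pixel accuracy.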
ISSN:2813-7221