Enhancing furcation involvement classification on panoramic radiographs with vision transformers

Abstract:

Background: The severity of furcation involvement (FI) directly affects tooth prognosis and influences treatment approaches. However, assessing, diagnosing, and treating molars with FI is complicated by anatomical and morphological variations. Cone-beam computed tomography (CBCT) enhances diagnostic accuracy for detecting FI and measuring furcation defects, but the high cost and radiation dose associated with CBCT equipment limit its widespread use. The aim of this study was to evaluate the performance of the Vision Transformer (ViT) against several commonly used traditional deep learning (DL) models for classifying molars with or without FI on panoramic radiographs.

Methods: A total of 1,568 tooth images obtained from 506 panoramic radiographs were used to construct the database and evaluate the models. The study developed and assessed a ViT model for classifying FI from panoramic radiographs and compared its performance with traditional models, including the Multi-Layer Perceptron (MLP), the Visual Geometry Group network (VGGNet), and GoogLeNet.

Results: Among the evaluated models, the ViT achieved the highest precision (0.98), recall (0.92), and F1 score (0.95), along with the lowest cross-entropy loss (0.27) and the highest accuracy (92%). It also recorded the highest area under the curve (AUC, 98%), outperforming the other models with statistically significant differences (p < 0.05) and confirming its superior classification capability. Gradient-weighted class activation mapping (Grad-CAM) analysis of the ViT model revealed the key image regions on which the model focused during prediction.

Conclusion: DL algorithms can automatically classify FI from readily accessible panoramic images. These findings show that the ViT outperforms the tested traditional models, highlighting the potential of transformer-based approaches to advance image classification. This approach is also expected to reduce both the radiation dose and the financial burden on patients while improving diagnostic precision.
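The record does not reproduce the paper's implementation details, but as a rough illustration of the approach described in the abstract, a minimal sketch of fine-tuning an ImageNet-pretrained ViT for binary FI classification on cropped tooth images might look like the following. The torchvision ViT-B/16 backbone, the folder layout, and all hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Hypothetical sketch; not the authors' code. Assumes cropped tooth images
# arranged as data/train/{fi,no_fi}/... for torchvision's ImageFolder.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.models import vit_b_16, ViT_B_16_Weights

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

weights = ViT_B_16_Weights.IMAGENET1K_V1
preprocess = weights.transforms()          # resize to 224x224 and normalize

train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Load an ImageNet-pretrained ViT-B/16 and swap in a 2-class head
# (FI vs. no FI); the pretrained encoder weights are fine-tuned as well.
model = vit_b_16(weights=weights)
model.heads.head = nn.Linear(model.heads.head.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

model.train()
for epoch in range(10):                    # epoch count is an assumption
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

The only architectural change in this sketch is the replacement of the classification head; the patch embedding and transformer encoder reuse pretrained weights and are updated during fine-tuning.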

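Continuing the hypothetical sketch above, the metrics named in the Results section (accuracy, precision, recall, F1 score, AUC, and cross-entropy loss) could be computed on a held-out split roughly as follows. The data/test folder, the argmax decision rule, and the use of scikit-learn are assumptions, not the paper's evaluation protocol.

```python
# Hypothetical evaluation sketch, continuing the training sketch above
# (reuses datasets, DataLoader, preprocess, model, and device from it).
import torch
import torch.nn.functional as F
from sklearn.metrics import (accuracy_score, f1_score, log_loss,
                             precision_score, recall_score, roc_auc_score)

test_set = datasets.ImageFolder("data/test", transform=preprocess)
test_loader = DataLoader(test_set, batch_size=32, shuffle=False)

model.eval()
logits, labels = [], []
with torch.no_grad():
    for images, targets in test_loader:
        logits.append(model(images.to(device)).cpu())
        labels.append(targets)

probs = F.softmax(torch.cat(logits), dim=1).numpy()   # class probabilities
y_true = torch.cat(labels).numpy()
y_pred = probs.argmax(axis=1)                         # argmax decision rule

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1 score :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, probs[:, 1]))
print("CE loss  :", log_loss(y_true, probs))
```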
Bibliographic Details
Main Authors: Xuan Zhang, Enting Guo, Xu Liu, Hong Zhao, Jie Yang, Wen Li, Wenlei Wu, Weibin Sun
Format: Article
Language: English
Published: BMC, 2025-01-01
Series: BMC Oral Health
Subjects: Vision transformer; Furcation involvement; Deep learning; Panoramic radiograph
Online Access: https://doi.org/10.1186/s12903-025-05431-6
Collection: DOAJ
Record ID: doaj-art-6cb30051e9b445a7b3483bbaff23b812
Institution: Kabale University
ISSN: 1472-6831
Author Affiliations:
Xuan Zhang: Department of Periodontics, Affiliated Hospital of Medical School, Nanjing Stomatological Hospital, Research Institute of Stomatology, Nanjing University
Enting Guo: Division of Computer Science, University of Aizu
Xu Liu: Department of Periodontics, Affiliated Hospital of Medical School, Nanjing Stomatological Hospital, Research Institute of Stomatology, Nanjing University
Hong Zhao: The School of Computer Science and Technology, North University of China
Jie Yang: Department of Periodontics, Affiliated Hospital of Medical School, Nanjing Stomatological Hospital, Research Institute of Stomatology, Nanjing University
Wen Li: Department of Endodontics, Affiliated Hospital of Medical School, Nanjing Stomatological Hospital, Research Institute of Stomatology, Nanjing University
Wenlei Wu: Department of Periodontics, Affiliated Hospital of Medical School, Nanjing Stomatological Hospital, Research Institute of Stomatology, Nanjing University
Weibin Sun: Department of Periodontics, Affiliated Hospital of Medical School, Nanjing Stomatological Hospital, Research Institute of Stomatology, Nanjing University