EFNet: estimation of left ventricular ejection fraction from cardiac ultrasound videos using deep learning


Bibliographic Details
Main Authors: Waqas Ali, Wesam Alsabban, Muhammad Shahbaz, Ali Al-Laith, Bassam Almogadwy
Format: Article
Language:English
Published: PeerJ Inc. 2025-01-01
Series:PeerJ Computer Science
Subjects:
Online Access:https://peerj.com/articles/cs-2506.pdf
collection DOAJ
description The ejection fraction (EF) is a vital metric for assessing cardiovascular function through cardiac ultrasound. Manual evaluation is time-consuming and exhibits high variability among observers. Deep-learning techniques offer precise and autonomous EF predictions, yet these methods often lack explainability. Accurate heart failure prediction using cardiac ultrasound is challenging due to operator dependency and inconsistent video quality, resulting in significant interobserver variability. To address this, we developed a method integrating convolutional neural networks (CNNs) and transformer models for direct EF estimation from ultrasound video scans. This article introduces a Residual Transformer Module (RTM) that extends a 3D ResNet-based network to analyze (2D + t) spatiotemporal cardiac ultrasound video scans. The proposed method, EFNet, uses cardiac ultrasound video for end-to-end EF prediction. Performance evaluation on the EchoNet-Dynamic dataset yielded a mean absolute error (MAE) of 3.7 and an R² score of 0.82. Experimental results demonstrate that EFNet outperforms state-of-the-art techniques, providing accurate EF predictions.
id doaj-art-f5cf8b74a1ff4accaf25bd7990e91552
issn 2376-5992
doi 10.7717/peerj-cs.2506
citation PeerJ Computer Science 11:e2506 (2025-01-01)
affiliations Waqas Ali: Computer Science Department, University of Engineering and Technology, Lahore, Pakistan
Wesam Alsabban: Department of Computer and Network Engineering, College of Computing, Umm Al-Qura University, Makkah, Saudi Arabia
Muhammad Shahbaz: Computer Science Department, University of Engineering and Technology, Lahore, Pakistan
Ali Al-Laith: Computer Science Department, University of Copenhagen, Copenhagen, Denmark
Bassam Almogadwy: Department of Artificial Intelligence and Data Science, Taibah University, Medina, Saudi Arabia
topic Medical imaging
Echocardiography
CNN
Transformers
Heart disease