Cross-Attention Fusion of Visual and Geometric Features for Large-Vocabulary Arabic Lipreading

Bibliographic Details
Main Authors: Samar Daou, Achraf Ben-Hamadou, Ahmed Rekik, Abdelaziz Kallel
Format: Article
Language: English
Published: MDPI AG 2025-01-01
Series: Technologies
Subjects:
Online Access: https://www.mdpi.com/2227-7080/13/1/26
Description
Summary: Lipreading involves recognizing spoken words by analyzing the movements of the lips and surrounding facial area in visual data. It is an emerging research topic with many potential applications, such as human–machine interaction and enhancing audio-based speech recognition. Recent deep learning approaches integrate visual features from the mouth region with lip contours; however, simple fusion methods such as concatenation may not produce an optimal feature representation. In this article, we propose extracting visual features using 3D convolution blocks followed by a ResNet-18, while employing a graph neural network to extract geometric features from tracked lip landmarks. To fuse these complementary features, we introduce a cross-attention mechanism that combines visual and geometric information into an optimal representation of lip movements for lipreading. To validate the approach for Arabic, we introduce the first large-scale Lipreading in the Wild for Arabic (LRW-AR) dataset, consisting of 20,000 videos across 100 word classes spoken by 36 speakers. Experimental results on both the LRW-AR and LRW datasets demonstrate the effectiveness of the approach, achieving accuracies of 85.85% and 89.41%, respectively.
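For readers curious how the cross-attention fusion described in the abstract might look, below is a minimal single-head sketch in NumPy. The frame count, feature dimension, random projection weights, and residual connection are all illustrative assumptions, not the paper's actual implementation: visual per-frame features act as queries that attend over the geometric landmark features serving as keys and values.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 29, 64  # frames per clip and feature dimension (illustrative values)

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class CrossAttentionFusion:
    """Single-head cross-attention: visual queries, geometric keys/values."""

    def __init__(self, d):
        # Random projections stand in for learned weight matrices.
        self.W_q = rng.standard_normal((d, d)) / np.sqrt(d)
        self.W_k = rng.standard_normal((d, d)) / np.sqrt(d)
        self.W_v = rng.standard_normal((d, d)) / np.sqrt(d)
        self.d = d

    def __call__(self, visual, geometric):
        Q = visual @ self.W_q     # (T, d) queries from the visual stream
        K = geometric @ self.W_k  # (T, d) keys from the geometric stream
        V = geometric @ self.W_v  # (T, d) values from the geometric stream
        scores = Q @ K.T / np.sqrt(self.d)  # (T, T) frame-to-frame affinities
        attended = softmax(scores) @ V      # geometric info weighted per visual frame
        return visual + attended            # residual fusion of the two streams

# Stand-ins for the two feature streams described in the abstract:
visual = rng.standard_normal((T, d))     # e.g. 3D-conv + ResNet-18 output
geometric = rng.standard_normal((T, d))  # e.g. GNN embedding of lip landmarks
fused = CrossAttentionFusion(d)(visual, geometric)
print(fused.shape)  # (29, 64)
```

The fused sequence keeps the visual stream's temporal shape, so it can feed directly into whatever temporal back-end classifies the word; the residual connection is one common design choice that lets the model fall back on purely visual features when the landmark stream is uninformative.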
ISSN: 2227-7080