Arabic speech recognition using end‐to‐end deep learning

Bibliographic Details
Main Authors: Hamzah A. Alsayadi, Abdelaziz A. Abdelhamid, Islam Hegazy, Zaki T. Fayed
Format: Article
Language: English
Published: Wiley 2021-10-01
Series: IET Signal Processing
Online Access:https://doi.org/10.1049/sil2.12057
Description
Summary: Arabic automatic speech recognition (ASR) methods that produce diacritics can be integrated with other systems more readily than Arabic ASR methods without diacritics. In this work, the application of state‐of‐the‐art end‐to‐end deep learning approaches is investigated to build a robust diacritised Arabic ASR system. These approaches use the Mel‐Frequency Cepstral Coefficients and the log Mel‐scale filter bank energies as acoustic features. To the best of our knowledge, end‐to‐end deep learning approaches have not previously been applied to diacritised Arabic automatic speech recognition. To fill this gap, this work presents a new CTC‐based ASR, a CNN‐LSTM, and an attention‐based end‐to‐end approach for improving diacritised Arabic ASR. In addition, a word‐based language model is employed to achieve better results. The end‐to‐end approaches applied in this work build on two state‐of‐the‐art frameworks, ESPnet and Espresso. These frameworks are trained and tested on the Standard Arabic Single Speaker Corpus (SASSC), which contains 7 h of Modern Standard Arabic speech. Experimental results show that the CNN‐LSTM with attention framework outperforms both conventional ASR and the joint CTC‐attention ASR framework on Arabic speech recognition, achieving a word error rate better by 5.24% and 2.62%, respectively.
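The abstract names two acoustic front‐ends: MFCCs and log Mel‐scale filter bank energies. The snippet below is a minimal sketch of how such features are commonly extracted; it is not code from the article, and the librosa dependency, 16 kHz sampling rate, 25 ms window, 10 ms hop, 80 Mel bands, and 13 cepstral coefficients are assumed ASR defaults rather than settings reported by the authors.

# Minimal sketch (not from the article) of the two feature types the
# abstract describes. All frame settings below are assumed defaults.
import librosa

def extract_features(wav_path: str, sr: int = 16000):
    """Return (mfcc, log_fbank), each shaped (n_features, n_frames)."""
    y, sr = librosa.load(wav_path, sr=sr)
    n_fft = int(0.025 * sr)   # 25 ms analysis window
    hop = int(0.010 * sr)     # 10 ms frame shift

    # Log Mel-scale filter bank energies
    mel = librosa.feature.melspectrogram(
        y=y, sr=sr, n_fft=n_fft, hop_length=hop, n_mels=80)
    log_fbank = librosa.power_to_db(mel)

    # MFCCs: a DCT taken over the same log Mel spectrogram
    mfcc = librosa.feature.mfcc(S=log_fbank, n_mfcc=13)
    return mfcc, log_fbank

In practice, ESPnet and Espresso recipes compute such features through their own front‐end pipelines; the sketch only mirrors the feature types the abstract mentions, not the authors' configuration.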
ISSN: 1751-9675
1751-9683