Comparison of discrete transforms for deep‐neural‐networks‐based speech enhancement
Abstract: In recent studies of speech enhancement, a deep-learning model is trained to predict clean speech spectra from the known noisy spectra of speech. Rather than using the traditional discrete Fourier transform (DFT), this paper considers other well-known transforms to generate the speech spect...
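As a point of reference for the approach the abstract describes, the sketch below shows how framewise spectra of a noisy signal could be computed with the DFT and, as one alternative transform, the DCT-II. This is a minimal illustration only, not the paper's implementation; the frame length, hop size, Hann window, and log-magnitude features are assumptions.

```python
# Minimal sketch (not the paper's code): framewise spectra of a noisy signal
# using the DFT and, as one alternative transform, the DCT-II.
# Frame length, hop size, window, and log-magnitude features are assumptions.
import numpy as np
from scipy.fft import dct

def frame_signal(x, frame_len=512, hop=256):
    """Split a 1-D signal into overlapping Hann-windowed frames."""
    n_frames = 1 + max(0, (len(x) - frame_len) // hop)
    win = np.hanning(frame_len)
    return np.stack([x[i * hop:i * hop + frame_len] * win for i in range(n_frames)])

def dft_features(frames):
    """Log-magnitude DFT spectra, one row per frame."""
    return np.log1p(np.abs(np.fft.rfft(frames, axis=1)))

def dct_features(frames):
    """Log-magnitude DCT-II spectra, a real-valued alternative front-end."""
    return np.log1p(np.abs(dct(frames, type=2, norm='ortho', axis=1)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy = rng.standard_normal(16000)   # 1 s of noise at 16 kHz as a stand-in signal
    frames = frame_signal(noisy)
    print(dft_features(frames).shape)    # (n_frames, 257)
    print(dct_features(frames).shape)    # (n_frames, 512)
```

Either feature matrix could then serve as the input (noisy) or target (clean) representation for a speech-enhancement network.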
| Main Authors: | Wissam A. Jassim, Naomi Harte |
|---|---|
| Format: | Article |
| Language: | English |
| Published: | Wiley, 2022-06-01 |
| Series: | IET Signal Processing |
| Online Access: | https://doi.org/10.1049/sil2.12109 |
Similar Items
- Speech Enhancement Based on Discrete Wavelet Packet Transform and Itakura-Saito Nonnegative Matrix Factorisation
  by: Houguang LIU, et al. Published: (2020-11-01)
- Deep Neural Network for Supervised Single-Channel Speech Enhancement
  by: Nasir SALEEM, et al. Published: (2019-01-01)
- Three-stage hybrid spiking neural networks fine-tuning for speech enhancement
  by: Nidal Abuhajar, et al. Published: (2025-04-01)
- DCT and Wiener filter based on approach for speech enhancement using a single microphone
  by: OU Shi-feng, et al. Published: (2006-01-01)
- Indonesian Voice Cloning Text-to-Speech System With Vall-E-Based Model and Speech Enhancement
  by: Hizkia Raditya Pratama Roosadi, et al. Published: (2024-01-01)