Authenticity at Risk: Key Factors in the Generation and Detection of Audio Deepfakes

Bibliographic Details
Main Authors: Alba Martínez-Serrano, Claudia Montero-Ramírez, Carmen Peláez-Moreno
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Applied Sciences
Online Access: https://www.mdpi.com/2076-3417/15/2/558
Description
Summary: Detecting audio deepfakes is crucial to ensure authenticity and security, especially in contexts where audio veracity can have critical implications, such as in the legal, security or human rights domains. Various elements, such as complex acoustic backgrounds, enhance the realism of deepfakes; however, their effect on the processes of creation and detection of deepfakes remains under-explored. This study systematically analyses how factors such as the acoustic environment, user type and signal-to-noise ratio influence the quality and detectability of deepfakes. For this study, we use the WELIVE dataset, which contains audio recordings of 14 female victims of gender-based violence in real and uncontrolled environments. The results indicate that the complexity of the acoustic scene affects both the generation and detection of deepfakes: classifiers, particularly the linear SVM, are more effective in complex acoustic environments, suggesting that simpler acoustic environments may facilitate the generation of more realistic deepfakes and, in turn, make it more difficult for classifiers to detect them. These findings underscore the need to develop adaptive models capable of handling diverse acoustic environments, thus improving detection reliability in dynamic and real-world contexts.
ISSN:2076-3417