Deepfake Video Traceability and Authentication via Source Attribution

Bibliographic Details
Main Authors: Canghai Shi, Minglei Qiao, Zhuang Li, Zahid Akhtar, Bin Wang, Meng Han, Tong Qiao
Format: Article
Language: English
Published: Wiley 2025-01-01
Series: IET Biometrics
Online Access: http://dx.doi.org/10.1049/bme2/5687970
Description
Summary: In recent years, deepfake videos have emerged as a significant threat to societal and cybersecurity landscapes. Artificial intelligence (AI) techniques are used to create convincing deepfakes, and the main countermeasure is deepfake detection. Most mainstream detectors today are based on deep neural networks. Such deep-learning detection frameworks face several problems that need to be addressed, for example, dependence on large annotated datasets, lack of interpretability, and limited attention to source traceability. To overcome these limitations, in this paper we propose a novel training-free deepfake detection framework based on interpretable, inherent source attribution. The proposed framework not only distinguishes between real and fake videos but also traces their origins using camera fingerprints. Moreover, we have constructed a new deepfake video dataset captured with 10 distinct camera devices. Experimental evaluations on multiple datasets show that the proposed method attains high detection accuracies (ACCs) comparable to state-of-the-art (SOTA) deep-learning techniques while offering superior traceability. This framework provides a robust and efficient solution for deepfake video authentication and source attribution, thus making it highly adaptable to real-world scenarios.
ISSN: 2047-4946
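
The record does not specify how the camera fingerprints are estimated or matched; a common approach in the source-attribution literature is PRNU-style (photo-response non-uniformity) noise-residual correlation. The sketch below is a minimal illustration of that general idea, not the authors' method: all function names are hypothetical, a Gaussian filter stands in for a proper denoiser, and the threshold tau is an assumed parameter. A probe video whose residual correlates strongly with one camera's fingerprint is attributed to that device; weak correlation with every known fingerprint is consistent with generated (deepfake) content.

```python
# Hypothetical sketch of camera-fingerprint (PRNU-style) source attribution.
# This is NOT the paper's implementation; it illustrates the generic
# noise-residual correlation technique under stated assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter


def noise_residual(frame: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Residual = frame minus a denoised version.

    A Gaussian filter is used here as a stand-in denoiser; PRNU work
    typically uses a wavelet-based denoiser instead.
    """
    frame = frame.astype(np.float64)
    return frame - gaussian_filter(frame, sigma)


def estimate_fingerprint(frames: list[np.ndarray]) -> np.ndarray:
    """Estimate a camera fingerprint by averaging residuals of many
    frames from the same device, then normalizing."""
    fp = np.mean([noise_residual(f) for f in frames], axis=0)
    return (fp - fp.mean()) / (fp.std() + 1e-12)


def correlate(residual: np.ndarray, fingerprint: np.ndarray) -> float:
    """Normalized cross-correlation; higher means more likely the
    residual carries this camera's fingerprint."""
    a = residual - residual.mean()
    b = fingerprint - fingerprint.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b) + 1e-12
    return float((a * b).sum() / denom)


def attribute(video_frames: list[np.ndarray],
              fingerprints: dict[str, np.ndarray],
              tau: float = 0.01):
    """Score a probe video against each known camera fingerprint.

    Returns (best_camera, scores); best_camera is None when no score
    reaches tau, i.e., the video matches no known device -- a signal
    consistent with generated/deepfake content. tau is an assumption.
    """
    probe = np.mean([noise_residual(f) for f in video_frames], axis=0)
    scores = {cam: correlate(probe, fp) for cam, fp in fingerprints.items()}
    best = max(scores, key=scores.get)
    return (best if scores[best] >= tau else None), scores
```

As a usage note, one would first build a fingerprint per device from flat, well-lit reference frames (e.g., `fingerprints["cam_A"] = estimate_fingerprint(ref_frames_A)`), then call `attribute()` on the probe video's frames; such a pipeline is training-free in the sense that no detector network is learned, which matches the framework's stated motivation, though the paper's actual estimator and decision rule may differ.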