Explainable AI for DeepFake Detection

The surge in technological advancement has raised concerns over its misuse in politics and entertainment, making reliable detection methods essential. This study introduces a deepfake detection technique that enhances interpretability using the network dissection algorithm. The research consists of two stages: (1) detecting forged images with established convolutional neural networks (CNNs) such as ResNet-50, Inception V3, and VGG-16, and (2) applying the network dissection algorithm to understand the models' internal decision-making processes. The CNNs' performance, evaluated through F1-scores ranging from 0.8 to 0.9, demonstrates their effectiveness. By analyzing the facial features the models learn, the study provides explainable results for classifying images as real or fake. This interpretability is crucial to understanding how deepfake detection models operate: although numerous detection models exist, they often lack transparency in their decision-making. This research fills that gap by offering insights into how such models distinguish real from manipulated images, highlighting the importance of interpretability in deep neural networks and providing a better understanding of their hierarchical structures and decision processes.
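
For concreteness, the detection stage can be sketched in code. The snippet below fine-tunes a pretrained ResNet-50 (one of the three backbones named above) as a binary real/fake classifier in PyTorch. This is a minimal illustration rather than the authors' published setup: the dataset path "data/faces", the hyperparameters, and the epoch count are assumptions.

```python
# Stage 1 (sketch): fine-tune a pretrained CNN as a real/fake classifier.
# Assumes an ImageFolder layout with "real" and "fake" subdirectories;
# the path "data/faces" is hypothetical.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),  # input size expected by the backbone
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],  # ImageNet statistics,
                         std=[0.229, 0.224, 0.225]),  # matching the weights
])

dataset = datasets.ImageFolder("data/faces", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)  # new head: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # epoch count is illustrative
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

Scoring such a classifier on a held-out split, e.g. with sklearn.metrics.f1_score, is the kind of evaluation behind the F1-scores of 0.8 to 0.9 quoted above.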

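The interpretability stage rests on network dissection (Bau et al.), which scores individual convolutional units by how strongly their thresholded activation maps overlap annotated concept regions, measured as intersection over union (IoU). The sketch below shows that scoring idea for one unit; the probe image, the facial-part mask, and the per-image percentile threshold are simplifying assumptions, since network dissection proper derives per-unit thresholds from a densely annotated probe dataset.

```python
# Stage 2 (sketch): a network-dissection-style probe for one conv unit.
# Binarize the unit's activation map at a high percentile and score its
# overlap (IoU) with a binary concept mask, e.g. a facial-part segmentation.
import numpy as np
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).eval()

feats = {}
model.layer4.register_forward_hook(  # last conv block of ResNet-50
    lambda module, inputs, output: feats.update(maps=output.detach()))

def unit_concept_iou(activation, concept_mask, percentile=99.5):
    """IoU between one unit's thresholded activation map and a concept mask.

    Note: true network dissection fixes the threshold per unit over a whole
    probe dataset; thresholding per image is a simplification.
    """
    act = F.interpolate(activation[None, None], size=concept_mask.shape,
                        mode="bilinear", align_corners=False)[0, 0].numpy()
    unit_mask = act > np.percentile(act, percentile)
    concept = concept_mask.astype(bool)
    union = np.logical_or(unit_mask, concept).sum()
    return np.logical_and(unit_mask, concept).sum() / union if union else 0.0

# Usage with stand-in data (image and mask are hypothetical):
image = torch.randn(1, 3, 224, 224)       # a preprocessed face image
eye_mask = np.zeros((224, 224), dtype=np.uint8)
eye_mask[60:110, 70:150] = 1              # stand-in "eye region" annotation
with torch.no_grad():
    model(image)                          # hook captures layer4 activations
score = unit_concept_iou(feats["maps"][0, 0], eye_mask)
print(f"unit 0 IoU with eye-region mask: {score:.3f}")
```

A unit that scores a high IoU against, say, an eye-region mask can be read as an "eye detector", which is how the study ties the classifiers' real/fake decisions back to learned facial features.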

Bibliographic Details
Main Authors: Nazneen Mansoor, Alexander I. Iliev (both: Berlin School of Technology, SRH Berlin University of Applied Sciences, D-10587 Berlin, Germany)
Format: Article
Language: English
Published: MDPI AG, 2025-01-01
Series: Applied Sciences
ISSN: 2076-3417
DOI: 10.3390/app15020725
Collection: DOAJ (Directory of Open Access Journals)
Subjects: explainable artificial intelligence; deep learning; deepfake detection; explainability; convolutional neural network; VGG-16
Online Access: https://www.mdpi.com/2076-3417/15/2/725